Computer Games Technology 9781774691762, 9781774693551

This book covers different topics from computer gaming technology, including educational and simulation games, games hardware, games software and features, and computer games for social and health purposes.


English; xxiv, 515 pages [542]; 2022


Table of Contents:
Cover
Title Page
Copyright
DECLARATION
ABOUT THE EDITOR
TABLE OF CONTENTS
List of Contributors
List of Abbreviations
Preface
Section 1: Educational and Simulation Games
Chapter 1 SIDH: A Game-Based Architecture for a Training Simulator
Abstract
Introduction
Background
SIDH: An Architecture for a Game-Based Simulator
Fidelity in SIDH
Conclusion
Acknowledgments
References
Chapter 2 An Application of a Game Development Framework in Higher Education
Abstract
Introduction
Game Development Frameworks in Higher Education
Case Study: Applying a GDF in a Software Architecture Course
Experiences of Using GDF in Software Architecture
Related Work
Conclusion and Future Work
Acknowledgments
References
Chapter 3 Experiential Learning in Vehicle Dynamics Education via Motion Simulation and Interactive Gaming
Abstract
Introduction
Experiential Learning
Serious Gaming for Education
Driving Simulator Game for Experiential Learning
Gaming Implementation
General Insights and Conclusions
Acknowledgments
References
Chapter 4 Development of a Driving Simulator with Analyzing Driver’s Characteristics Based on a Virtual Reality Head Mounted Display
Abstract
Introduction
Advantages and Disadvantages of Driving Simulations
Research Hypothesis
Experimental Setup
Experimental Results and Discussion
Conclusions
References
Section 2: Games Hardware
Chapter 5 Fast and Reliable Mouse Picking Using Graphics Hardware
Abstract
Introduction
Related Work
Hardware Accelerated Picking
Experimental Results and Discussion
Conclusions and Future Work
Acknowledgments
References
Chapter 6 Ballooning Graphics Memory Space in Full GPU Virtualization Environments
Abstract
Introduction
Background and Motivation
Performance Evaluation
Related Works
Conclusion and Future Works
Acknowledgments
References
Chapter 7 Platform for Distributed 3D Gaming
Abstract
Introduction
Gaming Platforms Analysis: State of the Art in Consoles, PC and Set Top Boxes
Games@Large Framework
Games@Large Framework Components
Experimental Results
Conclusions
Acknowledgments
References
Chapter 8 Player Profile Management on NFC Smart Card for Multiplayer Ubiquitous Games
Abstract
Introduction
Player Profile Definition for MUG (MUGPP)
Smart Cards in the Management of User Profiles
Playing MUGs with NFC Smart Cards
Architecture to Manage MUGPP on an NFC Smart Card
MUG PPM Use Cases
Conclusions and Perspectives
References
Chapter 9 Real-Time Large Crowd Rendering with Efficient Character and Instance Management on GPU
Abstract
Introduction
Related Work
System Overview
Fundamentals of LOD Selection and Character Animation
Source Character and Instance Management
Experiment and Analysis
Conclusion
Acknowledgments
References
Section 3: Games Software and Features
Chapter 10 Gamer’s Facial Cloning for Online Interactive Games
Abstract
Introduction
Previous and Related Work
Preliminary Concepts
Face Analysis
Face Synthesis
Interactive System
Conclusions
Acknowledgments
References
Chapter 11 A Quantisation of Cognitive Learning Process by Computer Graphics-Games: Towards More Efficient Learning Models
Abstract
Introduction
Methods and Materials
Results and Discussion
Conclusion and Future Works
References
Chapter 12 Real Time Animation of Trees Based on BBSC in Computer Games
Abstract
Introduction
BBSC-Based Tree Modeling
Model for Physical Simulation of Wind
Animation of BBSC-Based Trees
Results and Conclusions
Conclusions
Acknowledgment
References
Chapter 13 Knowledge Encoding in Game Mechanics: Transfer-Oriented Knowledge Learning in Desktop-3D and VR
Abstract
Introduction
Related Work
Gamified Knowledge Encoding
Gamified Training Environment for Affine Transformations
Experimental Design
Measures
Results
Discussion
Conclusion and Future Work
Acknowledgments
References
Section 4: Games for Social and Health Purposes
Chapter 14 Hall of Heroes: A Digital Game for Social Skills Training with Young Adolescents
Abstract
Introduction
Methods
Results and Discussion
Conclusion
References
Chapter 15 Kinect-Based Exergames Tailored to Parkinson Patients
Abstract
Introduction
Materials and Methods in the Design and Implementation of a Kinect-Based Interactive 3D Exergame Platform
The Balloon Goon Game
The Slope Creep Game
Discussion and Future Work
References
Chapter 16 Development of a Gesture-Based Game Applying Participatory Design to Reflect Values of Manual Wheelchair Users
Abstract
Introduction
Value-Guided Approaches to Design
Designing a Gesture-Based Game with Wheelchair Users
Evaluating the Enjoyment of the Game
Discussion
Limitations
Conclusions
References
Chapter 17 Using the Revised Bloom Taxonomy to Analyze Psychotherapeutic Games
Abstract
Introduction
Methodology
Discussion
Conclusion
References
Index
Back Cover


Computer Games Technology

Edited by: Jovan Pehcevski

Arcler Press

www.arclerpress.com

Computer Games Technology Jovan Pehcevski

Arcler Press 224 Shoreacres Road Burlington, ON L7L 2H2 Canada www.arclerpress.com Email: [email protected]

eBook Edition ISBN: 978-1-77469-355-1 (eBook)

This book contains information obtained from highly regarded resources. Reprinted material sources are indicated. Copyright for individual articles remains with the authors as indicated and is published under a Creative Commons License. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data; the views articulated in the chapters are those of the individual contributors and not necessarily those of the editors or publishers. The editors and publishers are not responsible for the accuracy of the information in the published chapters or the consequences of its use. The publisher assumes no responsibility for any damage or grievance to persons or property arising out of the use of any materials, instructions, methods, or thoughts in the book. The editors and the publisher have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission has not been obtained. If any copyright holder has not been acknowledged, please write to us so that we may rectify it. Notice: Registered trademarks of products or corporate names are used only for explanation and identification, without intent to infringe.

© 2022 Arcler Press
ISBN: 978-1-77469-176-2 (Hardcover)

Arcler Press publishes a wide variety of books and eBooks. For more information about Arcler Press and its products, visit our website at www.arclerpress.com

DECLARATION

Some content and chapters in this book are open-access, copyright-free published research works, which are published under a Creative Commons License and are indicated with their citations. We are thankful to the publishers and authors of this content, as without them this book would not have been possible.

ABOUT THE EDITOR

Jovan obtained his PhD in Computer Science from RMIT University in Melbourne, Australia in 2007. His research interests include big data, business intelligence and predictive analytics, data and information science, information retrieval, XML, web services and service-oriented architectures, and relational and NoSQL database systems. He has published over 30 journal and conference papers and he also serves as a journal and conference reviewer. He is currently working as a Dean and Associate Professor at European University in Skopje, Macedonia.

TABLE OF CONTENTS

List of Contributors .............................. xv
List of Abbreviations .............................. xxi
Preface .............................. xxiii

Section 1: Educational and Simulation Games

Chapter 1

SIDH: A Game-Based Architecture for a Training Simulator .............................. 3
Abstract .............................. 3
Introduction .............................. 4
Background .............................. 5
SIDH: An Architecture for a Game-Based Simulator .............................. 10
Fidelity in SIDH .............................. 19
Conclusion .............................. 21
Acknowledgments .............................. 23
References .............................. 24

Chapter 2

An Application of a Game Development Framework in Higher Education .............................. 27
Abstract .............................. 27
Introduction .............................. 28
Game Development Frameworks in Higher Education .............................. 29
Case Study: Applying a GDF in a Software Architecture Course .............................. 34
Experiences of Using GDF in Software Architecture .............................. 45
Related Work .............................. 48
Conclusion and Future Work .............................. 51
Acknowledgments .............................. 51
References .............................. 52

Chapter 3

Experiential Learning in Vehicle Dynamics Education via Motion Simulation and Interactive Gaming .............................. 57
Abstract .............................. 57
Introduction .............................. 58
Experiential Learning .............................. 59
Serious Gaming for Education .............................. 62
Driving Simulator Game for Experiential Learning .............................. 64
Gaming Implementation .............................. 72
General Insights and Conclusions .............................. 89
Acknowledgments .............................. 90
References .............................. 91

Chapter 4

Development of a Driving Simulator with Analyzing Driver’s Characteristics Based on a Virtual Reality Head Mounted Display .............................. 95
Abstract .............................. 95
Introduction .............................. 96
Advantages and Disadvantages of Driving Simulations .............................. 99
Research Hypothesis .............................. 100
Experimental Setup .............................. 101
Experimental Results and Discussion .............................. 104
Conclusions .............................. 111
References .............................. 113

Section 2: Games Hardware

Chapter 5

Fast and Reliable Mouse Picking Using Graphics Hardware .............................. 117
Abstract .............................. 117
Introduction .............................. 118
Related Work .............................. 119
Hardware Accelerated Picking .............................. 120
Experimental Results and Discussion .............................. 125
Conclusions and Future Work .............................. 130
Acknowledgments .............................. 132
References .............................. 133


Chapter 6

Ballooning Graphics Memory Space in Full GPU Virtualization Environments .............................. 137
Abstract .............................. 137
Introduction .............................. 138
Background and Motivation .............................. 141
Performance Evaluation .............................. 150
Related Works .............................. 156
Conclusion and Future Works .............................. 158
Acknowledgments .............................. 158
References .............................. 159

Chapter 7

Platform for Distributed 3D Gaming .............................. 163
Abstract .............................. 164
Introduction .............................. 164
Gaming Platforms Analysis: State of the Art in Consoles, PC and Set Top Boxes .............................. 166
Games@Large Framework .............................. 171
Games@Large Framework Components .............................. 173
Experimental Results .............................. 187
Conclusions .............................. 194
Acknowledgments .............................. 195
References .............................. 196

Chapter 8

Player Profile Management on NFC Smart Card for Multiplayer Ubiquitous Games .............................. 199
Abstract .............................. 199
Introduction .............................. 200
Player Profile Definition for MUG (MUGPP) .............................. 202
Smart Cards in the Management of User Profiles .............................. 204
Playing MUGs with NFC Smart Cards .............................. 206
Architecture to Manage MUGPP on an NFC Smart Card .............................. 208
MUG PPM Use Cases .............................. 214
Conclusions and Perspectives .............................. 217
References .............................. 219


Chapter 9

Real-Time Large Crowd Rendering with Efficient Character and Instance Management on GPU .............................. 223
Abstract .............................. 223
Introduction .............................. 224
Related Work .............................. 225
System Overview .............................. 229
Fundamentals of LOD Selection and Character Animation .............................. 230
Source Character and Instance Management .............................. 233
Experiment and Analysis .............................. 240
Conclusion .............................. 246
Acknowledgments .............................. 247
References .............................. 248

Section 3: Games Software and Features

Chapter 10 Gamer’s Facial Cloning for Online Interactive Games .............................. 255
Abstract .............................. 255
Introduction .............................. 256
Previous and Related Work .............................. 259
Preliminary Concepts .............................. 263
Face Analysis .............................. 268
Face Synthesis .............................. 282
Interactive System .............................. 286
Conclusions .............................. 289
Acknowledgments .............................. 291
References .............................. 292

Chapter 11 A Quantisation of Cognitive Learning Process by Computer Graphics-Games: Towards More Efficient Learning Models .............................. 297
Abstract .............................. 297
Introduction .............................. 298
Methods and Materials .............................. 302
Results and Discussion .............................. 307
Conclusion and Future Works .............................. 313
References .............................. 315


Chapter 12 Real Time Animation of Trees Based on BBSC in Computer Games .............................. 321
Abstract .............................. 321
Introduction .............................. 322
BBSC-Based Tree Modeling .............................. 323
Model for Physical Simulation of Wind .............................. 329
Animation of BBSC-Based Trees .............................. 331
Results and Conclusions .............................. 336
Conclusions .............................. 340
Acknowledgment .............................. 340
References .............................. 341

Chapter 13 Knowledge Encoding in Game Mechanics: Transfer-Oriented Knowledge Learning in Desktop-3D and VR .............................. 343
Abstract .............................. 343
Introduction .............................. 344
Related Work .............................. 345
Gamified Knowledge Encoding .............................. 348
Gamified Training Environment for Affine Transformations .............................. 352
Experimental Design .............................. 362
Measures .............................. 363
Results .............................. 365
Discussion .............................. 368
Conclusion and Future Work .............................. 371
Acknowledgments .............................. 372
References .............................. 373

Section 4: Games for Social and Health Purposes

Chapter 14 Hall of Heroes: A Digital Game for Social Skills Training with Young Adolescents .............................. 385
Abstract .............................. 385
Introduction .............................. 386
Methods .............................. 394
Results and Discussion .............................. 396
Conclusion .............................. 401
References .............................. 405


Chapter 15 Kinect-Based Exergames Tailored to Parkinson Patients .............................. 411
Abstract .............................. 411
Introduction .............................. 412
Materials and Methods in the Design and Implementation of a Kinect-Based Interactive 3D Exergame Platform .............................. 416
The Balloon Goon Game .............................. 418
The Slope Creep Game .............................. 425
Discussion and Future Work .............................. 432
References .............................. 436

Chapter 16 Development of a Gesture-Based Game Applying Participatory Design to Reflect Values of Manual Wheelchair Users .............................. 441
Abstract .............................. 441
Introduction .............................. 442
Value-Guided Approaches to Design .............................. 444
Designing a Gesture-Based Game with Wheelchair Users .............................. 446
Evaluating the Enjoyment of the Game .............................. 462
Discussion .............................. 472
Limitations .............................. 476
Conclusions .............................. 478
References .............................. 480

Chapter 17 Using the Revised Bloom Taxonomy to Analyze Psychotherapeutic Games .............................. 485
Abstract .............................. 485
Introduction .............................. 486
Methodology .............................. 487
Discussion .............................. 499
Conclusion .............................. 500
References .............................. 503

Index .............................. 507


LIST OF CONTRIBUTORS

P. Backlund
InGaMe Lab Research Group, School of Humanities and Informatics, University of Skövde, 54128 Skövde, Sweden

H. Engström
InGaMe Lab Research Group, School of Humanities and Informatics, University of Skövde, 54128 Skövde, Sweden

M. Gustavsson
InGaMe Lab Research Group, School of Humanities and Informatics, University of Skövde, 54128 Skövde, Sweden

M. Johannesson
InGaMe Lab Research Group, School of Humanities and Informatics, University of Skövde, 54128 Skövde, Sweden

M. Lebram
InGaMe Lab Research Group, School of Humanities and Informatics, University of Skövde, 54128 Skövde, Sweden

E. Sjörs
InGaMe Lab Research Group, School of Humanities and Informatics, University of Skövde, 54128 Skövde, Sweden

Alf Inge Wang
Department of Computer and Information Science, Norwegian University of Science and Technology, 7491 Trondheim, Norway

Bian Wu
Department of Computer and Information Science, Norwegian University of Science and Technology, 7491 Trondheim, Norway

Kevin Hulme
NY State Center for Engineering, Design and Industrial Innovation, University at Buffalo-SUNY, Buffalo, NY 14260, USA

Edward Kasprzak
Milliken Research Associates, Buffalo, NY 14260, USA

Ken English
NY State Center for Engineering, Design and Industrial Innovation, University at Buffalo-SUNY, Buffalo, NY 14260, USA

Deborah Moore-Russo
Graduate School of Education, University at Buffalo-SUNY, Buffalo, NY 14260, USA

Kemper Lewis
Department of Mechanical and Aerospace Engineering, University at Buffalo-SUNY, Buffalo, NY 14260, USA

Seyyed Meisam Taheri
Department of Human and Information Systems Engineering, Gifu University, Gifu, Japan

Kojiro Matsushita
Department of Human and Information Systems Engineering, Gifu University, Gifu, Japan

Minoru Sasaki
Department of Human and Information Systems Engineering, Gifu University, Gifu, Japan

Hanli Zhao
State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310027, China

Xiaogang Jin
State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310027, China

Jianbing Shen
School of Computer Science & Technology, Beijing Institute of Technology, Beijing 10008, China

Shufang Lu
State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310027, China

Younghun Park
Department of Computer Science and Engineering, Sogang University, Seoul 04107, Republic of Korea


Minwoo Gu
Department of Computer Science and Engineering, Sogang University, Seoul 04107, Republic of Korea

Sungyong Park
Department of Computer Science and Engineering, Sogang University, Seoul 04107, Republic of Korea

A. Jurgelionis
Department of Biophysical and Electronic Engineering, University of Genoa, Via Opera Pia 11a, 16145 Genoa, Italy

P. Fechteler
Computer Vision & Graphics, Image Processing Department, Heinrich-Hertz-Institute Berlin, Fraunhofer-Institute for Telecommunications, 10587 Berlin, Germany

P. Eisert
Computer Vision & Graphics, Image Processing Department, Heinrich-Hertz-Institute Berlin, Fraunhofer-Institute for Telecommunications, 10587 Berlin, Germany

F. Bellotti
Department of Biophysical and Electronic Engineering, University of Genoa, Via Opera Pia 11a, 16145 Genoa, Italy

H. David
R&D Department, Exent Technologies Ltd., 25 Bazel Street, P.O. Box 2645, Petach Tikva 49125, Israel

J. P. Laulajainen
Converging Networks Laboratory, VTT Technical Research Centre of Finland, 90571 Oulu, Finland

R. Carmichael
Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6N, UK

V. Poulopoulos
Research Unit 6, Research Academic Computer Technology Institute, N. Kazantzaki, Panepistimioupoli, 26504 Rion, Greece
Computer Engineering and Informatics Department, University of Patras, 26500 Patras, Greece


A. Laikari
Software Architectures and Platforms Department, VTT Technical Research Centre of Finland, 02044 VTT, Espoo, Finland

P. Perälä
Converging Networks Laboratory, VTT Technical Research Centre of Finland, 90571 Oulu, Finland

A. De Gloria
Department of Biophysical and Electronic Engineering, University of Genoa, Via Opera Pia 11a, 16145 Genoa, Italy

C. Bouras
Research Unit 6, Research Academic Computer Technology Institute, N. Kazantzaki, Panepistimioupoli, 26504 Rion, Greece
Computer Engineering and Informatics Department, University of Patras, 26500 Patras, Greece

Romain Pellerin
CNAM-CEDRIC, 292 rue St Martin, 75141 Paris Cedex 03, France
GET-INT, 9 rue Charles Fourier, 91011 Evry Cedex, France

Chen Yan
CNAM-CEDRIC, 292 rue St Martin, 75141 Paris Cedex 03, France

Julien Cordry
CNAM-CEDRIC, 292 rue St Martin, 75141 Paris Cedex 03, France

Eric Gressier-Soudan
CNAM-CEDRIC, 292 rue St Martin, 75141 Paris Cedex 03, France

Yangzi Dong
Department of Computer Science, University of Alabama in Huntsville, Huntsville, AL 35899, USA

Chao Peng
Department of Computer Science, University of Alabama in Huntsville, Huntsville, AL 35899, USA

Abdul Sattar
SUPELEC/IETR, SCEE, Avenue de la Boulaie, 35576 Cesson-Sevigne, France


Nicolas Stoiber
Orange Labs, RD/TECH, 4 rue du Clos Courtel, 35510 Cesson-Sevigne, France

Renaud Seguier
SUPELEC/IETR, SCEE, Avenue de la Boulaie, 35576 Cesson-Sevigne, France

Gaspard Breton
Orange Labs, RD/TECH, 4 rue du Clos Courtel, 35510 Cesson-Sevigne, France

Ahmet Bahadir Orun
School of Computer Science, De Montfort University, Leicester, UK

Huseyin Seker
Department of Computer Science & Digital Technologies, University of Northumbria, Newcastle, UK

John Rose
School of Psychology, University of Birmingham, Birmingham, UK

Armaghan Moemeni
School of Computer Science, De Montfort University, Leicester, UK

Merih Fidan
Lifelong Learning Institution, University of Leicester, Leicester, UK

Xuefeng Ao
College of Information Science and Technology, Beijing Normal University, Beijing 10087, China

Zhongke Wu
College of Information Science and Technology, Beijing Normal University, Beijing 10087, China

Mingquan Zhou
College of Information Science and Technology, Beijing Normal University, Beijing 10087, China

Sebastian Oberdörfer
Human-Computer Interaction, University of Würzburg, Würzburg, Germany

Marc Erich Latoschik
Human-Computer Interaction, University of Würzburg, Würzburg, Germany


LIST OF ABBREVIATIONS

AAM  Active Appearance Model
ATs  Affine Transformations
ANCOVA  Analysis of Covariance
ATM  Appearance Transformation Matrix
ASD  Autistic Spectrum Disorders
BBSC  Ball B-Spline Curve
BAE  Breathing apparatus entry
CAD  Card Acceptance Device
CBT  Cognitive Behavioral Therapy
COTS  Commercial off-the-shelf
CUDA  Compute unified device architecture
CI  Conditional independence
EEG  Electroencephalogram
EHD  Enhanced Handheld Device
FRMP  Fast and reliable mouse picking
FPS  Frames per second
FOG  Freezing-of-gait
GDF  Game development framework
GMs  Game Mechanics
GPU  Graphics Processing Unit
GTTs  Graphics translation tables
HMD  Head-Mounted Display
HLSL  High Level Shading Language
HCI  Human-Computer Interaction
IDE  Integrated development environment
IDV  Interactive Data Visualization
LLI  Language learning impairments
LPS  Local Processing Server
MMRPGs  Massive Multi-player Role Playing Games
MOO  Multiobjective optimization
MUGs  Multiplayer Ubiquitous Games
MANOVA  Multivariate Analysis of Variance
NUI  Natural User Interface
OBC  On-Board Computer
PVM  Parallel Virtual Machine
PD  Parkinson’s disease
PD  Participatory design
PDA  Personal digital assistant
PVR  Personal video recorder
PALO  Player Actions with a Learning Objective
QoS  Quality of Service
RGI  Real world Gaming system Interaction
RVD  Road Vehicle Dynamics
STBs  Set top boxes
SOO  Single objective optimization
SE  Software engineering
SFT  Solution Focused Therapy
VSD  Value-sensitive design
VE  Virtual environment
VNG  Virtual Networked Gaming
VR  Virtual Reality
WLC  Waitlist control


PREFACE

The real boom in e-sports began in the 1990s, after the lifting of restrictions on the use of the Internet for commercial purposes and the emergence of the first professional e-sports tournament organizations around Starcraft, Quake, and Warcraft. The new form of adventure and experience that gaming provides has united players, participants, and followers, leading to the first serious prize pools: around 2000, the prize fund of the tournament organized by the CPL (Cyberathlete Professional League) was worth $15,000. According to a report by Newzoo, a company focused on data analysis of the gaming industry, the global gaming market was predicted to generate revenue of $170 billion in 2021, nearly 10% more than in 2020. The number of players is growing every day and is predicted to reach 3 billion by 2023, a strong indicator that gaming can become a main media channel for brands within the next 5 years. The audience of video games today is far more diverse than the classic gaming stereotypes of the past. Gaming is no longer just one person playing against a computer simulation. Today, in its various forms, gaming can be compared to many established media channels, such as digital (playing video games), social (streaming and influencers), and sports sponsorships (e-sports). Several important aspects must be addressed when developing a new game: game development and programming (Unity, C++, C#, Java), visual appearance and user experience design (UX design using GameMaker, Unreal Engine, Godot, Blender, etc.), and game testing (QA, quality assurance). This edition covers different topics from gaming technology, including: educational and simulation games, games hardware, games software, and computer games for social and health purposes.
Section 1 focuses on educational and simulation games, describing SIDH: a game-based architecture for a training simulator; an application of a game development framework in higher education; experiential learning in vehicle dynamics education via motion simulation and interactive gaming; and development of a driving simulator with analysis of driver characteristics based on a virtual reality head-mounted display. Section 2 focuses on games hardware, describing fast and reliable mouse picking using graphics hardware; ballooning graphics memory space in full GPU virtualization environments; a platform for distributed 3D gaming; player profile management on NFC smart card for multiplayer ubiquitous games; and real-time large crowd rendering with efficient character and instance management on GPU.

Section 3 focuses on games software, describing gamer's facial cloning for online interactive games; quantization of the cognitive learning process by computer graphics games; real-time animation of trees based on BBSC in computer games; a dense point-to-point alignment method for realistic 3D face morphing and animation; and knowledge encoding in game mechanics: transfer-oriented knowledge learning in desktop 3D and VR. Section 4 focuses on games for social and health purposes, describing Hall of Heroes, a digital game for social skills training with young adolescents; Kinect-based exer-games tailored to Parkinson patients; development of a gesture-based game applying participatory design to reflect values of manual wheelchair users; and using the revised Bloom taxonomy to analyze psychotherapeutic games.

SECTION 1: EDUCATIONAL AND SIMULATION GAMES

CHAPTER 1

SIDH: A Game-Based Architecture for a Training Simulator

P. Backlund, H. Engström, M. Gustavsson, M. Johannesson, M. Lebram, and E. Sjörs, InGaMe Lab Research Group, School of Humanities and Informatics, University of Skövde, 54128 Skövde, Sweden

ABSTRACT

Game-based simulators, sometimes referred to as "lightweight" simulators, have benefits such as flexible technology and economic feasibility. In this article, we extend the notion of a game-based simulator by introducing multiple screen view and physical interaction. These features are expected to enhance immersion and fidelity. By utilizing these concepts we have constructed a training simulator for breathing apparatus entry. Game hardware and software have been used to produce the application. More important, the application itself is deliberately designed to be a game. Indeed, one important design goal is to create an entertaining and motivating experience combined with learning goals in order to create a serious game. The system has been evaluated in cooperation with the Swedish Rescue Services Agency to see which architectural features contribute to perceived fidelity. The modes of visualization and interaction as well as level design contribute to the usefulness of the system.

Citation: P. Backlund, H. Engström, M. Gustavsson, M. Johannesson, M. Lebram, E. Sjörs, "SIDH: A Game-Based Architecture for a Training Simulator", International Journal of Computer Games Technology, vol. 2009, Article ID 472672, 9 pages, 2009. https://doi.org/10.1155/2009/472672.

Copyright: © 2009 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

INTRODUCTION

The firefighter profession is stressful and dangerous, exposing its practitioners to physically and psychologically highly demanding tasks in extreme environments [1]. One of the most hazardous tasks is to enter a building on fire and to search for victims. This activity is referred to as breathing apparatus entry (BAE) and requires a systematic and thorough scanning of the building, where sight may be extremely limited due to smoke. This, in combination with the heat from fires, forces the firefighters to hold a low position. When a victim is found, he/she must be dragged to a safe environment before the search can continue.

The training for BAE is traditionally carried out in training areas with buildings of different types where victims are replaced with dummies. These methods have been shown to be effective but they are not optimal for all situations. First of all, they are relatively costly as they require trained instructors and access to a dedicated training area where each type of environment (e.g., hotel, ship, train, gas station, etc.) requires a separate physical model. If the same persons participate in repeated exercises they are likely to become familiar with these models. Simulator training solves the problem of students getting familiar with the physical models, as new virtual models are fairly easy to create. Furthermore, as live training sessions are costly they need to be well prepared to create high value. Hence, virtual environments form an effective and efficient complement and preparation for real-world training [2] in order to increase its quality.

Simulators form a class of systems which emulate the operational environment, often by means of visual representations. Various game-based training systems are sometimes referred to as "lightweight" simulators, as they have attractive properties with respect to technology, economy, and user control. The system presented in this paper extends the concept of game-based training systems by adding multiple screen view and a novel interaction mode. These features add to immersion and increase the feeling of "being there". One issue in relation to all sorts of simulator training, and in particular concerning game-based simulators, is the transfer of skills and abilities to the real operational environment. Fidelity refers to the extent to which the virtual system emulates the real world and is hence expected to be an important aspect of the system.

SIDH has been developed in cooperation between the University of Skövde and the Swedish Rescue Services Agency (SRSA). An important property of SIDH, which differentiates it from traditional simulators, is that it is game-based. Game technology, hardware as well as software, has been used to produce the application. More important, the application itself is deliberately designed to be a game. Indeed, one important design goal is to create an entertaining and motivating experience combined with learning goals. In this way, learning becomes self-motivating and the system may be used for off-hour training. SIDH also utilizes multiple screen view to make the player surrounded by the virtual world and a novel sensor-based interaction mode to put a physical load on the player.

The aim of this paper is to report on the architecture of a game-based simulator and on an evaluation of its perceived fidelity and usefulness. By this we extend the knowledge of how game technology can be utilized for serious games purposes, as well as the practical usefulness of such a system.

BACKGROUND

In this section, we give a brief account of our view on the concepts of serious games and serious gaming, that is, the activity of using games for purposes other than entertainment. We also summarize previous work on firefighter simulations and give a brief background on Swedish firefighter training.

Serious Games Today, the term serious games is becoming more and more popular (see, e.g., [4, 5]). The term itself is established, but there is no current singleton definition of the concept. Zyda [6, page 26] defines a serious game as follows: “a mental contest, played with a computer in accordance with specific rules, that uses entertainment to further government or corporate training, education, health, public policy, and strategic communication objectives.”


Furthermore, Zyda [6] argues that serious games have more than just story, art, and software. It is the addition of pedagogy (activities that educate or instruct, thereby imparting knowledge or skill) that makes games serious. However, he also stresses that pedagogy must be subordinate to story and that the entertainment component comes first. In our view, a serious game is a game whose primary purpose is something other than pure entertainment (whether or not the user is consciously aware of it). A game's purpose may be formulated by the game's designer or by the user her/himself. The desired purpose, that is, a serious game, can be achieved through a spectrum ranging from the mere utilization of game technology for nonentertainment purposes, via the development of dedicated games for some nonentertainment purpose, to the use and/or adaptation of commercial games for nonentertainment purposes, which means that also a commercial off-the-shelf (COTS) game, used for nonentertainment purposes, may be considered a serious game. We also propose that any combination of the above would constitute a feasible way to achieve the desired effect.

Serious games can be applied to a broad spectrum of application areas, for example, defense, government, healthcare, marketing and communications, education, and corporate and industry [4]. A question of interest concerns the claimed positive effects of such games, or of applications from related and sometimes overlapping areas such as e-learning, edutainment, game-based learning, and digital game-based learning. In addition to obvious advantages, like allowing learners to experience situations that are impossible in the real world for reasons of safety, cost, time, and so on [7, 8], serious games, it is argued, can have positive impacts on the players' development of certain skills.

We also note that some of these positive effects of gaming are not necessarily intended by the game designers. According to the review by Mitchell and Savill-Smith [9], analytical and spatial skills, strategic skills and insight, learning and recollection capabilities, psychomotor skills, visual selective attention, and so on may be enhanced by playing computer games. Other reports [10] have also pointed out the positive effects on motor skills. Such effects have been reported, for example, by Enochsson et al. [11], who found a positive correlation between experience in computer games and performance in endoscopic simulation by medical students. The better performance of gamers is attributed to their three-dimensional perception experience from computer gaming. The positive effects of games may hence be further utilized if we can identify the correct content and accurately exploit the user's experience as a driving


force for developing serious games. Swartout and van Lent [12] identify the areas of experience-based systems and experience-based education as promising application areas for game technology. They also point out that even though gaming is not a replacement for simulation, it may well serve as a complement to it. We consider this relation between gaming and simulation to be an interesting one.

Immersive Visualization

There are several ways of visualizing virtual worlds in order to offer a more immersive experience to the viewer than can be achieved using ordinary computer displays. The choice of visualization system depends on the purpose for which it will be used and on what resources, in terms of time, space, and finances, are available. The term head-mounted display (HMD) is used for a category of visualization systems which present a virtual world directly in front of the viewer's eyes, via one or two small displays. Used with a head tracking system, the viewer is able to look at different parts of the world by turning his/her head. There are many different types of HMDs on the market, for instance helmets, goggles, and lightweight glasses. There are also a number of features which may be supported by an HMD device, such as stereoscopic vision and semitransparent displays. A common property of HMDs is,

however, that the virtual world is visible only to the wearer of the device.

A CAVE is an, usually supercomputer-based, environment for visualization of, and interaction with, virtual models and worlds. The first CAVE (Cave Automatic Virtual Environment) was developed by the Electronic Visualization Laboratory [14]. The purpose was to build a system for visualization of scientific data [15]. The original CAVE had three walls, consisting of screens for rear projection, and measures 2.5×2.5×2.5 m. On these walls, images of the virtual world are presented for a viewer whose head movements are tracked in order to correct the projection in real time. The viewer is also able to interact with the virtual world, for instance, by using a special glove. Several variations of CAVE have been built since 1991. One example is the six-sided VR-CUBE which was built at the Royal Institute of Technology in Stockholm in 1998 [16]. Here the viewer is completely surrounded by video and audio from the virtual world, in all directions.


Swedish Firefighter Training

The SRSA is the government authority responsible for the training of fire and rescue operatives. All municipal fire and rescue services staff are trained and certified by the SRSA. BAE is one of the tasks a firefighter has to perform. It is of crucial importance that the firefighter can remain orientated with very limited or no vision in a building. Traditionally the firefighter students practice search methods with and without smoke in different physical buildings.

Firefighting involves dangerous and stressful tasks, and it is important to develop training programs to prepare for such tasks [17]. There exist a number of examples of virtual environments for firefighter training in the literature. Tate et al. [2] present a study where a virtual environment training system was used to prepare for firefighting tasks. The environment allowed users, equipped with an HMD device, to navigate in a virtual model of a US Navy ship. In the evaluation, half of the participants used traditional mission preparation and the other half prepared using the virtual environment. The result of the evaluation showed that the second group had a better performance than the group using traditional preparation.

In a similar manner, St. Julien and Shaw [18] present a firefighter command training environment which allows its users to inspect a house on fire. Perdigau et al. [19] present an application for managing virtual reality scenarios for firefighter training and understanding. There are also examples of commercial simulation software used for firefighter training, for instance the system from VectorCommand Ltd. [20], which allows users to train emergency management.

Fidelity in Games and Visualization

Buy-in, that is, how a person recognizes that an experience is relevant for their training, is related to the amount of fidelity and perceived realism. It is often thought that high fidelity and realism will increase the chances of


buy-in. Alexander et al. [3] describe how realism can be perceived as important even though it has no particular bearing on the task to be trained.

Fidelity and transfer are crucial factors in simulation training. The level of fidelity required depends on the purpose of the training: a lower level of fidelity can be acceptable when the simulation training is part of a larger context, whereas a high level of fidelity is needed when the trained task is too dangerous or too expensive to practice in real life [21]. Fidelity can be divided into three categories: physical, functional, and psychological [3]. Physical fidelity refers to the degree to which the simulation looks, sounds, and feels like the operational environment in terms of visual display, controls, and audio. Functional fidelity refers to the degree to which the simulation acts like the operational equipment in reacting to the tasks executed by the trainee. Psychological fidelity refers to the degree to which the simulation replicates the psychological factors (i.e., stress, fear) experienced in the real-world environment, engaging the trainee in the same manner as the actual equipment would in the real world [3].

A major issue concerning simulation training or game-based training is the absence of the surrounding context in which the knowledge will be used. According to Alexander et al. [3], real environments create stress and excitement that cannot be fully recreated in a virtual environment. When it comes to learning new abilities it can sometimes be necessary to try to reach the same levels of stress and excitement in the simulator as in the "real" world; this is what psychological fidelity captures. It has been argued, in line with this reasoning, that both negative stress and positive stress improve retention and transfer of knowledge from the virtual environment to the "real" world. In a study of game-based training and its effects on American soldiers by Morris et al. [22], it emerged that soldiers who were exposed to stress during the military training performed better in a "real" situation than the soldiers who were not exposed to stress. In fact, the group that was exposed to stress got considerably higher points during the following practice. The higher points were due to the fact that they managed to stay calm and act in a structured way under pressure. A central question is thus how complexity and graphic realism relate to how well the users will be able to use the things they learn in the virtual reality situation in the "real" world. According to Alexander et al. [3], the psychological component forms the largest part of fidelity; it depends on how well the simulator can recreate the psychological factors, that is, stress and fear, which the user later will meet in the "real" world [3]. The psychological component is difficult to measure as it is a rather abstract element.

SIDH: AN ARCHITECTURE FOR A GAME-BASED SIMULATOR

The main goal with SIDH was to develop a cave-based simulator game for training some aspects of BAE. The project was carried out by expertise in game development and serious games, in close cooperation with expertise in firefighter training from the SRSA. The game is not intended to replace conventional training, but rather to be used as a complement with focus on search strategies and the ability to orient in unknown premises.

The game is to resemble real-world BAE in as many aspects as possible. This means that the player's task is to completely search different premises, with higher or lower complexity of the architecture, and rescue any victims found. The player cannot, for obvious reasons, be exposed to real smoke; instead, the limited sight caused by smoke is simulated in the virtual environment. The player should also be exposed to heat and physical and psychological stresses, which are characteristic parts of BAE. Furthermore, in respect of the educational purpose, it is crucial to record the player's actions during a game session for later analysis.

To reduce overhead time in the development process, it was decided that the game should be built on a modified COTS game. An evaluation was committed in order to choose among three common first-person games, FarCry being one of them. In a first-person game, the world is rendered through a virtual camera placed at the eyes of the player (or the spectator). The position and orientation of this camera decides what is shown on the screen. By setting the orientation of a dedicated camera for each screen of the cave, each screen can show the world from that angle, regardless of the player orientation. Each view is thus rotated 90 degrees with respect to the adjacent screens. To produce a correct projection on the four screens, the field of view (FOV) of the cameras was adjusted to 90 degrees. This is important since two adjacent views will overlap with a greater FOV value, and with a lesser value all parts of the world will not be visible at the same time.
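As a rough illustration of the geometry described above (a sketch, not code from the paper; only the screen count and the 90-degree angles are taken from the text), the per-screen camera yaw and the required FOV can be computed as:

```python
def cave_camera_yaws(num_screens: int = 4) -> list:
    """Yaw angle in degrees for each screen's camera; adjacent views differ by 360/n."""
    step = 360.0 / num_screens
    return [i * step for i in range(num_screens)]

def required_fov(num_screens: int = 4) -> float:
    """Horizontal FOV that makes adjacent views abut exactly.

    A larger FOV makes neighbouring views overlap; a smaller one
    leaves parts of the world invisible, as noted in the text.
    """
    return 360.0 / num_screens

yaws = cave_camera_yaws()   # four cameras at 0, 90, 180, 270 degrees
fov = required_fov()        # 90 degrees for a four-wall cave
```

The same relation generalizes to any number of surrounding screens, which is why the FOV must match 360 divided by the screen count exactly.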

Interaction Model

One of the goals with SIDH has been to make the player's interaction with the game as natural as possible. The player has to be able to move and look around in the virtual world, crouching as well as in a raised position. Moreover, it has to be possible to pick up and drop victims and control the game flow.

Navigation

One problem with navigating in a cave environment is the lack of obvious directions in terms of forward, backward, left, and right. Normally, when playing a first-person game, forward is defined by the center of the display, that is, the direction in which the player is currently looking. Pressing the forward key on the keyboard will always move the player in the desired direction since the world is rotating around the player and the keyboard. In the cave, however, it is not possible to use a single forward key since the directions are fixed to the real world. Hence, in order to decide what direction is forward for a cave player it is necessary for the software to keep track of the player's current orientation. In SIDH, this problem is solved by using a GameTrak [25], which is a USB device working as a joystick


with a string to pull as its Z-axis. The GameTrak [25] is mounted centrally at the top of the cave, facing down, with the string attached to the fog fighter nozzle (Figure 2). In this way, the nozzle's position in 3D space can be calculated. Since there may be some discrepancy between the direction expected by the player and the calculated direction (due to player deviation from the center of the cave), the calculated forward direction is visualized as a marker on the screens.

Figure 2: The GameTrak device (top) with the string attached to the nozzle (bottom right). N.B., only one of the GameTrak's [25] two controls is used.
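The forward-direction idea above can be sketched as follows; this is a hypothetical illustration (projecting the tracked nozzle position onto the horizontal plane and normalizing it), not the authors' implementation:

```python
import math

def forward_direction(nozzle_x: float, nozzle_y: float):
    """Horizontal unit vector from the cave centre towards the tracked nozzle.

    nozzle_x / nozzle_y: nozzle position relative to the centre of the cave;
    the string length (the GameTrak's Z-axis) is not needed for the heading.
    Returns (0.0, 0.0) when the nozzle is at the centre and no heading exists.
    """
    length = math.hypot(nozzle_x, nozzle_y)
    if length < 1e-9:
        return (0.0, 0.0)
    return (nozzle_x / length, nozzle_y / length)

# Nozzle held towards one corner of the cave: forward points that way.
fx, fy = forward_direction(1.0, 1.0)
```

Visualizing this computed vector as an on-screen marker, as the text describes, lets the player correct for the discrepancy that arises when they drift away from the cave centre.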

In SIDH, a door is a dynamic object which may be activated when the player approaches it. All levels in the SIDH game are built from a common set of building blocks. Important elements of game functionality are recycled in all levels, thus reducing the overhead when creating new levels.

Thirteen different levels (Table 1) have been developed in close cooperation with the SRSA. There is also a tutorial, in which the player gets to learn the game and the interaction mode, and two bonus levels. The game covers several types of premises, such as apartments, a garage, a hotel, and a grocery store, which constitute a relevant sample of environments for rescue personnel to train in. Each level provides new challenges with respect to size, complexity, and smoke. The levels are ordered so that the difficulty increases as the player advances. Later levels also contain challenges in terms of unexpected situations, such as victims hidden in unexpected places. The environment also contains sound effects, such as sirens, screaming victims, and explosions.

Table 1: Game levels and their learning objectives

Level | Description | Objective
0 | Tutorial | To handle the interaction model and game goals
1 | Two-room apartment | …
2 | Club building | To realize that time is limited
3 | Garage | To realize the advantage of crouching in smoke
4 | Bonus level | To get a break
5 | One-room apartment | To look in closets
6 | Youth hostel | To handle a large number of rooms
7 | … | To handle complex corridor architecture, closets
8 | Grocery store | To handle areas divided by shelves
9 | Three-room apartment | …
10 | Large apartment | To handle extremely limited sight in complex architecture
11 | Basement storage area | …
12 | Four-room apartment | To handle doors at unexpected positions
13 | Hotel corridor | To handle numerous identical doors
14 | Butcher shop | To handle veiled areas
15 | Bonus level | …

Each level starts with a female virtual instructor introducing the game task. When a level has been played, the player enters a debrief room (Figure 4) where the player receives oral and visual feedback concerning his/her result.

Figure 4: The debrief room, in which the player receives oral and visual feedback on the result of the game session.


A level is completed when all rooms and areas have been scanned and all victims rescued within the time limit and with preserved player stamina. If the result is unsatisfying, the virtual instructor will inform the player of this and the player gets a new chance. The number of trials on each level is limited to three in order to expose the player to a variety of environments. When a level is successfully completed the player will get a score based on the proportion of the facilities that has been searched, the time spent on the task, and the number of victims rescued. If the proportion covered by the search exceeds 90%, the player will be awarded a star bonus which will also generate a bonus score on the following level.
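A toy version of this scoring rule might look as follows; the weights are invented for illustration, and only the score ingredients and the 90% star-bonus threshold come from the text:

```python
def level_score(searched_fraction: float, time_spent_s: float,
                victims_rescued: int, carried_star_bonus: bool = False):
    """Return (score, earned_star) for a completed level.

    searched_fraction: proportion of the premises scanned, 0.0-1.0.
    The numeric weights below are illustrative, not from the paper.
    """
    score = 1000.0 * searched_fraction      # coverage dominates the score
    score += 100.0 * victims_rescued        # reward rescues
    score -= 1.0 * time_spent_s             # faster searches score higher
    if carried_star_bonus:
        score += 250.0                      # bonus carried from the previous level
    earned_star = searched_fraction > 0.90  # star threshold stated in the text
    return score, earned_star
```

The carried bonus models the text's statement that a star earned on one level also generates a bonus score on the following level.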

Game Modifications

Besides implementing cave visualization and the interaction model, the source code had to be modified in order to implement required features and the specialized game flow. An example of such a modification concerns the player's health level, which is constantly decreasing when acting in smoke-filled areas. When the player is in the room where the instructor is, that is, outside the hostile premises, the health level is recharged.

Furthermore, a supervising feature had to be implemented. SIDH uses the game engine's native functionality for recording metadata of a game to a demo file, which contains the data needed to reproduce the session on the screen when played. Hence, the feature works like screen capture except that the recording does not contain any image data, which would require a lot more storage space. The compact format allows SIDH to record every game session played. In addition to the engine's native recording, logging functionality concerning the state and the activities of the player has been implemented. The player's position, orientation, and health level are logged ten times per second, and the markings are logged when committed. Together with information about the level geometry, this data is processed by an external tool which has been developed. This tool creates an orthographic image which shows not only what parts of the level the player has visited but also what parts actually have been seen. The algorithm for deciding the player's sight is based on information about the player's orientation, whether there is smoke at the current position, the density of the smoke, and whether the player is crouching or not. The image created is loaded into the game as a texture which is shown in the debrief room described above. The tool can also be


used standalone to present an orthographic animation of the player's actions during the game.

It should be noted that the specialized interaction model and cave visualization are implemented as additions to the original game features. Hence, it is possible to play SIDH on a single PC using standard input devices.
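The supervision log and the sight decision described above could be sketched as follows; the sampling rate (10 Hz) and the inputs to the sight decision (orientation, smoke presence and density, crouching) come from the text, while the numeric model is invented for illustration:

```python
from dataclasses import dataclass

LOG_RATE_HZ = 10  # player state is logged ten times per second

@dataclass
class PlayerSample:
    """One logged sample of the player's state."""
    x: float
    y: float
    yaw_deg: float
    health: float
    crouching: bool

def sight_range(in_smoke: bool, smoke_density: float, crouching: bool) -> float:
    """Toy estimate of how far the player can see, in metres.

    Smoke shortens sight; crouching helps because the smoke layer sinks
    from the ceiling, so a crouching player sees farther in smoke.
    """
    if not in_smoke:
        return 20.0
    penalty = smoke_density * (6.0 if crouching else 12.0)
    return max(20.0 - penalty, 0.5)
```

Replaying such samples against the level geometry is what lets the external tool render an orthographic map of which areas were visited and which were actually seen.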

FIDELITY IN SIDH

This section reports on the results of an analysis of the perceived usefulness of the game as a training tool with respect to its fidelity. Data was collected from 32 students and one teacher during spring of 2008. The students participated in two questionnaires; the first one asked them about their expectations on simulator training and the second one captured their opinion after the simulator session. We observed training sessions for 11 students and carried out interviews with 10 students. Finally, the teacher in charge of the training sessions was interviewed. All data was compiled and we used a qualitative analysis approach to categorize it [26]. We focus on the aspect of fidelity even though complementary data was collected.

The first questionnaire asked for general data such as age and gender. We also asked the following questions (translated from Swedish): "How much do you play computer games in your spare time? How big is your experience of computer games? Do you think that it is possible to learn things from computer games? What is your prior knowledge of BAE? What is your prior knowledge of simulator training? In what ways do you think that simulator training can affect your performance in live exercises? In what other ways do you think that games and simulators can be used in your training?"

In the follow-up questionnaire we asked the students questions about how their perception of the concept had changed. We also asked the following questions about their perception of the sessions and the usefulness of the simulator: "What is needed for the simulator to be useful? In which situations should simulator training be used? Would you consider using the simulator off-hours? What did you learn from your sessions? Did you learn anything which was useful during the live exercises? Do you think that the effect would be the same from playing in a single screen mode on a PC? What is the teacher's role during simulator training?"


The interviews were carried out following an interview guide which was adapted depending on the answers from the questionnaire. The questions concerned the participants' opinions on transfer (i.e., to what extent can knowledge from the simulator sessions be transferred to other situations?); fidelity (i.e., to what extent does the appearance and behavior of the simulator support its perceived usefulness?); and motivation (i.e., in what ways can games and simulators affect motivation for training in a general sense?).

Fidelity is perceived to be a crucial part of the learning aspect and a precondition for taking the exercise seriously. We also claim that there is a connection between the construction of the simulator and what level of seriousness it will be perceived to have. One recurring opinion among the students is that the simulator supports the learning since it constitutes a chance to operate in a "close to real life" environment. In general, there was a positive perception of SIDH in terms of graphical appearance, sound, and user interaction. However, the teacher, with his long experience of live training, was more inclined to see it as a complementary tool to live training. He highlights the fact that SIDH is useful to mediate search strategies for different types of environments.

The psychological strain was an important learning factor during the simulator training sessions. The students maintained that they learned how they, as individuals, react under stress and time pressure, and that this insight is valuable. Our observations also indicate that the simulator sessions require the person to be structured and calm during the minutes that they spend in the simulator. This is one important aspect of correctly carrying out a search strategy. Hence, our results indicate that the simulator recreates some of the stress and time pressure present in the original task. Game features, such as time pressure and the pressure of saving lives, may hence be utilized to increase psychological fidelity.

One student suggested that simulator sessions should be used between the theoretical parts of the course and live training. He claimed that by doing this the student would be given a smoother transitional stage between the two parts. From a pedagogical perspective it makes sense that the students prepare for live training in order to increase its quality. This approach helps the students to get better results in live training since they got a chance to


practice in the simulator beforehand.

The generation of smoke in SIDH has some resemblance to real smoke. The smoke spreads from the ceiling and down, making it necessary to crouch in order to get a better view. Even though it is visually similar, some of the participants commented negatively on the naturalness of the smoke, claiming that sight was actually too good. One potentially negative aspect of this is that the students tend to trust visual stimuli at the expense of tactile stimuli, which is actually a poor strategy in many cases.

SIDH is a multi-screen solution where the player physically moves in order to navigate, as opposed to a single-screen solution where the player sits still and the virtual world rotates. This feature, to some extent, adds to immersion. A majority of the observed participants felt a sense of misdirection and got lost. The physical navigation in SIDH also adds to fidelity, as game sessions become physically straining. The current version of SIDH […]

References

11. […] endoscopy,” Journal of Gastrointestinal Surgery, vol. 8, no. 7, pp. 874–880, 2004.
12. W. Swartout and M. van Lent, “Making a game of system design,” Communications of the ACM, vol. 46, no. 7, pp. 32–39, 2003.
13. B. Sawyer, “The serious games summit: emergent use of interactive games for solving problems is serious effort,” Computers in Entertainment, vol. 2, no. 1, p. 5, 2004.
14. Electronic Visualization Laboratory, 1991, http://www.evl.uic.edu/core.php?mod=4&type=1&indi=161.
15. C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, “Surround-screen projection-based virtual reality: the design and implementation of the CAVE,” in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’93), pp. 135–142, Anaheim, Calif, USA, August 1993.
16. Center for Parallel Computers, “The PDC Cube,” 1998, http://www.pdc.kth.se/projects/vr-cube.
17. […], Fire Engineering, vol. 157, no. 6, pp. 117–119, 2004.
18. […], “[…] environment,” in Proceedings of the Richard Tapia Celebration of Diversity in Computing Conference, pp. 30–33, Atlanta, Ga, USA, October 2003.
19. E. Perdigau, P. Torguet, C. Sanza, and J.-P. Jessel, “A distributed virtual […],” in Proceedings of the 2nd International Conference on Virtual Storytelling (ICVS ’03), vol. 2897 of Lecture Notes in Computer Science, pp. 227–230, Toulouse, France, November 2003.
20. Vector Command Ltd., February 2008, http://www.vectorcommand.com.
21. E. Farmer, J. Rooij, J. Riemersma, P. Jorna, and J. Moraal, Handbook of Simulator-Based Training, Ashgate, Aldershot, UK, 1999.
22. C. S. Morris, P. A. Hancock, and E. C. Shirkey, “Motivational effects of adding context relevant stress in PC-based game training,” Military Psychology, vol. 16, no. 2, pp. 135–147, 2004.
23. J. Linderoth, Datorspelandets mening—bortom idén om den interaktiva illusionen [The Meaning of Computer Gaming—Beyond the Idea of the Interactive Illusion], Doctoral Dissertation, Göteborgs Universitet, Göteborg, Acta Universitatis Gothoburgensis, 2004.
24. Valve Corporation, June 2008, http://orange.half-life2.com.
25. In2Games Ltd., June 2008, http://www.in2games.uk.com.
26. M. Q. Patton, Qualitative Research & Evaluation Methods, Sage, Thousand Oaks, Calif, USA, 3rd edition, 2002.
27. H. Gustavsson, H. Engström, and M. Gustavsson, “A multi-sampling approach for smoke behaviour in real-time graphics,” in Proceedings of the Annual SIGRAD Conference, Skövde, Sweden, November 2006.
28. P. Backlund, H. Engström, C. Hammar, M. Johannesson, and M. Lebram, “SIDH […].”

An Application of a Game Development Framework in Higher Education

[…] GDF and how game development can be integrated with the education process. In particular, we wanted to investigate how the game development project and the GDF would affect the learning of software architecture, with focus on the technical aspects of the GDF. This paper focuses on how the technical aspects of a GDF affect the learning of software architecture, the selection of an appropriate GDF for a software architecture course, and how a GDF can be applied in a software engineering course. The main contribution of this paper is a presentation of a novel GDF concept that can be used in courses that include software development, experiences from actual usage of the GDF, and some course design considerations. The rest of the paper is organized as follows. Section 2 describes and motivates how a GDF can be used in higher education and what criteria should be considered when choosing one. Section 3 describes a case study of applying a GDF in a software architecture course. Section 4 describes experiences from using a GDF in a software course. Section 5 describes similar approaches, and Section 6 concludes the paper.

GAME DEVELOPMENT FRAMEWORKS IN HIGHER EDUCATION This section presents the motivation for applying GDFs in higher education, a model for how GDFs can be integrated with a course, and requirements for how to choose the appropriate GDF for educational purposes.

GDF and Education
The main motivation for introducing a GDF in software engineering (SE) or computer science (CS) courses is to motivate students to put more effort into the software development project in order to improve their software development skills. Game development offers an interesting way of learning and applying the course theory. By introducing a game development project in a course, the students have to establish and describe most of the functional requirements themselves (what the game should be like). This can be a motivating factor especially for group-based projects, as each group will develop a unique application (the game); it will encourage creativity, and it will require different skills from the group members (art, programming, story, audio/music). The


result will be that the students have a stronger feeling of ownership of the project. Furthermore, the students could also learn about game development technology. The main disadvantages of introducing a game development project and a GDF into a SE or CS course are that the students might spend too much time on game-specific issues and that the project results might be difficult to compare. It is critical that the students are motivated to apply a GDF in a course and that they get increased motivation for learning and applying course theory through a game development project. Tom Malone has listed three main characteristics that make things fun to learn: they should provide the appropriate level of challenge, they should use fantasy and abstractions to make it more interesting, and they should trigger the player’s curiosity [11]. These characteristics can be applied directly when developing a game for learning purposes. However, we can also consider these characteristics when introducing a GDF in a SE or CS course. By allowing the students to develop their own games using a GDF, such projects are likely to trigger students’ curiosity as well as provide a challenge for students to design fun games with their knowledge, skills, imagination, and creativity. The level of the challenge can be adjusted according to the project requirements given in courses by the teacher. Thus, the challenge level can not only be adjusted to the right level for most participants, but also tailored for individual differences. As the students will work in groups, group members helping other group members can compensate for the individual differences. An open platform and agile course requirements should be provided for students to design their own games, combined with their ability, fantasy, and comprehension of lecture content.

The use of a GDF can also be a motivating initiative in courses to learn about various topics such as software requirements, software design, software architecture, programming, 2D and 3D graphic representation, graphic programming, artificial intelligence, physics, animation, user interfaces, and many other areas within computer science and software engineering. It is most useful for learning new skills, but it can also be used to explore course theory by applying known skills and knowledge in a project using a GDF.

Circulatory Model of Applying a GDF in a Course There are several good reasons for introducing a GDF and game development projects in CS and SE courses as described in previous section, but in order to make it a success it is important that the GDF is well integrated with the


course. Based on our experiences, we have developed a circular model for how to apply a GDF in a CS or SE course through six steps (see Figure 1). The model is intended for courses where a software development project is a major part of the course.

Figure 1: Circulatory model of GDF’s application in courses.

To choose an appropriate development platform according to the course content, it is important to consider the process of the course related to the development project. This process starts with choosing an appropriate GDF (step A) for the course, based on a set of requirements (described in the next subsection). Next, the teacher must define a project (step B) within the limitations and constraints of the chosen GDF. In the initial phase of the student project, it is important that the students get the required technical guidance and appropriate requirements (step C) related to the GDF. It is important that the students get to know the GDF early, for example, by introducing an exercise to implement a simple game in the GDF. It is critical that the students get help with problems related to the GDF and that they get the required feedback. The next step is for the students to start designing and implementing (step D) their own game according to the constraints of the GDF and the course requirements. After delivering the final version of their project implementation and documentation, the students should get the chance to evaluate and analyze (step E) their own projects to learn from their successes and mistakes. This information should then be used to provide feedback in order to improve the course (step F). The feedback from the students might indicate that another GDF should be used or that the course constraints on the projects should be altered. The core of this model is that the teacher should encourage the students to explore the course theory through a game development project using a GDF, and that the teacher is given the opportunity to improve the game development project through feedback from the students.


Criteria for Choosing the Right GDF
How to choose an appropriate GDF that can easily be integrated with course content should be based on the educational goals of the course, the technical level and skills of the students, and the time available for projects and/or exercises. Based on experiences from using GDFs and from student projects in CS and SE courses, we have come up with the following requirements for choosing a GDF for a CS or SE course.

(1) It must be easy to learn and allow rapid development. According to Malone’s recommendation of how to make things fun to learn, it is crucial that we provide the appropriate level of challenge. If the GDF is too much of a challenge and requires too much learning before becoming productive, the whole idea of game development will be wasted, as the students will lose motivation. An important aspect of this is that the GDF offers high-level APIs that make it possible for the students to develop impressive results without writing too many lines of code. This is especially critical in the first phase of the project.

(2) It must provide an open development environment to attract students’ curiosity. Malone claims that fantasy and curiosity are other important factors that make things fun to learn. By providing a relatively open GDF without too many restrictions on what can be produced, the students get a chance to realize the game of their dreams. This means that the GDF itself should not restrict what kind of game the students can make. This requirement would typically rule out GDFs that are tailored for producing only one game genre such as adventure games, platform games, or board games. In addition, an open development environment should ideally offer public and practical interfaces for developers to extend their own functions. In this respect, open source game development platforms are preferred.

(3) It must support programming languages that are familiar to the students. The students should not be burdened with having to learn a new programming language from scratch in addition to the course content. This would take away the focus from the educational goals of the course. We suggest choosing GDFs that support popular programming languages that the students know, like C++, C#, or Java. It is also important that the programming languages supported by the GDF have high-level constructs and libraries that enable the programmers to be more productive, as less code is required to produce fully functional systems. From an educational point of view, programming languages like Java and C# are better suited than C and C++, as they have more constraints that force the programmers to write cleaner code, and there is less concern related to issues like pointers and memory leakage. From a game development perspective, programming languages like C and C++


are more attractive, as they generally produce faster executables and thus faster games.

(4) It must not conflict with the educational goals of the course. When choosing a GDF, it is important that the inherent patterns, procedures, design, and architecture of the GDF are not in conflict with the theory taught in the course. One example of such a conflict could be that the way the GDF enforces event handling in an application is given as an example of bad design in the textbook.

(5) It must have a stable implementation. When a GDF is used in a course, it is essential that the GDF has few bugs, so the students do not have to fight technical issues instead of focusing on the course topics. This requirement indicates that it is important that the GDF is supported by a company or a development community that has enough resources to eliminate serious technical insufficiencies. It is also important that the development of the GDF is not a dead project, as this will lead to compatibility issues for future releases of operating systems, software components, and hardware drivers.

(6) It must have sufficient documentation. This requirement is important for both the course staff and the students. The documentation should both give a good overview of the GDF and document all the features provided. Further, it is important that the GDF provides tutorials and examples to demonstrate how to use the GDF and its features. The framework should provide documentation and tutorials of high quality, enabling self-study.

(7) It should be inexpensive (low cost) to use and acquire. Ideally, the GDF should be free or have a very low associated cost to avoid extra costs for running the course. This requirement also involves investigating additional costs related to the GDF, such as requirements for extra or more powerful hardware and/or requirements for additional software.
The goal of the requirements above is to save the time and effort the students have to spend on coding and understanding the framework, letting them concentrate on the course content and software design. Thus, an appropriate GDF can provide the students with exciting experiences and offer a new way of learning through a new domain (games). The requirements are also meant to help teachers avoid choosing a GDF that would cause too much effort spent on technical issues, or an incompatibility between the GDF and the course contents. From our experience, there is an inherent conflict between requirements one and two. The level of freedom the developer has in a GDF conflicts with the goal of a development environment that allows rapid development and is easy to learn. A more open GDF usually means that the developer must learn more


APIs, and the APIs themselves are usually of a lower level and thus harder to use. However, it is possible to get a bit of both worlds by offering high-level APIs that are relatively easy to use but still allow the developer to access underlying APIs that give the developer freedom in what kind of games can be made. This means that the GDF can allow inexperienced developers to just modify simple APIs or example code to make variants of existing games, and allow more experienced developers to make unique games by using more of the provided underlying APIs. How hard the GDF is to use will then depend on the ambition of the game developer and not on the GDF itself. This can also be a motivating factor to learn more about the GDF’s APIs.
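The layered-API idea described above can be made concrete with a small sketch. The chapter does not show any framework code, so all class and method names below (LowLevelRenderer, SimpleSprite) are invented for illustration: a high-level sprite helper hides the lower-level drawing calls, while the lower layer remains accessible to experienced developers.

```java
// Hypothetical sketch of a GDF exposing both a high-level and a low-level API.
// All names are illustrative; they are not taken from XNA, JGame, or Flash.

// Low-level API: full control, but more calls are required per drawn object.
class LowLevelRenderer {
    private final StringBuilder log = new StringBuilder();
    void bindTexture(String texture) { log.append("bind:").append(texture).append(';'); }
    void drawQuad(float x, float y, float w, float h) {
        log.append("quad:").append(x).append(',').append(y).append(';');
    }
    String commands() { return log.toString(); }
}

// High-level API: one call per sprite, delegating to the low-level layer.
class SimpleSprite {
    private final String texture;
    private final float width, height;
    SimpleSprite(String texture, float width, float height) {
        this.texture = texture; this.width = width; this.height = height;
    }
    void draw(LowLevelRenderer r, float x, float y) {
        r.bindTexture(texture);          // inexperienced users never see these steps
        r.drawQuad(x, y, width, height);
    }
}

public class LayeredApiDemo {
    public static void main(String[] args) {
        LowLevelRenderer renderer = new LowLevelRenderer();
        // Beginners use the one-line high-level call; experts may call
        // bindTexture/drawQuad on the renderer directly for unusual effects.
        new SimpleSprite("helicopter.png", 64, 32).draw(renderer, 10, 20);
        System.out.println(renderer.commands());
    }
}
```

The point of the design is that difficulty scales with ambition: the same renderer object serves both audiences, so moving from the convenience layer to the underlying API requires no framework switch.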

CASE STUDY: APPLYING A GDF IN A SOFTWARE ARCHITECTURE COURSE
This section describes a case study of a software architecture course at the Norwegian University of Science and Technology (NTNU) where a GDF was introduced.

The Software Architecture Course
The software architecture course is a postgraduate course offered to CS and SE students at NTNU. The course is taught every spring, its workload is 25% of one semester, and about 70 postgraduate students attend the course every semester. The students in the course are mostly Norwegian (about 80%); the remaining 20% are foreign students, mostly from EU countries. The textbook used in this course is “Software Architecture in Practice, Second Edition” by Bass, Clements, and Kazman [12]. Additional papers are used to cover topics that are not sufficiently covered by the book, such as design patterns, software architecture documentation standards, view models, and postmortem analysis [13–16]. The educational goal of the course is: “The students should be able to define central concepts in software architecture literature and be able to use and describe design/architectural patterns, methods to design software architectures, methods/techniques to achieve software qualities, methods to document software architecture, and methods to evaluate software architecture.” The course is taught in four main ways: (1) ordinary lectures given in English; (2) invited guest lectures from the software industry; (3) an exercise in design patterns; (4) a software development project with emphasis on


software architecture. 30% of the grade is based on an evaluation of a software architecture project that all students have to do, while 70% is given from the results of a written examination. The goal of the project is for the students to apply the methods and theory in the course to design a software architecture and to implement a system according to the architecture. The project consists of the following phases. (1) COTS (Commercial Off-The-Shelf) exercise: learn the development platform to be used in the project by developing some simple test applications. (2) Design pattern: learn how to utilize design patterns by making changes in an existing system designed with and without design patterns. (3) Requirements and architecture: describe the functional and the quality requirements, and design the software architecture for the application in the project. (4) Architecture evaluation: use the Architecture Trade-off Analysis Method (ATAM) [12, 17] to evaluate the software architecture in regard to the quality requirements. Here one student group will evaluate another student group’s project. (5) Implementation: do a detailed design and implement the application based on the created architecture and on the results from the previous phase. (6) Project evaluation: evaluate the project after it has been completed using a Post-Mortem Analysis (PMA) method. In the first phases of the project, the students work in pairs. For phases 4–6, the students work in self-composed groups of four students. The students spend most time on the implementation phase (6 weeks), and they are also encouraged to start the implementation in earlier phases to test their architectural choices (incremental development). In previous years, the goal of the project has been to develop a robot controller for a robot simulator in Java with emphasis on an assigned quality attribute.

Choosing a GDF for the Software Architecture Course
In Fall 2007, we started to look for appropriate GDFs to be used in the software architecture course in spring 2008. We looked both for GDFs where the programmer has to write the source code and for visual drag-and-drop programming environments. The selection of candidates was based on GDFs we were familiar with and GDFs that had developer support. Further, we wanted to compare both commercial and open source GDFs. From an initial long list of candidate GDFs, we chose to evaluate the following GDFs in more detail.

(i) XNA: XNA is a GDF from Microsoft that enables development of homebrew cross-platform games for Windows and the XBOX 360 using the C# programming language. The initial version of Microsoft XNA Game Studio was released in 2006 [18], and in 2008 Microsoft XNA Game Studio 3.0 was released, which includes support for making games for XBOX Live. XNA features a set of high-level APIs enabling the development of advanced games in 2D or 3D with advanced graphical effects with little effort. The XNA platform is free and allows developers to create games for Windows, Xbox 360, and Zune using the same GDF [19]. XNA consists of an integrated development environment (IDE) along with several tools for managing audio and graphics.

(ii) JGame: JGame is a high-level framework for developing 2D games in Java [20]. JGame is an open source project and enables developers to develop games fast using few lines of code, as JGame takes care of typical game functionality such as sprite handling, collision detection, and tile handling. JGame games can run as stand-alone Java games, as Java applet games running in a web browser, or on mobile devices (Java ME). JGame does not provide a separate IDE but is integrated with Eclipse.

(iii) Flash: Flash is a high-level framework for interactive applications, including games, developed by Adobe [21]. Most programming in Flash is carried out in ActionScript (a textual programming language), but the Flash environment also provides a powerful graphical editor for managing graphical objects and animation. Flash applications can run as stand-alone applications or in a web browser. Flash applications can run on many different operating systems like Windows, Mac OS X, and Linux as well as on mobile devices and game consoles (Nintendo Wii and Sony Playstation 3). Programming in Flash is partly visual by manipulating graphical objects, but most code is written textually. Flash supports development of both 2D and 3D applications.

(iv) Scratch: Scratch is a visual programming environment developed by the MIT Media Lab in collaboration with UCLA that makes it easy to create interactive stories, animations, games, music, and art, and to share the creations on the web [22]. Scratch works similarly to Alice [23], allowing you to program by placing sprites or objects on a screen and manipulating them by drag-and-drop programming. The main difference between Scratch and Alice is that Scratch is in 2D while Alice is in 3D. Scratch provides its own graphical IDE that includes a set of programming primitives and functionality to import various multimedia objects.


An evaluation of the four GDF candidates is shown in Table 1. From the four candidates, we found Scratch to be the least appropriate candidate. The main problem with Scratch was that it would be hard to teach software architecture using this GDF, as the framework did not allow exploring various software architectures. Further, Scratch was also very limited in what kind of games could be produced, limiting the options for the students. The main advantage of using Scratch is that it is very easy to learn and use. JGame also suffered from some of the same limitations as Scratch, as it put some restrictions on what software architectures could be used and on the flexibility of the games to be produced. The main advantage of using JGame was that it was an open source project with access to the source code and that all the programming was done in Java. An attractive alternative would be to use Flash as a GDF. Many developers use Flash to create games for kids as well as games for the Web. Flash puts few restrictions on what kind of games you can develop (both 2D and 3D), but there are some restrictions on what kind of software architecture you can use in your applications. The programming language used in Flash, ActionScript, is not very different from Java, so it should be rather easy for the students to learn. The main disadvantage of using Flash in the software architecture course was the license costs. As the computer and information science department does not have a site license for the Flash development kit, it would be too expensive to use. XNA was found to be an attractive alternative for the students, as it made it possible for them to create their own XBOX 360 games. XNA puts few restrictions on what kinds of software architectures you apply in your software, and it enables the developers to create almost any game. XNA has strong support from its developer (Microsoft) and has a strong community of developers along with a lot of resources (graphics, examples, etc.).

The main disadvantages of using XNA as a GDF in the course were that the students had to learn C# and that the software could only run on Windows machines. Compared to JGame and other Java-based GDFs, XNA has a richer set of high-level APIs and a more mature architecture.


Table 1: Evaluation of four GDF candidates

(1) Easy to learn
- XNA: Relatively easy to learn, but requires learning several core concepts to utilize the offered possibilities.
- JGame: Easy to learn, but requires learning a small set of core concepts.
- Flash: Relatively easy to learn, but requires learning several core concepts to utilize the offered possibilities.
- Scratch: Very easy and intuitive to learn, and supports dynamic changes to the game at run time.

(2) Open development environment
- XNA: XNA puts little restrictions on what kind of games can be developed and supports development of both 2D and 3D games. Not an open source project.
- JGame: JGame supports a limited set of games, mainly classical 2D arcade games. Open source project.
- Flash: Flash puts little restrictions on what kind of games can be developed and supports development of both 2D and 3D. Not an open source project.
- Scratch: Scratch limits the options of what kind of games the user can make through the limited options provided in the graphical programming environment. Not an open source project.

(3) Familiar programming language
- XNA: All programming is done in C#.
- JGame: All programming is done in Java.
- Flash: Some programming can be done using drag-and-drop, but most will be written in ActionScript.
- Scratch: All programming is done in the visual drag-and-drop programming language Scratch.

(4) Not in conflict with educational goals
- XNA: XNA puts little restrictions on what kinds of software architectures can be used.
- JGame: JGame puts some restrictions on what kinds of software architectures can be used.
- Flash: Flash puts some restrictions on what kinds of software architectures can be used.
- Scratch: Scratch puts strict restrictions on what kinds of software architectures can be used.

(5) Stable implementation
- XNA: XNA has a very stable implementation and is updated regularly.
- JGame: JGame has a relatively stable implementation and is updated regularly.
- Flash: Flash has a very stable implementation and is updated regularly.
- Scratch: Scratch has a relatively stable implementation and is updated regularly.

(6) Sufficient documentation
- XNA: XNA is well documented and offers several tutorials and examples. Many books on XNA are available.
- JGame: JGame is not well documented, but some examples exist.
- Flash: Flash is well documented and offers several tutorials and examples. Many books on Flash are available.
- Scratch: Scratch is adequately documented and has some examples and tutorials available.

(7) Low costs
- XNA: XNA is free to use. A $99 yearly membership is required to develop games for the XBOX 360.
- JGame: JGame is free to use.
- Flash: The Flash development kit costs $199 per license (university license).
- Scratch: Scratch is free to use.

Based on the evaluation described above, we chose XNA as the GDF for our course. From previous experience, we knew that learning C# does not require much effort or time for students who already know Java.

XQUEST—An Extension of the Chosen GDF
After we had decided to use XNA as the GDF in the software architecture course, we launched a project to extend XNA to make it even easier to use in the student project. This project implemented XQUEST (XNA QUick & Easy Starter Template) [24], a small and lightweight 2D game library/game template developed at NTNU that contains convenient game components, helper classes, and other classes that can be used in XNA game projects (see Figure 2). The goal of XQUEST was to identify and abstract common game programming tasks and create a set of components that could be used by students of the course to make their lives easier. We chose to focus only on 2D for a few reasons. First, the focus of the student projects is software architecture, not making a game with fancy 3D graphics. Second, students unfamiliar with game programming and 3D programming may find it daunting to have to learn the concepts needed for doing full-blown 3D in XNA, such as shader programming and 3D modeling, in addition to software architectures. Keeping the projects in 2D may reduce the risk of students focusing only on the game development and not on the software architecture issues.
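The chapter does not show XQUEST's actual API, so the following is only a hypothetical sketch of the kind of convenience component such a library abstracts away: a sprite-sheet animation helper that frees student code from frame bookkeeping. The class name and interface are invented for illustration, and the sketch is in Java rather than XQUEST's C#.

```java
// Hypothetical sketch of an XQUEST-style helper component (names invented).
// AnimatedSprite cycles through frame indices at a fixed frame duration;
// student game code only calls update() each tick and asks which frame to draw.
class AnimatedSprite {
    private final int frameCount;
    private final double secondsPerFrame;
    private double elapsed; // time accumulated since the animation started

    AnimatedSprite(int frameCount, double secondsPerFrame) {
        this.frameCount = frameCount;
        this.secondsPerFrame = secondsPerFrame;
    }

    // Called once per game-loop tick with the frame's delta time.
    void update(double deltaSeconds) { elapsed += deltaSeconds; }

    // Which frame of the sprite sheet to draw right now (wraps around).
    int currentFrame() {
        return (int) (elapsed / secondsPerFrame) % frameCount;
    }
}

public class XquestStyleDemo {
    public static void main(String[] args) {
        AnimatedSprite heli = new AnimatedSprite(4, 0.1); // 4 frames at 10 fps
        for (int tick = 0; tick < 20; tick++) heli.update(0.016); // ~60 Hz loop
        System.out.println("current frame: " + heli.currentFrame());
    }
}
```

Abstracting such bookkeeping into reusable components is exactly the stated goal of XQUEST: the students keep their attention on the architecture of their game rather than on repetitive game-loop plumbing.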


Figure 2: The XQUEST library shown in the XNA development environment.

Teaching Software Architecture Using XNA
XNA was introduced in the software architecture course to motivate students to put extra effort into the student project, with the goal of learning the course content such as attribute driven design, design and architectural patterns, ATAM, design of software architecture, viewpoints, and implementation of software architecture. This section goes through the different phases of this project and describes how XNA affected these phases.

Introduction of XNA Exercises
At the start of the semester, the course staff gave an introduction to the course where the software architecture project was presented. Before the students started with their project, they had to do an exercise individually or in pairs, where they got to choose their own partner. The goal of the first exercise was to get familiar with the XNA framework and environment, and the students were asked to complete four tasks. (1) Draw a helicopter sprite on the screen and make it move around on its own. (2) Move the helicopter sprite from the previous task around using the keyboard, change the size of the sprite when a key is pressed, rotate the sprite when another key is pressed, and write the position of the sprite on the screen. (3) Animate the helicopter sprite using several frames and do sprite collision with other sprites. (4) Create the classical Pong game in XNA.
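The exercise itself is written in C# on XNA; as a minimal, hypothetical Java analogue of the logic behind task 2, the sketch below reduces key handling to booleans so the per-frame update rule stands out. The class and field names are invented, not taken from the course material.

```java
// Hypothetical Java analogue of exercise task 2 (keyboard-driven sprite).
// The real exercise uses C#/XNA; this sketch only shows the update logic.
class HelicopterSprite {
    float x, y;
    final float speed; // pixels per second

    HelicopterSprite(float x, float y, float speed) {
        this.x = x; this.y = y; this.speed = speed;
    }

    // One game-loop step: move according to which arrow keys are held.
    void update(float dt, boolean left, boolean right, boolean up, boolean down) {
        if (left)  x -= speed * dt;
        if (right) x += speed * dt;
        if (up)    y -= speed * dt; // screen coordinates: y grows downwards
        if (down)  y += speed * dt;
    }
}

public class SpriteExerciseDemo {
    public static void main(String[] args) {
        HelicopterSprite heli = new HelicopterSprite(0, 0, 100); // 100 px/s
        // Simulate holding "right" for 60 frames at 60 Hz (one second).
        for (int i = 0; i < 60; i++) heli.update(1f / 60f, false, true, false, false);
        System.out.printf("x=%.0f y=%.0f%n", heli.x, heli.y);
    }
}
```

Scaling movement by the frame's delta time, rather than by a fixed per-frame step, is the standard way to keep sprite speed independent of the frame rate, which is the core lesson of such warm-up tasks.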


Before the students started on their XNA introduction exercise, they got a two-hour technical introduction to XNA. During the semester, two technical assistants were assigned to help students with issues related to XNA. These assistants had scheduled two hours per week to help students with problems, in addition to answering emails about XNA issues.

Requirement and Architecture for the Game Project
After the introduction exercise was delivered, the students formed groups of four students. Students who did not know anyone were assigned to groups. The course staff then issued the project task, where the goal was to make a functioning game using XNA based on the students’ own defined game concept. However, the game had to be designed and implemented according to their specified and designed software architecture. Further, the students had to develop a software architecture that focused on one particular quality attribute assigned by the course staff. We used the following definitions for the quality attributes in the game projects: Modifiability, the game architecture and implementation should be easy to change in order to add or modify functionality; Testability, the game architecture and implementation should be easy to test in order to detect possible faults and failures. These two quality attributes were related to the course content and the textbook. A perfect implementation was not the ultimate quest of this XNA game project, but it was critical that the implementation reflected the architectural description. It was also important that the final delivery was well structured, easy to read, and made according to the template provided by the course staff. The first delivery of the game project was a document describing the requirements and the software architecture of the game. The requirements document focused on a complete functional requirement description of the game and several quality requirements for the game, described as scenarios focusing on one particular quality attribute. The architecture documentation described the software architecture of the game project, and the students had to document their architecture according to IEEE 1471-2000 [25]. Table 2 lists what was required in the architectural description in the game projects.

42

Computer Games Technology

Table 2: Architectural description required for the game project

(1) Architectural drivers: the main drivers that affect the system the most, including the quality attribute on which the students focus.

(2) Stakeholders and concerns: the stakeholders of the system and their concerns.

(3) Selection of architectural viewpoints: a list of the viewpoints used, with their purpose, target audience, and form of description. Places to look for possible viewpoints include the textbook [12] and the 4+1 article by Kruchten [26].

(4) Quality tactics: the tactics for all quality attributes, with more detail for the attribute in focus.

(5) Architectural patterns: the major patterns of the architecture, both architectural patterns and major design patterns.

(6) Views: a separate section for each required view (logical, process, and development), plus any other views added by the students.

(7) Consistency among views: a discussion of the consistency between the described views.

(8) Architectural rationale: the reasons why the architectural choices were made.

We also required the students to write the code skeleton for the architecture they had designed. This was done to emphasize the importance of starting the implementation early and to ensure that the students designed an architecture that was possible to implement.
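To make the idea of an architectural code skeleton concrete: the skeleton contains the classes and interfaces named in the architecture description, with method stubs but no game logic, so that the mapping from architecture to code can be checked before implementation starts. The student projects used C#, but since C# is very similar to Java, a hypothetical skeleton for a simple layered game architecture can be sketched in Java; all class and method names below are illustrative and not taken from any student project.

```java
// Hypothetical code skeleton mirroring a layered architecture description.
// Each type corresponds to a component in the architecture document;
// bodies are stubs to be filled in during the implementation phase.

interface InputHandler { void poll(); }            // input layer
interface GameLogic { void update(double dt); }    // domain layer
interface Renderer { void draw(); }                // presentation layer

class KeyboardInput implements InputHandler {
    public void poll() { /* stub: to be implemented */ }
}

class WorldLogic implements GameLogic {
    int updates = 0;  // placeholder state so the skeleton is observable
    public void update(double dt) { updates++; }
}

class SpriteRenderer implements Renderer {
    public void draw() { /* stub: to be implemented */ }
}

/** Skeleton game loop wiring the three layers together. */
class GameSkeleton {
    final InputHandler input = new KeyboardInput();
    final GameLogic logic = new WorldLogic();
    final Renderer renderer = new SpriteRenderer();

    /** One iteration of the game loop. */
    void tick(double dt) {
        input.poll();
        logic.update(dt);
        renderer.draw();
    }
}
```

Even at this stage the skeleton compiles and runs, which is exactly what lets the course staff check that the designed architecture is implementable before any game content exists.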

Evaluation of the Game Project

After the requirements, the architecture, and the code skeleton were delivered, the student groups were assigned to evaluate each other's architectures using ATAM. The idea was for one project group to evaluate the architecture of another group's game and give feedback related to the quality focus of its software architecture [27]. The evaluation included an attribute utility tree, an analysis of the architectural approaches, sensitivity points, trade-off points, risks and non-risks, and risk themes.
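The attribute utility tree is the central ATAM artifact: quality attributes are refined into concrete scenarios, each rated for importance and difficulty. As an illustration only (the student projects were in C#, but the structure is language-neutral; all labels and ratings below are hypothetical, not from any student evaluation), the tree can be sketched as a small recursive data model in Java:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical node in an ATAM attribute utility tree. */
class UtilityNode {
    final String label;
    final List<UtilityNode> children = new ArrayList<>();
    // Ratings apply to leaf scenarios only: "H" = high, "M" = medium, "L" = low.
    String importance;
    String difficulty;

    UtilityNode(String label) { this.label = label; }

    /** Attach a child node and return it, so trees can be built fluently. */
    UtilityNode addChild(UtilityNode child) {
        children.add(child);
        return child;
    }

    /** Number of leaf scenarios in the subtree rooted at this node. */
    int scenarioCount() {
        if (children.isEmpty()) return 1;
        int count = 0;
        for (UtilityNode child : children) count += child.scenarioCount();
        return count;
    }
}
```

A group evaluating a modifiability-focused architecture would, for example, add a leaf scenario such as "a new enemy type can be added in one person-day" rated (H, M) under a Modifiability branch, and then walk the reviewed architecture against each high-importance scenario to find sensitivity and trade-off points.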

Detailed Design and Implementation

The focus of the implementation phase was to design, implement, and test the game application. The documentation delivered in this phase focused on the test results from running the game against the specified requirements, and on a discussion of the relationship between the implemented game and the architectural documentation [14, 15]. Table 3 lists what should be delivered in the implementation phase.

Table 3: Deliverables for the design and implementation phase

(1) Design and implementation: a more detailed view of the various parts of the architectural description of the game design.

(2) User's manual: a guide for the users, including the steps to compile and run the game.

(3) Test report: tests of both the functional requirements and the quality requirements (quality scenarios).

(4) Relationship with the architecture: the inconsistencies between the game architecture and the implementation, and the reasons for these inconsistencies.

(5) Problems, issues, and points learned: problems and issues with the documents or with the implementation process.

For the test report part of Table 3, the functional requirements and quality requirements had attributes like those shown in Table 4 and Table 5. The test reports should also include a discussion of the test observations, unless there was nothing to discuss about the test results.

Table 4: Attributes of functional requirements

F1: The role in the game should be able to jump along happily
Executor: Super Mario III
Date: 23.3.2005
Time used: 5 min
Evaluation: Fail: White role cannot jump!

Table 5: Attributes of quality requirements

A1: The role in the game should not get stuck
Executor: Snurre Sprett
Date: 24.3.2005
Stimuli: The role should be able to move around for 10 min
Expected response: Success in 8 of 10 executions
Observed response: Success in 3 of 10 executions
Evaluation: Fail
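The evaluation column in Table 5 follows directly from comparing the expected and observed responses. As a hypothetical illustration (again in Java rather than the C# used in the projects; the record shape and threshold logic are ours, not part of the course template), a quality-scenario test record could be modeled as:

```java
/** Hypothetical record of one quality-scenario test run (cf. Table 5). */
class QualityScenarioResult {
    final String id;              // e.g. "A1: The role in game should not get stuck"
    final int expectedSuccesses;  // expected response, e.g. 8 of 10
    final int observedSuccesses;  // observed response, e.g. 3 of 10
    final int executions;         // total number of test executions

    QualityScenarioResult(String id, int expected, int observed, int executions) {
        this.id = id;
        this.expectedSuccesses = expected;
        this.observedSuccesses = observed;
        this.executions = executions;
    }

    /** The scenario passes only if the observed successes meet the expectation. */
    String evaluation() {
        return observedSuccesses >= expectedSuccesses ? "Success" : "Fail";
    }
}
```

For the example row above, `new QualityScenarioResult("A1", 8, 3, 10).evaluation()` yields "Fail", since only 3 of the expected 8 executions succeeded.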

At the end of the project, every group had to deliver a final release of their project that included all documents, code, and other material from all project phases. The course staff evaluated all the groups' deliveries and assigned grades by judging document and implementation quality, document and implementation completeness, architecture design, and the readability and structure of the code and report.

The Game Project Workshop

In this workshop, selected groups gave short presentations about their project goal, quality attribute focus, and proposed architectural solution, with some diagrams or explanations, and evaluated how well the solution worked with respect to the functional requirements and the quality focus. Further, the selected groups ran demos of their games, and the floor was opened for questions from the audience. The workshop provided an open environment in which students could give each other feedback, brainstorm about improvements and ideas, and discuss their ideas to gain a better understanding of the course content and game architecture design.

Post-Mortem Analysis

As the final task in the project, every group had to perform a post-mortem analysis (PMA) of their project. The focus of the PMA was to analyze the successes and problems of the project. The PMA was documented in a short report that included a positive (successes) and a negative (problems) KJ diagram (a structured brainstorming map), a positive and a negative causal map (a diagram that shows cause-effect relationships), and experiences from using PMA [13]. The PMA made the students reflect on their performance in the project, gave them useful feedback for improving in future projects, and gave the course staff input for improving the course. The main topics analyzed in the PMA were issues related to group dynamics, time management, technical issues, software architecture issues, project constraints, and personal conflicts.


EXPERIENCES OF USING GDF IN SOFTWARE ARCHITECTURE

The experiences described in this section are based on the final course evaluation, feedback from the students during the project, and the project reports. In the final course evaluation, the students were asked to answer three questions. The results reported below are a summary of the students' responses related to the project and the GDF.

(1) What has been good about the software architecture course?

(a) About the project itself: "Cool project", "Really interesting project", "We had a lot of fun during the project", "It is cool to make a game", "Fun to implement something practical such a game", "Videogame as an exercise is quite interesting", "I really liked the project", and "The game was motivating and fun".

(b) Project and learning: "Good architectural discussion in the project group I was in", "Learned a lot about software architecture during the project", "The project helped to understand better the arguments explained in the lectures, having fun at the meantime", "Fun project where we learned a lot", "I think that the creation of a project from the beginning, with the documentation until the code implementation, was very helpful to better understand in practice the focus of the course", "The game project was tightly connected to the syllabus and lectures and gave valuable experience. The main thing I learned was probably how much simpler everything gets if you have a good architecture as a basis for your system", and "The interplay of game and architectural approaches".

(c) The project being practical work: "I think it was pretty good that you guys made us do a lot of practical work", and "To choose C# as a platform is a good idea as it is used a lot in the software industry; at the same time it is very similar to Java so it is rather easy to learn the language."

(d) Interplay between groups: "It was [...] presentation".

(2) What has not been so good about the software architecture course?

(a) XNA support: "The way the student assistants were organized; during the implementation periods at least they should be available in a computer lab and not just in the classroom", "Maybe the use [...] use it. Maybe some extra lecture focus on the use of the XQUEST framework was better", and "We did not have lectures on XNA; could have got some more basic info".

(b) XNA versus software architecture: "Took a lot of time getting to know C#, I liked it, but I did not have the time to study architecture" and "The use of game as a project may have removed some of the focus away from the architecture. XNA and games in general limit the range of useful architectures."

(3) What would you have changed for next year's course?

(a) Project workload: "Maybe just little more time to develop the game" and "I would change the importance of the project. I think that the workload of the project was very big and it can matter the 50% of the total exam."

(b) XNA support: "Perhaps have some C# intro?" and "It would be helpful to have some lab hours".

(c) Project constraints: "Maybe more restrictions on game type, to ensure that the groups choose games suited for architectural experimentation."

The responses from the students were overall very positive. In previous years, the students in the software architecture course had to design the architecture of, and implement, a robot controller for a robot simulator in Java. The feedback on the XNA project was much more positive than the feedback on the robot controller project. Other positive feedback was that the students felt they learned a lot from the game project, that they liked the practical approach of the project and having to learn C#, and that they appreciated the interaction between the groups (both the ATAM evaluation and the project workshop).

The negative feedback from the course evaluation focused on the lack of XNA and technical support during the project, and some students felt that there was too much focus on C#, XNA, and games and too little on software architecture. The suggestions for improving the course mainly followed the negative feedback: improve the XNA support and adjust the workload of the project. One student also suggested limiting the types of games to be implemented in the project to ensure more focus on software architecture experimentation.

Snapshots from Some Student Projects

Figure 3 shows screenshots from four student game projects. The game in the upper left corner is a racing game, the game in the upper right corner is a platform game, and the two games below are role-playing games (RPGs). Some of the XNA games developed were original and interesting. Most games were entertaining but, due to time constraints, lacked content and had no more than one level.

Figure 3: Games based on the XNA framework (top left: racing; top right: Codename Gordon; bottom: RPGs).

RELATED WORK

This paper describes experiences from utilizing the special features of a GDF in a software architecture course. The main benefit of applying a GDF in a CS or SE course is that the students become more motivated during the software development project. As far as we know, few papers describe the usage of a professional GDF in university courses that do not directly target game development, and especially no papers describe the usage of XNA in higher education. However, there are some related approaches in education, described in this section.

El-Nasr and Smith describe how modifying ("modding") existing games can be used to learn computer science, mathematics, physics, and aesthetic principles [10]. The paper describes how they used modding of the WarCraft III engine to teach high-school students a class on game design and programming. Further, they describe experiences from teaching university students a more advanced class on game design and programming using the Unreal Tournament 2003 engine. Finally, they present observations from student projects that involve modding of game engines. Although the paper claims to teach students other things than pure game design and programming, the GDFs were used in the context of game development courses.

The framework Minueto [28] is implemented in Java and is used by second-year undergraduate students at McGill University in Montreal, Canada. The framework encapsulates graphics, audio, and keyboard/mouse input to simplify Java game development. It allows the development of 2D games, such as card games and strategy games, but it lacks support for visual programming and suffers from limited documentation.

The Labyrinth [29] is implemented in Java and is a flexible and easy-to-use computer game framework. The framework enables instructors to expose students to computer science concepts through a ready-made game that lets the students change different aspects of the game. However, it cannot be used to develop games of different genres, and there is little room for changing the software architecture of the framework.
The JIG (Java Instructional Gaming) Project [30] is a collaborative effort between Scott Wallace (Washington State University Vancouver) and Andrew Nierman (University of Puget Sound), in conjunction with a small group of dedicated students. It has three aims: (1) to build a Java instructional game engine suitable for a wide variety of students at all levels in the curriculum; (2) to create a set of educational resources to support the use of the game engine at small, resource-limited schools; and (3) to develop a community of educators that use and help improve these resources. The JIG Project was proposed in 2006, after a survey of existing game engines revealed a very limited supply of existing 2D Java game engines. JIG is still in development.

GarageGames [31] offers two game engines written in C++. The Torque Game Engine targets 3D games, while the Game Builder provides a 2D API and encourages programmers to develop using a proprietary language (C++ can also be used). Both engines are aimed at a wide audience, including students and professionals. The engines are available under separate licenses ($50 per license per year for each engine) that allow full access to the source code. Documentation and tutorials cover topics appropriate for beginners and advanced users.

The University of Michigan's DXFramework [32] game engine is written in C++. The current version supports only 2D games, although previous versions have included a 3D API as well. This engine is designed for game programming education and is in its third major iteration. The DXFramework is an open source project. Compared to XNA, DXFramework has no competitive advantage, as it has limited support for visual programming and is not easier than XNA to learn.

The University of North Texas's SAGE [33] game engine is written in C++ and targets 3D games, not 2D. Like the DXFramework, SAGE is designed for game programming education. The source code can be downloaded and is currently available without license.

Marist College's GEDI [34] game engine provides a second alternative for 2D game design in C++ and is also designed with game programming education in mind. The source code can be downloaded and is currently available without license, but GEDI is still in the early phases of development. Only one example game is distributed with the code, and little documentation is available.

For business teaching, Arena3D [35] is a game visualization framework with animated 3D representations of work environments; for example, it simulates patients queuing at the front desk and interacting with the staff.
IBM has also produced a business game called INNOV8 [36] which is “an interactive, 3-D business simulator designed to teach the fundamentals of business process management and bridge the gap in understanding between business leaders and IT teams in an organization”.


CONCLUSION AND FUTURE WORK

In this paper we have presented a case study of how a GDF was evaluated, chosen, and integrated with a software architecture course. The main goal of introducing a GDF and a game development project in this course was to motivate students to learn more about software architecture. The positive feedback from the students indicates that this was a good choice, as the students really enjoyed the project and learned software architecture from carrying it out. We will continue to explore the use of games, game concepts, and game development in CS and SE education and to evaluate how this affects the students' motivation and performance.

The choice of XNA as a GDF proved to be a good one for our software architecture course. The main disadvantage of using XNA is the lack of support for non-Windows operating systems such as Linux and Mac OS X. Mono.XNA is a cross-platform implementation of the XNA framework that allows XNA games to run on Windows, Mac OS X, and Linux using OpenGL [37], but the project is still in an early phase. An alternative solution is to let the students choose between different GDFs, for example, XNA and a Java-based GDF. The main challenge with this approach is that the course staff needs to know all the GDFs offered to the students in order to give proper technical assistance. Based on the feedback from the students, technical support is very important and must be considered before offering a choice of more GDFs.

ACKNOWLEDGMENTS

The authors would like to thank Jan-Erik Strøm and Trond Blomholm Kvamme for implementing XQUEST and for their inputs to this paper. They would also like to thank Richard Taylor and the Institute for Software Research (ISR) at the University of California, Irvine (UCI), for providing a stimulating research environment and for hosting a visiting researcher.


REFERENCES

1. R. Rosas, M. Nussbaum, P. Cumsille et al., "Beyond Nintendo: design and assessment of educational video games for first and second grade students," Computers & Education, vol. 40, no. 1, pp. 71–94, 2003.
2. M. Sharples, "The design of personal mobile technologies for lifelong learning," Computers & Education, vol. 34, no. 3-4, pp. 177–193, 2000.
3. A. Baker, E. O. Navarro, and A. van der Hoek, "Problems and programmers: an educational software engineering card game," in Proceedings of the 25th International Conference on Software Engineering (ICSE '03), pp. 614–619, Portland, Ore, USA, May 2003.
4. L. Natvig, S. Line, and A. Djupdal, ""Age of computers": an innovative combination of history and computer game elements for teaching computer fundamentals," in Proceedings of the 34th Annual Frontiers in Education (FIE '04), vol. 3, pp. 1–6, Savannah, Ga, USA, October 2004.
5. E. O. Navarro and A. van der Hoek, "SimSE: an educational simulation game for teaching the software engineering process," in Proceedings of the 9th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (ITiCSE '04), p. 233, Leeds, UK, June 2004.
6. G. Sindre, L. Natvig, and M. Jahre, "Experimental validation of the learning effect for a pedagogical game on computer fundamentals," IEEE Transactions on Education, vol. 52, no. 1, pp. 10–18, 2009.
7. B. A. Foss and T. I. Eikaas, "Game play in engineering education: concept and experimental results," International Journal of Engineering Education, vol. 22, no. 5, pp. 1043–1052, 2006.
8. A. I. Wang, O. K. Mørch-Storstein, and T. Øfsdahl, "Lecture quiz—a mobile game concept for lectures," in Proceedings of the 11th IASTED International Conference on Software Engineering and Application (SEA '07), pp. 305–310, Cambridge, Mass, USA, November 2007.
9. A. I. Wang, T. Øfsdahl, and O. K. Mørch-Storstein, "An evaluation of a mobile game concept for lectures," in Proceedings of the 21st Conference on Software Engineering Education and Training (CSEET '08), pp. 197–204, Charleston, SC, USA, April 2008.
10. M. S. El-Nasr and B. K. Smith, "Learning through game modding," Computers in Entertainment, vol. 4, no. 1, pp. 45–64, 2006.
11. T. W. Malone, "What makes things fun to learn? Heuristics for designing instructional computer games," in Proceedings of the 3rd ACM SIGSMALL Symposium and the First SIGPC Symposium on Small Systems (SIGSMALL '80), pp. 162–169, ACM Press, Palo Alto, Calif, USA, September 1980.
12. P. Clements, L. Bass, and R. Kazman, Software Architecture in Practice, Addison-Wesley, Reading, Mass, USA, 2nd edition, 2003.
13. A. I. Wang and T. Stålhane, "Using post mortem analysis to evaluate software architecture student projects," in Proceedings of the 18th Conference on Software Engineering Education and Training (CSEET '05), pp. 43–50, Ottawa, Canada, April 2005.
14. J. O. Coplien, "Software design patterns: common questions and answers," in The Patterns Handbook: Techniques, Strategies, and Applications, pp. 311–320, Cambridge University Press, New York, NY, USA, 1998.
15. A. Rollings and D. Morris, Game Architecture and Design: A New Edition, New Riders Games, Indianapolis, Ind, USA, 2003.
16. D. P. Perry and A. L. Wolf, "Foundations for the study of software architecture," ACM SIGSOFT Software Engineering Notes, vol. 17, no. 4, pp. 40–52, 1992.
17. R. Kazman, M. Klein, M. Barbacci, T. Longstaff, H. Lipson, and J. Carriere, "The architecture tradeoff analysis method," in Proceedings of the 4th IEEE International Conference on Engineering Complex Computer Systems (ICECCS '98), pp. 68–78, Monterey, Calif, USA, August 1998.
18. Microsoft Corporation, "XNA Developer Center," June 2008, http://msdn.microsoft.com/en-us/xna/aa937794.aspx.
19. B. Nitschke, Professional XNA Game Programming: For Xbox 360 and Windows, John Wiley & Sons, New York, NY, USA, 2007.
20. JGame project, "JGame: a Java game engine for 2D games," November 2008, http://www.13thmonkey.org/~boris/jgame/index.html.
21. Adobe, "Animation software, multimedia software—Adobe Flash CS4 Professional," 2008.
22. Lifelong Kindergarten Group, MIT Media Lab, "Scratch: Imagine, Program, Share," June 2008, http://scratch.mit.edu.
23. Carnegie Mellon University, "Alice.org," June 2008, http://www.alice.org.
24. T. Blomholm Kvamme and J.-E. Strøm, Evaluation and Extension of an XNA Game Library Used in Software Architecture Projects, M.S. thesis, Department of Computer and Information Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway, June 2008.
25. IEEE Std 1471-2000, "IEEE Recommended Practice for Architectural Description of Software-Intensive Systems," Software Engineering Standards Committee of the IEEE Computer Society, 2000.
26. P. Kruchten, "The 4+1 view model of architecture," IEEE Software, vol. 12, no. 6, pp. 42–50, 1995.
27. A. BinSubaih and S. C. Maddock, "Using ATAM to evaluate a game-based architecture," in Proceedings of the 2nd International ECOOP Workshop on Architecture-Centric Evolution (ECOOP '06), Nantes, France, July 2006.
28. A. Denault, Minueto, an Undergraduate Teaching Development Framework, M.S. thesis, School of Computer Science, McGill University, Montreal, Canada, 2005.
29. J. Distasio and T. Way, "Inclusive computer science education using a ready-made computer game framework," in Proceedings of the 12th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (ITiCSE '07), pp. 116–120, Dundee, Scotland, June 2007.
30. Washington State University Vancouver and University of Puget Sound, "The Java Instructional Gaming Project," June 2008, http://ai.vancouver.wsu.edu/jig.
31. GarageGames, "GarageGames," June 2008, http://www.garagegames.com.
32. C. Johnson and J. Voigt, "DXFramework," June 2008, http://www.dxframework.org.
33. I. Parberry, "SAGE: a simple academic game engine," June 2008, http://larc.csci.unt.edu/sage.
34. R. Coleman, S. Roebke, and L. Grayson, "GEDI: a game engine for teaching videogame design and programming," Journal of Computing Sciences in Colleges, vol. 21, no. 2, pp. 72–82, 2005.
35. Rockwell Automation Inc., "Arena Simulation Software," June 2008, http://www.arenasimulation.com.
36. IBM, "INNOV8—a BPM Simulator," June 2008, http://www.ibm.com/software/solutions/soa/innov8.html.
37. Monoxna, "Monoxna—Google Code," November 2008, http://code.google.com/p/monoxna.

CHAPTER 3

Experiential Learning in Vehicle Dynamics Education via Motion Simulation and Interactive Gaming

Kevin Hulme 1, Edward Kasprzak 2, Ken English 1, Deborah Moore-Russo 3, and Kemper Lewis 4

1 NY State Center for Engineering, Design and Industrial Innovation, University at Buffalo-SUNY, Buffalo, NY 14260, USA
2 Milliken Research Associates, Buffalo, NY 14260, USA
3 Graduate School of Education, University at Buffalo-SUNY, Buffalo, NY 14260, USA
4 Department of Mechanical and Aerospace Engineering, University at Buffalo-SUNY, Buffalo, NY 14260, USA

ABSTRACT

Creating active, student-centered learning situations in postsecondary education is an ongoing challenge for engineering educators. Contemporary students familiar with visually engaging and fast-paced games can find traditional classroom methods of lecture and guided laboratory experiments limiting. This paper presents a methodology that incorporates driving simulation, motion simulation, and educational practices into an engaging, gaming-inspired simulation framework for a vehicle dynamics curriculum. The approach is designed to promote active student participation in authentic engineering experiences that enhance learning about road vehicle dynamics. The paper presents the student use of physical simulation and large-scale visualization to discover the impact that design decisions have on vehicle design using a gaming interface. The approach is evaluated using two experiments incorporated into a sequence of two upper-level mechanical engineering courses.

Citation: Kevin Hulme, Edward Kasprzak, Ken English, Deborah Moore-Russo, Kemper Lewis, "Experiential Learning in Vehicle Dynamics Education via Motion Simulation and Interactive Gaming", International Journal of Computer Games Technology, vol. 2009, Article ID 952524, 15 pages, 2009. https://doi.org/10.1155/2009/952524

Copyright: © 2009 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

INTRODUCTION

The field of vehicle dynamics has the advantage that every student has either driven or been a passenger in an automobile. For most students, traveling in an automobile has been a regular occurrence for their entire lives; thus, vehicle motions are inherently familiar to them. Also, with 30 million vehicles being manufactured each year worldwide, advances in computing technology and electromechanical systems have expanded the influence engineers have over the stability and control of vehicle dynamics. In the field of automotive engineering, while a well-designed passive vehicle is still necessary, augmenting systems such as antilock brakes, electronic skid prevention, yaw control, active differentials, and similar systems improve safety and performance over passive designs. This increased control over the automobile will soon be inseparable from the increased complexity of the subsystems, all of which combine to determine the handling characteristics of the automobile. As a result, engineers are increasingly turning to simulation and virtual prototyping, rather than physical prototyping, to explore new design concepts. As the usage of simulation increases, the demand for students with hands-on experience in configuring and executing simulation-based research will also increase.

The most authentic experience in vehicle dynamics education would have each student drive an actual automobile, perform various maneuvers, use on-board instrumentation to collect vehicle data, and modify the vehicle to see the resulting changes in its characteristics. While not impossible, concerns about cost, time, space, safety, and weather constraints make this impractical at most schools. An alternative solution is to make use of a simulated environment. Many students have already experienced controlling a sophisticated driving simulation environment in the form of a driving game like Gran Turismo 4 [1] or Race Pro [2], but do not associate the gaming environment with the models and equations that engineers use in designing a vehicle.

This paper presents the development and evaluation of a learning environment that provides a game-based context for vehicle dynamics education. While driving game-based approaches have been developed for mental health, driver workload, and rehabilitation applications, gaming-based contexts have not been used in vehicle dynamics education. The presented work describes an innovative methodology for coupling gaming, motion simulation, and educational practices together in a cohesive pedagogical approach. The project's theoretical underpinnings are based on situated learning, where new educational material is presented in an authentic context, and social interaction and collaboration are required for learning to occur. Through a learner-centered approach, students use physical simulation and large-scale visualization in a gaming-inspired format to discover the impact that design decisions have on a dynamic system.

While incorporating a gaming environment may intuitively make sense as a way of creating a more engaging experience for students, a sound pedagogical basis must also be developed before including a gaming-based learning environment in an engineering course. In Section 2 of the paper, the educational motivation for the work is presented, along with the pedagogy that supports the development of the framework. The development of serious games is presented in Section 3, providing a background for the development of the gaming context for vehicle dynamics education. The learning environment, core methodology, and accompanying gaming infrastructure are detailed in Section 4.
Section 5 presents the incorporation of the simulation game environment into two upper-level mechanical engineering courses, including assessments about the impact of the experience on the students. The paper is concluded with a set of collective insights and conclusions.

EXPERIENTIAL LEARNING

Relating theoretical and analytical results to real-world phenomena is one of the most difficult tasks in education. While text, equations, diagrams, and graphs are an efficient means of presenting large amounts of information, such representations are necessarily abstractions of reality. A significant part of a student's learning process is learning how to transform these abstractions into knowledge that will allow them to apply their understanding to real-world products and systems. Educators use many approaches to cross this gap, including in-class demonstrations, laboratory experiments, videos, and computer graphics simulations [3–6]. In a study of the application of information technology to education, the President's Information Technology Advisory Committee (PITAC) recommended the development of technologies for education and training that use simulation, visualization, and gaming to actively engage students in the learning experience [7]. In the same report, PITAC also recommended the development of educational experiences that provide learners with access to world-class facilities and experiences using either actual or simulated devices.
The serious gaming approach presented in this paper concentrates on developing an innovative means of incorporating items from the accreditation criteria to assist in the development of educational experiences that will translate well to industrial application [8]:

(i) an ability to apply knowledge of mathematics, science, and engineering;
(ii) an ability to design and conduct experiments as well as to analyze and interpret data;
(iii) an ability to identify, formulate, and solve engineering problems;
(iv) an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.

In addition to the guidance of the accrediting body, the National Survey of Student Engagement (NSSE) provides an opportunity for senior college students to reflect on their educational experience in five benchmark areas [9]:

(i) Level of Academic Challenge: challenging intellectual and creative work is central to student learning.
(ii) Student-Faculty Interaction: contact with professors offers students an opportunity to see how experts think about and solve practical problems.
(iii) Active and Collaborative Learning: students learn more when intensely involved in the educational process and are encouraged to apply their knowledge in many situations.
(iv) Supportive Campus Environment: students perform better when their college is committed to their success and cultivates positive social relationships among different groups of people.
(v) Enriching Educational Experiences: learning opportunities inside and outside the classroom (diversity, technology, collaboration, internships, community service, capstones) enhance learning.

These points of guidance emphasize the need for students to not only study,
but also to practice the application of their knowledge in an active manner that is similar to what they will experience after college. One approach to providing these experiences is simulation, which enables practicing engineers to determine the performance of a design before the system is actually constructed. Consequently, an engineer may explore the merits of alternative designs without building physical prototypes, significantly reducing the cost of designing a product. Many opportunities exist in engineering education to mimic how engineers in industry have increasingly turned to simulation in product and systems design processes. However, the use of simulation does not ensure student engagement. In order to provide an engaging experience for students to learn in an educational context, the simulation must be developed with student engagement in mind. Different types of simulators can be effective in teaching, training, or demonstrating design concepts to students. Computer graphics-based simulators that display the simulated system provide the user with a meaningful understanding of the system’s behavior. Physical simulation tools, such as motion simulators, augment the display of graphical information with physical motion, providing learners with immediate feedback when designing dynamic systems and the opportunity to both see and feel the system’s behavior. Many students are already familiar with computer games that are based on physically accurate simulations, but such games typically allow the user to modify only a few parameters of the underlying simulation. In this situation, students may adjust parameters, but not experience any lasting learning. A successful pedagogy will couple authentic experiences for a student along with support from an instructor to promote lasting learning. While serious gaming could be applied to a number of engineering courses, vehicle dynamics education was identified as an area with strong potential for the development of a simulation game-based learning environment.
Designers of automotive vehicles often use motion simulation to evaluate the dynamic behavior of a design before a physical vehicle exists. Creating a prototype vehicle is costly, and controlling environmental conditions is difficult, so designers can instead experience the vehicle dynamics through the use of a motion platform. Developing a serious game for vehicle dynamics education would provide learners with an authentic engineering experience in an engaging and relatable manner. The next section provides some background on the evolution of games and their use as educational tools.

SERIOUS GAMING FOR EDUCATION

Video games and gaming systems have increasingly been found in applications more diverse than just entertainment, including use in training, education, research, and simulation. This emerging field of Serious Games (e.g., [10, 11]) is intended to provide an accurate and engaging context within which to motivate participants beyond the capability of conventional approaches. There is no single accepted definition of the term Serious Game, although several classifications of the genre exist. Of particular interest for the research at hand are two such definitions. Prensky [12] describes a serious game as one that balances the subject matter with game-play and emphasizes the ability of the player to retain and apply the subject matter to the real world. A game that “mixes skill, strategy, and chance to simulate some aspect of reality” is classified as a simulation game [13]. The simulation game classification contains several relevant subclasses of its own, including “vehicle simulation” (i.e., providing a participant with a realistic interpretation of operating a vehicle, such as an aircraft or automobile) and “racing games” (i.e., first- or third-person driving games). The approach presented in this work relies on a simulation game architecture to achieve the educational goals of the course. Numerous computer games, all of which were developed primarily for entertainment, contain features relevant to the developments in the present work. For dissemination and realism, it is useful to have a simulation that allows for multiple participants simultaneously [14]. It is also likely that these participants may not be colocated (i.e., in the same room, the same city, the same state, or even in the same country). Thus, an over-the-network capability is certainly desirable. An effective example in the gaming world that addresses this need is the game called BZFlag [15], first released in 1992. BZFlag, shown in Figure 1, is a simple networked 3D multiplayer game in which the objective is to destroy opponents’ tanks. The roots of BZFlag can be traced to the 1980 Atari game called Battlezone [16]. Battlezone is a game many consider to be the first virtual reality arcade game, and a modified version of it was later used for military ground vehicle training [17].

Figure 1: BZFlag.

Various gaming systems exist that allow the player to generate a complex, functional system from a set of simple components. An early example of this type of game is SimCity [18], first released in 1989. The objective of the game is to design and build a virtual city. The developer is presented with an empty plot of land, within which buildings, roadways, landscaping, and so forth can be placed to form a representation of a functional city. The popularity of the game was due, in large part, to its open-endedness, reusability, and ease-of-use. These types of “Sim” games have become immensely popular in recent years and have led to various similar games that approach the construction of physical systems from a game-based perspective. Easily the most popular of this newer breed of games is Roller Coaster Tycoon [19], released in 1999 and shown in Figure 2. Similar to SimCity, the player is presented with an empty plot of land; the objective here is to build a functioning amusement park, including pathways, landscaping, and, of course, rides. During game play, the player builds a virtual roller coaster piece-by-piece and may then ride the completed design. Because the underlying simulation is physics-based, the completed coaster ride behaves true to real-world form. While these examples provide a small sample of the game-based learning applications that have been developed, they motivate the simulation game approach taken in the present work. The next section provides a description of the simulation game and the physical environment where the students play the game.

Figure 2: Roller coaster tycoon.

DRIVING SIMULATOR GAME FOR EXPERIENTIAL LEARNING

This section describes the methodology behind the implementation of a motion-based simulation game in the form of a driving simulator geared toward undergraduate and graduate engineering students. This simulation game was generated primarily for education, research, and training and, for the sake of authenticity, it exploits the availability of an on-site motion simulator. A brief background on motion simulators is presented first.

Motion Simulators

A motion platform, parallel manipulator [20], or Stewart Platform [21, 22], is a powered, mechanical, self-contained system for the execution of motion-based simulation and is described by the number of degrees-of-freedom (DOF) that can be simulated by the hardware. Figure 3 shows the motion platform currently in use in NYSCEDII’s simulator facility. The motion platform has an on-board computer that converts incoming messages into actuator commands that result in the movement of the top of the platform. Such systems are commonly used in the automotive and other industries (e.g., flight training and entertainment). NYSCEDII’s passenger cabin located atop the platform is a 1999 Ford Contour, cut in half to accommodate a front-seat driver and passenger.

Figure 3: Moog 2000E motion platform and passenger cabin.

One of the primary advantages of motion simulators is that they are able to provide users with immediate feedback when designing dynamic systems. When a student drives or rides in the simulator, they experience many of the same sensory inputs as in a real vehicle.

Methodology

A common complaint of graduating undergraduate engineers is that while they are well versed in engineering theory, they are under-rehearsed in engineering application. Many engineering curricula do not spend enough time assuring that students enter the work force with sufficient practical, experiential, and hands-on knowledge of their craft. To this end, PITAC [7] has indeed recommended the development of “technologies for education and training that use simulation, visualization, and gaming” and “educational experiences that provide learners with access to world-class facilities and experiences using either actual or simulated devices.” One option for exposing engineering students to hands-on driving experiences would be to use an actual vehicle upon an actual roadway or test track. While this is undoubtedly the most authentic means of exposing a student to the practical principles of real-world driving, clearly there are insurmountable safety concerns that deem this option impractical. As well, adapting an existing commercial racing game might seem to be a more straightforward option than the full development of a complete games-based driving simulation. However, such off-the-shelf games, geared primarily for an entertainment audience, do not allow for the creation of custom learning scenarios that are designed to focus a student’s attention on the system dynamics rather than the outcome of a particular race. By developing an engaging simulation game that focuses on the differences between vehicle designs (e.g., parameter settings, tire types, tire dynamics, etc.), students can develop a better understanding of the information available as engineers make decisions at different stages of a vehicle design project. Data-logging can be performed directly within the simulation, changes to the vehicle can be made very quickly (even while driving), and weather and climate conditions are operator controlled. Effective simulation games support learning as they integrate gaming elements, simulation elements, and pedagogical elements by including support from system infrastructure, instructors, and peers [23]. They balance subject matter with game-play, focusing on the skills and strategies of the driver to retain and apply the subject matter to the real world. Much of the novelty in the simulator is its integration of computer programming, computer graphics, linear algebra, numerical methods, and systems analysis with the vehicle dynamics theory fundamental to the Road Vehicle Dynamics (RVD) courses for which the simulation was developed. The methodology employed, adapted from the general structured process for education gaming and simulation [24], is as follows.

(1) Identify the required learning outcome.
(2) Select or design gaming tasks to support this required learning outcome with an appropriate form of assessment.
(3) Consider a strategic ordering of the game tasks within the gaming session and assessment process.
(4) Implement the session with appropriate supporting personnel (e.g., operators, instructors, and technical staff).
(5) Assess the impact and effectiveness of the session.
(6) Redesign the experiment for future gaming sessions.

The implementation of this methodology is described in Sections 5 and 6. The underlying hardware and software framework of the game-based vehicle simulation environment is described in the next section.

Framework

The simulation game has been implemented using the C++ programming language [25] and constructed on the Windows platform within Microsoft Visual Studio. The entire gaming framework is outlined in the following steps; refer to Figure 4 for a flowchart overview.

Figure 4: Driving simulation game flowchart.

Establish Network Connection with Motion Platform (MP) (Process a)

The computer integrated into the MP can send and receive motion commands, in the form of datagrams [26], to and from another computer on the network. Figure 5 presents a screen capture of Moog’s Base Maintenance Software [27], used for monitoring the state of the six actuators on the motion platform (e.g., error messages, status messages, etc.) in real time as the simulation executes. This software runs on the On-Board Computer (OBC) and is displayed on an off-board monitor.

Figure 5: OBC display screen.

User Controls Simulation with Human Interface Device (HID) (Process b)

The driver controls the simulation while on board the MP by way of a USB steering wheel and pedals. Commands are captured in real time using DirectX’s DirectInput Application Programmer’s Interface (API) [28, 29]. The selected gaming device, shown in Figure 6, contains numerous components that must be continually tracked, including the accelerator, brake, and clutch pedals, the single-axis motion of the steering wheel, the paddle shifters, and six buttons located on the steering wheel.

Figure 6: On-board HID.

Simulation Computer (SC) Receives Inputs, Performs Analysis for Current Time Step (Process c)

The states of the vehicle dynamics model (described in Section 5) are now computed at 60 Hz and must be converted into DOF’s (i.e., roll, pitch, yaw, heave, surge, and sway)—see Figure 7. This conversion typically involves scaling, limiting, and tilt coordination [30], collectively known as washout filtering [31]. The work presented here uses a proprietary filter based upon a classical washout algorithm [32].
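The proprietary filter itself is not described here, but the classical washout elements it draws on (scaling, limiting, and tilt coordination) can be sketched in isolation. All structure, names, and parameter values below are illustrative assumptions, not the simulator's actual implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Classical washout elements in isolation. The filter actually used with
// the platform is proprietary, so the structure, names, and limits below
// are illustrative assumptions only.
struct WashoutParams {
    double scale;     // attenuation from vehicle motion to platform motion
    double limit;     // symmetric excursion limit for the platform DOF
    double g = 9.81;  // gravitational acceleration (m/s^2)
};

// Scaling and limiting: attenuate a simulated state, then clamp it to the
// platform's excursion envelope.
double scaleAndLimit(double x, const WashoutParams& p) {
    return std::clamp(p.scale * x, -p.limit, p.limit);
}

// Tilt coordination: represent a sustained longitudinal acceleration ax as
// a steady pitch angle so that gravity supplies the specific-force cue.
double tiltCoordinationPitch(double ax, const WashoutParams& p) {
    double a = std::clamp(ax / p.g, -1.0, 1.0);
    return std::clamp(std::asin(a), -p.limit, p.limit);
}
```

In a full washout filter these elements are combined with high-pass filtering of the translational cues so the platform washes back toward center between maneuvers.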

Figure 7: Motion platform DOF’s.

Updated State Delivered to Projection System (PS) and Visualization Screens (VSs) (Process d)

The scene graphics are now updated and sent to the PS and to the three 48 ft² VSs. The simulation graphics have been developed using OpenGL [33], selected primarily to demonstrate that a useful game-based learning environment can be created without the optimizations of a scene graph toolkit (e.g., OpenSceneGraph [34] and Delta3D [35]). The virtual world is populated with a sky, ground (grass, pavement), the driven vehicle and adjacent vehicles, track/roadway obstacles (e.g., cones), curbs, walls, trees, roadway signs, the vehicle control panel, and various other details to promote an authentic representation of a test track. Figure 8 presents screen captures of various simulations.

Figure 8: Screen captures of various simulations (clockwise from (a)): (a) Experiment 1 (“Skid Pad”); (b) Experiment 2 (“Tri-Radii”); (c) future work (“Networked Karting”); (d) Experiment 3 (“Millersport Hwy”).

DOF’s Sent to Motion Platform Interface Computer (MPIC) (Process e)

The updated DOF’s are delivered by the SC to the MPIC, which is dedicated to the motion process and requires an uninterrupted 60 Hz flow of data. This is accomplished using Parallel Virtual Machine (PVM) [36], a networking API based upon TCP/IP socket programming [37] and coded in ANSI C. PVM allows us to easily and reliably send data packets between computers and handles cross-platform heterogeneity issues transparently (e.g., endian/byte-order concerns, char-to-float data conversion, etc.). Once the MPIC receives the updated DOF’s, these commands are finally delivered directly to the MP for motion processing.
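The PVM calls themselves are not shown in the text. The sketch below illustrates the same idea, a fixed-size packet of six DOF values delivered over the network, using plain POSIX UDP sockets on a single host; the packet layout and helper names are assumptions for illustration. PVM additionally handles the byte-order and data-conversion issues mentioned above, which matter when the endpoints differ.

```cpp
#include <arpa/inet.h>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <sys/socket.h>
#include <unistd.h>

// Six platform DOF values per packet: roll, pitch, yaw, heave, surge, sway.
// The packet layout and helper names are illustrative assumptions; the
// actual system sends its data through PVM rather than raw sockets.
struct DofPacket { float dof[6]; };

// Build a loopback destination address for a given port.
sockaddr_in loopback(uint16_t port) {
    sockaddr_in a{};
    a.sin_family = AF_INET;
    a.sin_port = htons(port);
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    return a;
}

// Send one DOF packet as a single UDP datagram; returns true on success.
// On a single host no byte-order conversion is needed; PVM performs such
// conversions automatically when the endpoints differ.
bool sendDofs(int sock, const sockaddr_in& dest, const DofPacket& p) {
    ssize_t n = sendto(sock, &p, sizeof p, 0,
                       reinterpret_cast<const sockaddr*>(&dest), sizeof dest);
    return n == static_cast<ssize_t>(sizeof p);
}
```

In the real system the sender would be paced to exactly 60 Hz so the MPIC's data flow is never interrupted.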

MPIC Delivers Computed DOF’s to MP (Process f)

An “instance” of the MP is declared and assigned an IP address. The instance is initialized, and then motion commands are updated and delivered to the MP at a rate of precisely 60 Hz. When the simulation is complete, the MP instance is shut down, thereby completing the motion process.

Sound Events Delivered to the Sound System (SS) (Process g)

The simulation game uses OpenAL [38] for adding sound events synchronized with the graphical simulation. Sound is a vital simulation cue, and it enhances the impact of the visualization and motion cues. Examples include vehicle ignition, engine idle, squealing tires, cone strikes, spin-out/crash, police siren, and vehicle shutdown. Each cue can be made to vary in pitch/gain in accordance with the present state of the simulation. The audio states are sent to the mixing board, are processed, and then are delivered to the amplifier, which delivers the audio signal to a 2.1 stereo sound system comprising front-left (SS-L), front-right (SS-R), and subwoofer (SS-SW) channels.
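As a hypothetical illustration of state-dependent pitch/gain, the mapping below varies an engine cue with engine speed. The curve shapes and parameter names are assumptions, not the simulator's actual cue design; in OpenAL the computed values would typically be applied with alSourcef(source, AL_PITCH, …) and alSourcef(source, AL_GAIN, …).

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical mapping from engine speed to audio pitch/gain multipliers;
// the curve shapes and limits are assumptions, not the simulator's actual
// cue design. In OpenAL the results would typically be applied with
// alSourcef(source, AL_PITCH, pitch) and alSourcef(source, AL_GAIN, gain).
struct EngineAudio { float pitch; float gain; };

EngineAudio engineCue(float rpm, float idleRpm, float maxRpm) {
    float t = std::clamp((rpm - idleRpm) / (maxRpm - idleRpm), 0.0f, 1.0f);
    // Pitch rises from 1.0 at idle to 2.0 at redline; gain from 0.5 to 1.0.
    return {1.0f + t, 0.5f + 0.5f * t};
}
```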

Emergency Stop Switch Is Available, at All Times, to Terminate the Simulation (Process h)

Throughout the simulation, both the simulation driver (on board) and the simulation operator (off board) can activate an Emergency Stop switch (to view the on-board switch, refer to the bottom-right corner of Figure 6). When the red button is struck, the motion simulation comes to an immediate, premature, and smooth conclusion. Such a safety device is required in the case of a hardware malfunction, driver panic/illness, or any other circumstance that might require premature termination of the simulation. The next section discusses the incorporation of the simulation game described into a Road Vehicle Dynamics (RVD) engineering course curriculum, using an experiential learning context.

GAMING IMPLEMENTATION

The simulation game is incorporated into a two-course sequence of technical electives on automobile vehicle dynamics, Road Vehicle Dynamics 1 and 2 (RVD1 and RVD2). These courses are open to senior undergraduate and graduate students. Between 60 and 75 students typically enroll in RVD1, with over two-thirds continuing in RVD2. RVD1 is an introductory course on the basics of automobile motion, stability, and control. This includes a review of tire performance and modeling, exploration of the elementary Bicycle Model of vehicle dynamics, and the development of a more detailed four-wheel model [39]. A secondary goal of the course is to apply engineering skills and techniques learned in earlier courses, reviewing them in the context of vehicle dynamics. The broad nature of the material naturally lends itself to such a review, reinforcing the value of the engineering curriculum, giving students the satisfaction of being able to apply their engineering skills to a particular topic, and mimicking the process of integrating diverse skills on a single project, as will be required when entering the workforce. The course traditionally has consisted of daily lectures, weekly homework assignments, and three exams. RVD2 builds on RVD1, exploring advanced techniques such as quasistatic vehicle analysis. Oscillations of the sprung and unsprung masses are investigated with a focus on ride comfort, and the design and analysis of suspensions are covered. The course has more open-ended material and higher expectations of independent student learning, with several paper reviews, in-class presentations, and projects throughout the semester.

Experiment 1: Low-Fidelity Simulation, Introductory Gaming Scenario in RVD1

The first experiment is presented in the context of the overall methodology, as presented in Section 4.2.

Required Learning Outcome

The learning objective of the first experiment is for students to discover through experiential gaming the fundamental effects that tire properties and vehicle center of gravity location have on vehicle stability, control, and overall performance. Prior to the exercise, students complete the first five weeks of the course, focusing mostly on tire behavior and modeling, including a basic introduction to a standard dynamic model of the automobile, although this is limited to the structure of the model with no discussion of its behavior.

Simulation Activities

The simulation model used in the first experiment is the classic Bicycle Model of the automobile [39]. This model makes many simplifying assumptions that allow it to be easily analyzed and understood, while still providing all the fundamental characteristics of a real vehicle. The key elements of the Bicycle Model are as follows. It has three degrees-of-freedom (forward velocity, lateral velocity, and yaw rate), plus those used to track the motion of the vehicle in the world (forward position, lateral position, and heading angle). Both front tires are treated as a single tire, as are both rear tires, hence the Bicycle Model name. Figure 9 shows a top view of the Bicycle Model representation. The tires are represented as bilinear, as illustrated in Figure 10, described by (i) a cornering stiffness slope at low slip angles and (ii) a constant lateral force output above a certain “breakaway” slip angle. Driver inputs from the on-board steering wheel and pedals are used as inputs to the Bicycle Model, which generates updated model states as outputs through numerical integration during every time step. One environment in the simulation game replicates a proving ground skidpad, which is a circular course (of outer radius 500 feet), with street lines indicating several internal radii. Many standard vehicle dynamics tests have been performed at skidpad facilities [40–42]. The experiment performed for this research effort involves one of the fundamental tests: driving at ever-increasing speed while attempting to maintain a given radius.
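A minimal sketch of these two elements, the bilinear tire and a fixed-forward-speed Bicycle Model stepped with an explicit Euler integrator at 60 Hz, is shown below. The parameter values and function names are illustrative assumptions; the actual simulator's state set and integrator may differ.

```cpp
#include <cassert>
#include <cmath>

// Bilinear tire (Figure 10): lateral force is linear in slip angle up to a
// "breakaway" angle and constant beyond it. All values are illustrative.
double tireLateralForce(double slipAngle, double corneringStiffness,
                        double breakawayAngle) {
    if (std::fabs(slipAngle) < breakawayAngle)
        return -corneringStiffness * slipAngle;
    return -corneringStiffness * breakawayAngle * (slipAngle > 0 ? 1.0 : -1.0);
}

// Constant-forward-speed Bicycle Model (Figure 9): states are lateral
// velocity v and yaw rate r; u is forward speed, delta the front steer
// angle, a/b the distances from the CG to the front/rear axles.
struct BikeState { double v = 0.0, r = 0.0; };
struct BikeParams {
    double m = 1500.0, Iz = 2500.0;     // mass (kg), yaw inertia (kg*m^2)
    double a = 1.2, b = 1.4;            // a < b puts the CG ahead of midpoint
    double Cf = 80000.0, Cr = 80000.0;  // cornering stiffnesses (N/rad)
    double alphaBreak = 0.1;            // breakaway slip angle (rad)
};

// One explicit-Euler step at the simulation rate (60 Hz by default).
BikeState step(BikeState s, double u, double delta, const BikeParams& p,
               double dt = 1.0 / 60.0) {
    double alphaF = (s.v + p.a * s.r) / u - delta;  // front slip angle
    double alphaR = (s.v - p.b * s.r) / u;          // rear slip angle
    double Fyf = tireLateralForce(alphaF, p.Cf, p.alphaBreak);
    double Fyr = tireLateralForce(alphaR, p.Cr, p.alphaBreak);
    double vdot = (Fyf + Fyr) / p.m - u * s.r;      // m(vdot + u*r) = Fyf + Fyr
    double rdot = (p.a * Fyf - p.b * Fyr) / p.Iz;   // Iz*rdot = a*Fyf - b*Fyr
    return {s.v + dt * vdot, s.r + dt * rdot};
}
```

With the CG ahead of the midpoint (a < b) and equal tires, a small constant steer input settles to a steady yaw rate, matching the stable behavior exercised in the tasks that follow.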

Figure 9: The bicycle model of the automobile.

Figure 10: The bilinear tire model, shown for turning in one direction.

Task Ordering: Tasks 1 and 2

In the first two driving tasks, the driver is given a vehicle with equal strength (i.e., identical cornering stiffness and breakaway slip angle) tires on the front and rear. The first task places the center of gravity (CG) ahead of the vehicle midpoint, while the second task places it behind the vehicle midpoint. The student drives around the skidpad at slowly increasing speed, up to (and beyond) the tire saturation points. Students are asked to describe how the vehicle felt, how stable it was, and how their steering input changed as speed increased. They are also asked to describe what happened when a tire (front or rear) saturated. These tasks expose the students to vehicles with different understeer gradients, stability indices, yaw rate responses, and limit behaviors. In Task 1, the CG is ahead of the vehicle midpoint, resulting in a vehicle with static and dynamic stability. In short, the vehicle does not spin out while cornering. As vehicle speed increases, the amount of steering required to stay on the constant radius circle also increases, indicative of an understeer vehicle. This is also consistent with passenger car behavior (i.e., passenger cars are designed to understeer). At the limit, the front tires of the Task 1 vehicle saturate, forcing the driver to slow down in order to tighten the turn.

In Task 2, the CG is behind the vehicle midpoint, resulting in a vehicle with static instability but with dynamic stability up to a certain speed. Beyond a certain speed the vehicle becomes unstable and spins out. Unlike the understeer car of Task 1, this oversteer car requires less and less steering as speed increases in order to stay on the circle. At the limit, the rear tires of the Task 2 vehicle saturate, resulting in a spin-out. Tasks 1 and 2 allowed the driver to become familiar with the motion simulator and experience the behavior of two very different vehicles. Task 3 built upon these experiences.

Task 3

In the third task the driver is given a vehicle with the CG at the vehicle midpoint and chooses tires that have the cornering stiffness distribution biased to either the front or the rear. Based on that decision, the driver is asked to predict whether the vehicle will behave like the vehicle of Task 1 or the vehicle of Task 2. The prediction is not confirmed before the trial run; instead, the student is asked to indicate as soon as possible while driving which of the previous cars it felt like. This task enables students to develop hypotheses about the relative location of the CG and the cornering stiffness distribution, test the hypothesis, and draw conclusions. In short, if the CG is forward of the cornering stiffness distribution the vehicle has understeer, is stable, and behaves similarly to the vehicle in Task 1. When the CG is aft of the cornering stiffness distribution the vehicle has oversteer, is unstable above a certain speed, and behaves similarly to the vehicle in Task 2.

Task 4

The fourth task challenged the driver to optimize the CG location while driving to achieve the fastest possible speed around the skidpad. Students used two buttons on the steering wheel to move the center of gravity and felt the stability and response of the vehicle change as they moved the CG. The cornering stiffness distribution is unknown to the students in this case. This task allows students to further develop and test their hypotheses about the relationship between vehicle parameters.
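The front/rear balance rule that the tasks are designed to reveal can be summarized quantitatively with the Bicycle Model's understeer gradient. The sketch below (with illustrative parameter values, not the course vehicle's) computes the gradient and, for an oversteer configuration, the critical speed above which the vehicle is unstable.

```cpp
#include <cassert>
#include <cmath>

// Bicycle Model understeer gradient K in rad per (m/s^2) of lateral
// acceleration: K = (m/L) * (b/Cf - a/Cr). K > 0 is understeer (stable at
// all speeds); K < 0 is oversteer, with a critical speed
// Vcrit = sqrt(-L/K) above which the vehicle is unstable.
double understeerGradient(double m, double a, double b,
                          double Cf, double Cr) {
    double L = a + b;
    return (m / L) * (b / Cf - a / Cr);
}

double criticalSpeed(double m, double a, double b, double Cf, double Cr) {
    double K = understeerGradient(m, a, b, Cf, Cr);
    if (K >= 0.0) return INFINITY;     // understeer/neutral: no finite limit
    return std::sqrt(-(a + b) / K);    // oversteer: finite critical speed
}
```

With equal tires, moving the CG aft (a > b) makes K negative and yields a finite critical speed, consistent with the Task 2 spin-outs and the Task 3 hypothesis about CG location versus cornering stiffness distribution.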

Session Implementation

A total of 73 students participated in the experiments, separated into 11 groups. Each group spent an hour with the motion simulation system performing the four specific driving tasks. At the end of the experiment, each group received a copy of the data collected during all the runs for their analysis.

Assessment

The assessment phase focused not only on how well the students learned the key concepts in each experiment, but also on how their gaming experiences impacted their education and learning opportunities.

Learning of Key Concepts

In Tasks 1 and 2, students were asked while driving if they were steering more or less as speed increased, and all were able to notice the trends. Students noted in Task 2 how much more concentration and how many more steering corrections it took to keep this vehicle on the circle. Students also noted how the vehicle cornered more “tail out” than the Task 1 vehicle—this is an observation that would have been difficult to make without the motion cues on the motion platform or without the use of a real vehicle. After the experiment, students analyzed their results, including a plot of steering wheel angle versus lateral acceleration. A sample of one of the student plots is shown in Figure 11 for the Task 1 vehicle. The slope of this curve is the understeer gradient, and the intercept is the Ackermann steer angle [42]. The “noise” in the data is a result of the student continually correcting the vehicle path with small steering adjustments, which is unavoidable when speed is varying. The excursion at the right end of the data shows what happens when the front tires saturate. A similar plot was produced for the Task 2 vehicle, as shown in Figure 12. Here, the slope of the steering angle versus lateral acceleration curve is negative, indicating an oversteer vehicle. Oversteer vehicles are dynamically stable up to a certain critical speed due to yaw damping, above which they spin out. Note that at a lateral acceleration of approximately 25 ft/sec² the amount of steering required to stay on the circle is nearly zero. At the far left of the diagram the steering trace shoots upward sharply—this is a result of the driver trying to catch the vehicle as it spins out.

In Task 3, when the students were asked to announce as soon as possible which vehicle (Task 1 or Task 2) the car felt like, they were not able to do so at low speeds. It was not until speeds increased above approximately 35 mph that the differences became apparent. Two factors help explain this. First is the overwhelming presence of stabilizing yaw damping at low speeds. Since both cars were very stable at low speeds, their responses felt similar. It was not until speed increased and yaw damping diminished that the differences in behavior were easily detected, at which point the students identified them quickly. Second, it was difficult for the students to judge whether the required steering was increasing or decreasing with speed when speeds were low. These kinds of experiential-based insights would have been impossible without the gaming experiment. In Task 4, by the end of the experiment, every group was able to empirically place the CG within 1-2% of the theoretical optimum. Students refined their search strategies as the experiment proceeded. By the second and third drivers, the students realized that they needed to approach the optimum from the stable side (i.e., CG too far forward) to avoid spin-outs. They would note the CG location when a spin-out did occur, and they made sure to keep the CG ahead of that point during further adjustments.

Figure 11: Student data for a Task 1 vehicle showing the understeer gradient.
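Extracting the understeer gradient and Ackermann steer angle from a logged run, as in Figure 11, amounts to a least-squares line fit of steering angle against lateral acceleration. A minimal sketch follows; the names are illustrative and the data used in the test are synthetic, not student data.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Least-squares line fit to logged (lateral acceleration, steering angle)
// pairs: the slope estimates the understeer gradient and the intercept the
// Ackermann steer angle. Illustrative sketch, not the course's analysis code.
struct LineFit { double slope, intercept; };

LineFit fitSteeringData(const std::vector<double>& ay,
                        const std::vector<double>& steer) {
    double n = static_cast<double>(ay.size());
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (std::size_t i = 0; i < ay.size(); ++i) {
        sx += ay[i];
        sy += steer[i];
        sxx += ay[i] * ay[i];
        sxy += ay[i] * steer[i];
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    return {slope, (sy - slope * sx) / n};
}
```

A positive fitted slope indicates understeer (Figure 11); a negative slope indicates oversteer (Figure 12).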

Figure 12: Student data for the Task 2 vehicle.

Impact on Education

A survey was administered to the students in the RVD1 class at the beginning (pretest) and then again at the end (posttest) of the semester. The survey contained the open-ended question: “How could engineering education be improved?” An outside evaluator read and coded the student responses, placing them in the categories shown in Table 1. Student responses commonly fell into categories that suggested the need for the environment that the RVD1 class was trying to provide, as shown in Table 1. While it would be speculation, it is interesting to note how the number of responses in items 2 and 3 dropped. Was that due in part to the experiences the students had in the RVD1 course? Similarly, it is interesting to note how the number of responses to item 6 increased. Again, was that due in part to the students’ perceived value of RVD1, an elective course? Further investigation is needed to answer these questions. Also, specific comments to the question, when posed at the end of the semester, seem to highlight the benefit of the incorporation of the simulation game into RVD1. It is also worth noting that 43 of the 73 people (59%) enrolled in the Road Vehicle Dynamics 1 course in the fall 2007 semester enrolled in the Road Vehicle Dynamics 2 course (which contained the advanced gaming module). Considering that both are elective courses in degree programs that allow for just 2-3 electives, this continuation rate speaks to the student response to the courses. In addition, a number of the 30 students who did not enroll in the sequel class were graduating seniors and therefore were not eligible to enroll in the course.

Table 1: Pre- and posttest open responses to the question “How could engineering education be improved?”

Pretest: 66 students responded*. Posttest: 60 students responded*.

(1) More hands-on experiences: pretest 17 (26%), posttest 18 (30%)
(2) More practical/authentic/realistic experiences: pretest 13 (20%), posttest 8 (13%)
(3) More experience with what is done in industry, including the technology and equipment currently used in industry: pretest 10 (15%), posttest 4 (7%)
(4) Required internships/more internships: pretest 7 (11%), posttest 11 (18%)
(5) More group/interactive experiences: pretest 4 (6%), posttest 2 (3%)
(6) More electives offered earlier: pretest 2 (3%), posttest 7 (12%)
(7) More field trips/visits to companies and facilities: pretest 1 (2%), posttest 3 (5%)

Experiment Redesign

Based on the feedback of the participants, instructors, and technical support personnel, a new set of experiments was designed for the following course, Road Vehicle Dynamics 2. This included more advanced simulation models, gaming environments, and instructional tasks.

Experiment 2: High-Fidelity Simulation, Advanced Gaming Setting

The second experiment is also presented in the context of the overall methodology, as presented in Section 4.2.

Required Learning Outcome

The learning objective of the second experiment is for students to discover, through both their first-hand experience and a postprocessing of the data they generated, the fundamental dynamics and impact of the roll stiffness distribution, roll center heights, friction ellipse effects, weight transfer, and dropped-throttle oversteer. In addition, a secondary objective is for students to understand g-g diagrams and moment method (CN-AY) diagrams using the data generated during the experiment.

Simulation Activities

There were two major additions to the vehicle model for Experiment 2. The first is a new tire model. Unlike the bilinear tire model of Experiment 1, Experiment 2 uses a nonlinear and load-sensitive tire model. This model was based on tire data collected at the Calspan Tire Research Facility [43] for the FSAE Tire Test Consortium [44] and subsequently modified to suit the specific vehicle application. It is in the form of a Nondimensional Tire Model [42] and represents tire lateral force as a function of vehicle slip angle and normal load. The second addition involved the transition from a Bicycle Model to a full four-wheel vehicle model with lateral and longitudinal load transfer based on vehicle lateral and longitudinal accelerations. This calculates the wheel loads at any given vehicle operating condition so that the load-sensitive tire model can be evaluated at each wheel. The load transfer calculation, which depends on the roll stiffness distribution and front/rear roll center heights [42], allows additional ways to change vehicle handling characteristics over Experiment 1. While the resulting model is still very simple compared to comprehensive vehicle simulations, it is the next step towards such models and makes for a very useful educational tool.
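A simplified sketch of the quasi-static load transfer calculation is shown below. It splits the lateral transfer between axles by a single roll stiffness distribution fraction and omits the roll center height terms of the actual model; all parameter values and names are illustrative assumptions.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Simplified quasi-static wheel-load calculation for the four-wheel model:
// longitudinal transfer m*ax*h/L between axles, lateral transfer m*ay*h/t
// split between axles by the roll stiffness distribution. Roll center
// height effects of the actual model are omitted, and all parameter
// values are illustrative assumptions.
struct CarParams {
    double m = 1500.0, g = 9.81;  // mass (kg), gravity (m/s^2)
    double a = 1.2, b = 1.4;      // CG to front/rear axle (m)
    double h = 0.5;               // CG height (m)
    double t = 1.5;               // track width (m)
    double rollDist = 0.55;       // fraction of roll stiffness on the front
};

// Returns {FL, FR, RL, RR} normal loads (N); ax > 0 is acceleration,
// ay > 0 transfers load to the right-hand wheels.
std::array<double, 4> wheelLoads(const CarParams& p, double ax, double ay) {
    double L = p.a + p.b;
    double Wf = p.m * p.g * p.b / L / 2.0;    // static per-wheel load, front
    double Wr = p.m * p.g * p.a / L / 2.0;    // static per-wheel load, rear
    double dLong = p.m * ax * p.h / L / 2.0;  // per-wheel longitudinal shift
    double dLatF = p.m * ay * p.h / p.t * p.rollDist;          // front axle
    double dLatR = p.m * ay * p.h / p.t * (1.0 - p.rollDist);  // rear axle
    return {Wf - dLong - dLatF, Wf - dLong + dLatF,
            Wr + dLong - dLatR, Wr + dLong + dLatR};
}
```

The resulting per-wheel loads are what a load-sensitive tire model would consume: shifting the roll stiffness distribution forward increases the front lateral transfer, which reduces the front axle's total lateral grip and pushes the balance toward understeer.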

Task Ordering: Task 1

In the first task, the student revisits the skidpad of Experiment 1 (Section 5.1). Each student drives a baseline vehicle at ever-increasing speeds up to the limit, all the while being conscious of how much steering is required to stay on the circular skidpad. Then, one of the parameters (roll stiffness distribution, front roll center height, or rear roll center height) is set to a new value and the student is asked to describe the effect of this change. Task 1 also introduced students to the ability to change the values of the vehicle parameters while driving. The steering wheel contained three thumb-activated buttons on each side of the wheel, one each for roll stiffness distribution, front roll center height, and rear roll center height. Once the student had completed the understeer gradient part of the task, he/she was asked to adjust the parameter to experience its effects on vehicle handling, and then to determine an optimum parameter value to achieve the highest speed on the skidpad.

Tasks 2 and 3

Tasks 2 and 3 presented a new driving world to the students, named the "Tri-Radial Speedway". The name arises from the design of the racetrack in the simulation: three corners, each with a different radius, plus one long straightaway (the layout of the track is shown in Figure 13). In Task 2, the students were given a baseline car and told to drive around the track as fast as possible, which achieved two purposes. First, it allowed students to learn the track and identify reference points for braking, turning, and so on. Second, it acclimated drivers to the vehicle's performance. Driving at the limit on a skidpad is different from driving at the limit on a racecourse: on the skidpad the vehicle is always at approximately constant speed, while on the racetrack the vehicle speed is changing. Under braking there is load transfer to the front of the vehicle, which is destabilizing and can result in a less controllable vehicle.

Figure 13: Student vehicle paths plotted on the Tri-Radial speedway.

In Task 3, students were asked to vary one of the three vehicle parameters while driving to achieve a better lap time than they could with the baseline vehicle. Here students had to consider the tradeoffs in setting up a vehicle. Figure 13 shows the paths driven by approximately 25 drivers, including off-course excursions. The direction of travel is counterclockwise. While the track is not complicated, the large number of off-course paths indicates how demanding driving at the limit is; nevertheless, most students were soon able to drive quickly, if not necessarily expertly, on the course. With so many students being adept at video games, the transition to the simulation in this kind of learning environment was faster than expected.


Many students learned about the vehicle's instabilities at varying speeds: some braked or turned in too hard and spun themselves out. They also experienced lateral force rolloff, the reduction in lateral force in the presence of large tractive or braking forces. This was evident at the entrance to the straightaway, where drivers who applied heavy throttle too early lost lateral grip. In the Task 3 assessment, students created their own g-g and CN-AY diagrams [39]. Figure 14 shows a g-g diagram for a smooth driver over four laps on the course. This plot of planar vehicle accelerations illustrates how much time the vehicle spends at the lateral acceleration limits for this vehicle (approx. 0.9 g). Figure 15 presents a CN-AY diagram for the same driver. This diagram presents yaw acceleration versus lateral acceleration, and a smooth driver will have very small values on the y-axis. Figures 16 and 17 present the corresponding diagrams for a driver who was less smooth over the course of four laps; this data includes one spinout. Compared with the less-smooth driver's results, the smooth driver's data is more in line with what a professional proving ground driver or race driver would produce from in-car measurements.

Figure 14: g-g diagram for a smooth driver over four laps. Note the repeatability.


Figure 15: CN-AY diagram for a smooth driver. Note small yaw moment values.

Figure 16: g-g diagram for a less-smooth driver.


Figure 17: CN-AY for a less-smooth driver includes one spinout.
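The g-g and CN-AY postprocessing the students performed can be illustrated with a short sketch (Python here rather than the course's actual tools; the log layout and the finite-difference estimate of yaw acceleration are assumptions):

```python
import math

G = 9.81  # m/s^2

def gg_points(ax, ay):
    """Convert logged planar accelerations [m/s^2] to g-g diagram points [g]."""
    return [(x / G, y / G) for x, y in zip(ax, ay)]

def cn_ay_points(yaw_rate, ay, dt):
    """CN-AY style points: yaw acceleration, finite-differenced from the
    logged yaw rate, paired with lateral acceleration in g."""
    yaw_acc = [(r1 - r0) / dt for r0, r1 in zip(yaw_rate, yaw_rate[1:])]
    return list(zip([y / G for y in ay[1:]], yaw_acc))

def rms(values):
    """A smooth driver keeps the RMS of yaw acceleration small."""
    return math.sqrt(sum(v * v for v in values) / len(values))
```

A smooth driver's CN-AY cloud hugs the horizontal axis (small RMS yaw acceleration), while a spinout shows up as a large excursion, as in Figure 17.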

Impact on Education

A more comprehensive survey was given at the end of the RVD2 class. The survey results shown in Table 2 reflect the mean responses of the 41 students who completed the course (2 students withdrew). The items in Table 2 capture the responses that most aligned with the objectives of the gaming laboratory experience; the omitted items relate more to the teaching methodology employed in the course. The questions used a 5-point Likert scale, with 1 representing "strongly disagree" and 5 representing "strongly agree". An ANOVA was run for each item, and statistical significance between means, as denoted in the table, was found for all of the survey items. These responses provide evidence that students perceived the RVD2 course (and the RVD1 course), including the laboratory gaming component, to be of significant value in their engineering education. In addition, the survey included open-ended items, one of which asked students why they had enrolled in RVD2. All 41 students responded to this item, some with multiple responses. They are categorized and compiled in Figure 18; many of the responses mention experiences in RVD1, which could certainly include the gaming experiences that were part of that class.


An additional item on the survey asked the RVD2 students if they would recommend either RVD1 or RVD2 to other students. All 41 RVD2 students said that they would. Figure 19 compiles the reasons given for this recommendation. Note that the second most popular response articulates the hands-on gaming experience in both classes.

Table 2: RVD ratings survey results

Survey item                                                               RVD1 & RVD2 have    Other engineering courses have
Exposed me to genuine engineering problems**                              4.439               3.195
Allowed for hands-on learning experiences**                               4.512               2.902
Prepared me for the workplace**                                           3.951               3.097
Allowed me to use the types of technology and facilities
  that engineers use in today's workforce**                               3.878               2.951
Offered me a chance to identify and formulate engineering problems**      4.220               3.610
Provided engaging learning opportunities**                                4.537               3.293
Made use of problems and situations similar to those that I
  expect to face in the workplace**                                       3.854               3.049
Helped me be more familiar with what a practicing engineer does**         3.829               3.073
Given me opportunities to perform experiments in engineering**            4.268               3.244
Given me opportunities to analyze engineering data**                      4.732               3.537
Given me opportunities to interpret engineering data**                    4.707               3.463
Presented new ideas and material in a realistic context**                 4.463               3.341

** Statistically significant difference between means (ANOVA).
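The per-item significance tests behind the ** markers can be reproduced with a standard one-way ANOVA, which for two groups reduces to an F-test. A minimal sketch in Python; the raw per-student responses were not published, so any response vectors supplied to this function are necessarily illustrative:

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA across the given groups of responses.

    F = (between-group mean square) / (within-group mean square),
    with k - 1 and n - k degrees of freedom for k groups, n responses.
    """
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

In this setting the two groups would be the 41 Likert ratings for RVD1 & RVD2 and the corresponding ratings for other engineering courses, with the resulting F compared against the critical value for (1, n - 2) degrees of freedom.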


Figure 18: RVD2 responses to the question “Why did you take RVD2?”.

Figure 19: Reasons to take RVD1 and RVD2.


Experiment Redesign

Section 6 addresses some overall insights and conclusions from the collective set of experiments. It also presents some ideas toward future redesigns of the experiments, based on the collected feedback.

GENERAL INSIGHTS AND CONCLUSIONS

This paper presents a method of using a simulation game to present students with an authentic vehicle testing scenario. The driving simulation environment is used to augment two standard road vehicle dynamics course offerings. In all, 73 students participated in the low-end gaming tasks, and 41 of these students continued with the second course and participated in the high-end gaming tasks. Based on the experimental results from performing these tasks, on in-laboratory observations (before, during, and after the experiments), and on the student surveys (before and after the laboratory experiments), numerous conclusions can be drawn.

(i) The game scenarios were successful in attaining their primary goal: to serve as a forum for experiential, inquiry-based learning within an educational setting that had previously been instructed exclusively through traditional lectures. Students were able to see and experience the dynamics of a vehicle, hands-on, using the motion simulator, and this exposure was followed by traditional instruction (i.e., representative mathematical theory and governing dynamics equations).

(ii) The instructor noted a difference between the students who engaged in the gaming experiments and those from previous nongaming offerings of the courses, observing greater progress in the students' comprehension of theoretical concepts and in their application of these concepts to practical vehicle design issues. In previous years, a few in-class computer simulations with plots of output variables were used to illustrate different vehicle behaviors. With the use of the motion simulation experiment, the connection between theory and reality was easier to make, since the students had experienced the various vehicle behaviors themselves. Interestingly, because of the realistic context that the experiments provided, the instructor also noted a pronounced familiarity with the technical vocabulary of the course after the experiments were conducted.

(iii) For each of the laboratory groups, the laboratory instructors could easily detect the progression of knowledge and lesson comprehension during the experiments. The first driver in each group typically proceeded with a fair amount of apprehension and guesswork, as that student served as the group's first test case. Having observed the first driver's shortcomings, the second and third drivers would conquer the exercises more readily.

(iv) The simulation allowed vehicle setups to be changed in seconds; in 19 groups across two semesters, almost 200 experimental setups were performed. Using a physical vehicle and a test track or road course, the setup time would have dominated the experimental process, limiting the students' exposure to the relationships between setup parameters and the resulting vehicle dynamics.

(v) The results from the course surveys show that the course has a considerable impact on how students felt about their engineering education and their perceived experiences in their educational process. The use of experiential learning in the vehicle dynamics curriculum increased students' opinions of their opportunity to have hands-on experiences, use modern engineering tools, and solve problems similar to those they expect to see in the workplace. This outcome shows that using simulation to provide authentic learning environments gives educators a means of following the guidance provided by ABET and the National Survey of Student Engagement, with the ultimate goal of educating engineers who are better prepared for the workforce.

(vi) Further development and study will include more experiments aimed at using gaming environments to learn key technical concepts, including fundamental vehicle dynamics, driver-vehicle interactions, and driver-to-driver interactions in networked simulations. The networking feature will allow multiple drivers to interact with one another within the same driving environment using both TCP and UDP internet protocols. In addition, future plans could include developing a computing toolkit to allow other researchers and educators to more easily create their own gaming environments and experiments, similar to the more general serious game design and assessment toolkit in [45]. This toolkit could also be applied to a desktop version of the driving simulation environment, allowing for greater dissemination and study.

ACKNOWLEDGMENTS

The work described in this paper is supported in part by the National Science Foundation Course, Curriculum and Laboratory Improvement (CCLI) program (Grant DUE-0633596) and the New York State Foundation for Science, Technology, and Innovation (NYSTAR).


REFERENCES

1. Sony Computer Entertainment, Gran Turismo, September 2008, http://us.playstation.com/granturismo/.
2. Atari, Race Pro, September 2008, http://videogames.atari.com/racepro/.
3. P. Wankat and F. Oreovicz, "Learning outside the classroom," ASEE Prism, vol. 10, no. 5, p. 32, 2001.
4. P. Wankat and F. Oreovicz, "An over-stuffed curriculum," ASEE Prism, vol. 11, no. 2, p. 40, 2001.
5. P. Wankat and F. Oreovicz, "Getting out of your box," ASEE Prism, vol. 13, no. 3, p. 49, 2003.
6. P. Wankat and F. Oreovicz, "Simulators to stimulate students," ASEE Prism, vol. 13, no. 5, p. 45, 2004.
7. President's Information Technology Advisory Committee, "Using information technology to transform the way we learn," Tech. Rep., National Coordination Office for Information Technology Research and Development, Arlington, Va, USA, 2001.
8. The Accreditation Board for Engineering and Technology, "Criteria for accrediting programs in engineering in the United States," Tech. Rep., Accreditation Board for Engineering and Technology, Baltimore, Md, USA, 2000.
9. "National survey of student engagement: the college student report," Annual Report, Center for Postsecondary Research, Indiana University, Bloomington, Ind, USA, 2003.
10. D. Dixon, "Manifest Technology, Making Sense of Digital Media Technology—Simulation-based Authoring for Serious Games," March 2005, http://www.manifest-tech.com/ce_games/sovoz_serious_games.htm#References.
11. S. Lane, "Promoting Learning by Doing Through Simulations and Games," soVoz, Inc., White Paper, Princeton, NJ, USA, 2005.
12. M. Prensky, Digital Game-Based Learning, McGraw-Hill, Boston, Mass, USA, 2001.
13. V. Ruohomaki, "Viewpoints on learning and education with simulation games," in Simulation Games and Learning in Production Management, J. O. Riis, Ed., pp. 14–28, Springer, Berlin, Germany, 1995.
14. M. Matijasevic, "A review of networked multi-user virtual environments," Tech. Rep. TR97-8-1, The Center for Advanced Computer Studies, Virtual Reality and Multimedia Laboratory, The University of Southwestern Louisiana, Lafayette, La, USA, 1997.
15. }'   !?¨‘!` ‡——‰! >˜žž"ž
16. Battlezone, original game by Atari Inc., Consumer Division, 1312 Crossman Ave., P.O. Box 61657, Sunnyvale, Calif, USA 94086, 1980.
17. B. J. Schachter, Ed., Computer Image Generation, John Wiley & Sons, New York, NY, USA, 1983.
18. W. Wright, "SimCity," March 2008, http://simcitysocieties.ea.com/index.php.
19. C. Sawyer, "Roller Coaster Tycoon," March 2008, http://www.atari.com/rollercoastertycoon/us/.
20. J. P. Merlet, "Parallel manipulators—part I: theory, design, kinematics, dynamics and control," Tech. Rep. 646, INRIA, Cedex, France, 1987.
21. D. Stewart, "A platform with six degrees of freedom," Proceedings of the Institution of Mechanical Engineers, vol. 180, no. 15, pp. 371–384, 1965.
22. E. F. Fichter, "A Stewart platform-based manipulator: general theory and practical construction," The International Journal of Robotics Research, vol. 5, no. 2, pp. 157–182, 1986.
23. C. Aldrich, Learning by Doing: A Comprehensive Guide to Simulations, Computer Games, and Pedagogy in E-Learning and Other Educational Experiences, John Wiley & Sons, San Francisco, Calif, USA, 2005.
24. S. de Freitas, "Learning in immersive worlds: a review of game-based learning," Joint Information Systems Committee, December 2007, http://www.jisc.ac.uk/media/documents/programmes/elearninginnovation/gamingreport_v3.pdf.
25. B. Stroustrup, The C++ Programming Language, Addison-Wesley, Reading, Mass, USA, 1987.
26. J. Postel, "User datagram protocol," DARPA Network Working Group Report RFC 768, USC/Information Sciences Institute, Los Angeles, Calif, USA, 1980.
27. Moog Systems Division, "Moog 6 DOF 2000E Motion System," Moog Inc., East Aurora, NY, USA, 1999.
28. "DirectInput C/C++ Reference," Microsoft Corporation, 2007, http://msdn.microsoft.com/en-us/library/bb219807(VS.85).aspx.
29. J. Brooks, "Direct input 7 joystick class," Technical Article, CodeGuru, September 2008, http://www.developer.com/.
30. R. Roman, "Non-linear optimal tilt coordination for washout algorithms," in Proceedings of the AIAA Modeling and Simulation Technologies Conference and Exhibit, Austin, Tex, USA, August 2003.
31. M. A. Nahon and L. D. Reid, "Simulator motion-drive algorithms—a designer's perspective," AIAA Journal of Guidance, Control, and Dynamics, vol. 13, no. 2, pp. 356–362, 1990.
32. R. J. Telban, W. Wu, and F. M. Cardullo, "Motion cueing algorithm development: initial investigation and redesign of the algorithms," Tech. Rep. NASA/CR-2000-209863, National Aeronautics and Space Administration, Langley Research Center, Washington, DC, USA, 2000.
33. M. Woo, J. Neider, T. Davis, and D. Shreiner, OpenGL Programming Guide, Addison-Wesley, Reading, Mass, USA, 3rd edition, 2000.
34. D. Burns and R. Osfield, "Open Scene Graph A: introduction, B: examples and applications," in Proceedings of the IEEE Virtual Reality Conference (VR '04), p. 265, Chicago, Ill, USA, March 2004.
35. Delta3D, Naval Postgraduate School, MOVES Institute, September 2008, http://www.delta3d.org/.
36. A. Geist, A. Beguelin, J. Dongarra, J. Weicheng, R. Manchek, and V. Sunderam, PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Network Parallel Computing, The MIT Press, Cambridge, Mass, USA, 1994.
37. R. Yadav, "Client/Server programming with TCP/IP sockets," Technical Article, DevMentor, September 2007, http://www.devmentor.org/.
38. OpenAL 1.1 Specification and Reference, 2005.
39. W. F. Milliken and D. L. Milliken, Race Car Vehicle Dynamics, SAE International, Warrendale, Pa, USA, 1995.
40. SAE Vehicle Dynamics Standards Committee, "Steady-state directional control test procedures for passenger cars and light trucks," SAE Standard J266, SAE International, Warrendale, Pa, USA, 1996.
41. P. van Valkenburg, Race Car Engineering and Mechanics, HP Trade, New York, NY, USA, 2004.
42. W. Milliken and D. Milliken, Chassis Design: Principles and Analysis, SAE International, Warrendale, Pa, USA, 2002.
43. Calspan, "Tire Research Facility (TIRF)," December 2008, http://www.calspan.com/pdfs/Tire_Research.pdf.
44. E. Kasprzak and D. Gentz, "The Formula SAE tire test consortium—tire testing and data handling," SAE Paper 2006-01-3606, Society of Automotive Engineers, Warrendale, Pa, USA, 2006.
45. R. J. Nadolski, H. G. K. Hummel, H. J. van den Brink et al., "EMERGO: a methodology and toolkit for developing serious games in higher education," Simulation & Gaming, vol. 39, no. 3, pp. 338–352, 2008.

CHAPTER 4

Development of a Driving Simulator with Analyzing Driver’s Characteristics Based on a Virtual Reality Head Mounted Display Seyyed Meisam Taheri, Kojiro Matsushita, Minoru Sasaki Department of Human and Information Systems Engineering, Gifu University, Gifu, Japan.

ABSTRACT

Driving a vehicle is one of the most common daily yet hazardous tasks. One of the great interests in recent research is to characterize a driver's behaviors through the use of a driving simulation. Virtual reality technology is now a promising alternative to conventional driving simulations, since it provides a simpler, more secure and user-friendly environment for data collection. The driving simulator was used to assist novice drivers in learning how to drive in a very calm environment, since the driving is not taking place on an actual road. This paper provides new insights regarding a driver's behavior, techniques and adaptability within a driving simulation using virtual reality technology. The theoretical framework of this driving simulation was designed using the Unity3D game engine (version 5.4.0f3) and programmed in the C# programming language. To make the driving simulation environment more realistic, the HTC Vive virtual reality headset, powered by SteamVR, was used. Ten volunteers ranging in age from 19 to 37 participated in the virtual reality driving experiment. Matlab R2016b was used to analyze the data obtained from the experiment. The results of this research are crucial for training drivers and for obtaining insight on a driver's behavior and characteristics. We have gathered diverse results for 10 drivers with different characteristics, to be discussed in this study. Driving simulations are not easy for some users due to motion sickness and difficulties in adapting to a virtual environment. Furthermore, the results of this study clearly show that the performance of drivers is closely associated with the individual's behavior and adaptability to the driving simulator. Based on our findings, a VR-HMD (Virtual Reality Head-Mounted Display) driving simulator enables us to evaluate a driver's "performance errors", "recognition errors" and "decision errors", all of which will allow researchers and further studies to potentially establish methods to increase driver safety or alleviate "driving errors".

Keywords: Driving Simulator, Virtual Reality, Head Mounted Display, Driver Behavior, Safety, Driving Errors, Unity3D

Citation: Taheri, S., Matsushita, K. and Sasaki, M. (2017) Development of a Driving Simulator with Analyzing Driver's Characteristics Based on a Virtual Reality Head Mounted Display. Journal of Transportation Technologies, 7, 351-366. doi: 10.4236/jtts.2017.73023. Copyright © 2017 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0

INTRODUCTION

Driving simulation has become an important tool in the automotive industry. Researchers can provide an environment that is safer for drivers, because a driving simulation using VR poses no physical obstacles or potential for harm [1]. The way people look around while driving, and how they control the vehicle (speed control, braking and lane changing), can manifest as driver behavior errors [2]. Recognition errors, decision-making errors and performance errors, in particular, are three of the most common. Driver error has been identified as an important factor in road traffic accidents. For instance, if a driver accelerates instead of braking, it is considered an error. It is also considered an error if a driver performs an incorrect action, intentionally or unintentionally, such as passing a yellow light when they could stop, or braking hard on an icy or slippery road. Most errors occur when someone drives at high speed in a situation that requires quick decision making within a small window of time. According to McLean, there are four types of collisions occurring at intersections: right-angle, right-turn-against, rear-end, and crashes involving pedestrians [3]. Driving a vehicle is one of the most common daily yet hazardous tasks. Studies have also shown that the accident rate among senior drivers in Japan has been increasing, as Japan is a rapidly aging society [4]. There are many factors that cause driver errors, such as weak eyesight, visual observation problems, physical condition, fatigue, etc. Each driver on the road has different characteristics and behaviors, and analyzing these factors through driving simulation has become of great interest in recent research. Novel virtual reality technology is now capable of helping researchers gather data more easily and precisely. This technology provides an easy, secure and user-friendly environment for data collection [1]. Driving simulators are meant to assist novice drivers in learning how to drive in a very safe environment. In our study, we provide new insights regarding driver behavior, techniques and adaptability in a driving simulation using virtual reality technology.
We also measured driver tendencies (head movements), speed control (accelerating and braking), position of the vehicle, control of the vehicle (steering wheel rotation), and gaze information (the eye movements of the user are tracked and recorded). The principal tasks examined during the simulation included: adjusting speed when approaching the bridge, turning and passing intersections, keeping left and driving between the right lanes, and controlling the car when an accident occurs or when other cars on the road suddenly stop. In summary, the theoretical framework of this driving simulation was designed using the Unity3D game engine and programmed in the C# programming language.
Previously, research has been done with the use of traditional monitors and projectors. In past research, three monitors were used to simulate the frontal view from inside a car: one main screen in the middle represents the view out of the windshield, while two other monitors to the left and right of the center monitor represent the left and right window views. This method has been used for a decade. For the sake of creating a more realistic and immersive environment during a simulation, researchers have tried to provide users with a 3D environment, but one still presented on flat 2D displays. Such a system is unable to completely mimic a true 3D environment, which often makes data gathering imprecise. Conventional research usually concentrates on analyzing driver behavior based on acceleration, braking and steering wheel data, and visual data, but the traditional method limits what can be observed, which prevents researchers from gathering precise data. In our study, to make the driving environment more realistic, the HTC Vive headset was used, providing our volunteer drivers with a wide 3D view of the virtual world around them. By using an HMD (head-mounted display) we could gather and analyze behavior data for each driver; the FOVE HMD was used to gather behavior and gaze data for each driver while they were driving in the simulation. Moreover, the Unity engine gives us the freedom to design new scenarios, and we can modify and develop code at any time during the experiment. Unity also updates itself, through the support of its creators, to remain compatible with new hardware such as HMDs. Volunteers ranging in age from 19 to 37 participated in this virtual reality driving experiment. Three different driving courses (routes) on a single city map were assigned to each driver. The routes were determined by us in advance and explained to each volunteer. In other words, despite the map being open world, volunteers were aware of exactly what the boundaries were during each of the courses within this large driving space. For this experiment, we used only one of the three courses to obtain our data. In course 1 (refer to Figure 3), driver fatigue, patience and adaptability to the system were examined. The results show that some of the drivers grew tired of driving at a constant speed on the same course, so sometimes volunteers accelerated to gain speed, moved sporadically around the course, and stopped paying attention to what they were doing.
Some drivers maintained their speed, which did not require them to brake as much. Some drivers had a rough start because they had never tried VR before, but as time passed they improved their techniques and got used to the driving simulator and the course, gradually improving. The results of this study suggest that the performance of the drivers is closely associated with the individual's behavior and adaptability to the VR system. In addition, the analysis results show that an individual's behavior and characteristics, such as maintaining and controlling speed, braking when needed, remaining calm, and paying attention to the environmental changes around them while driving, can affect their performance in a simulated environment.

ADVANTAGES AND DISADVANTAGES OF DRIVING SIMULATIONS

Driving simulators have been utilized to help novice drivers, for racing training, for road safety scientific research, and as entertainment at home. Driving simulators offer various advantages compared to real vehicles, including the following [5].

Advantages of Driving Simulators

1) They are easy to control, reproduce and standardize. Environmental conditions, traffic, and the road design can be easily adjusted based on the needs of the researcher or designer. This also gives the developer more room to design different scenarios and enables the user to practice freely for any amount of time. By using simulators, drivers can experience driving under exactly the same conditions and the same scenarios decided by researchers, which is an important factor for producing reproducible research results [5].

2) Gathering data is easier and more precise. With a driving simulator, measuring performance is more accurate. On the road, by contrast, collecting corresponding, accurate data is problematic, and vehicles cannot easily be manipulated by researchers in real time. In the simulation, data can be recorded more easily and precisely, because everything about the simulation can be manipulated at any time through the VR program's code [6]. It also helps a researcher evaluate object detection and hazard awareness by using eye-tracking [7].

3) Accidents can be experienced and encountered without physical harm. It is possible to study dangerous driving situations by putting the driver into them, which is ethically challenging when using a real vehicle [7].

4) Ease of feedback and instruction. It is possible to pause, reset, or replay the same established scenario many times. Feedback and instruction can be delivered in different ways, such as speech and visual overlays used to highlight dangerous features in the environment [5].

Disadvantages of Driving Simulators

Simulators have some disadvantages and challenges, which are:

1) Limited physical, perceptual and behavioral fidelity. Low-fidelity simulators may evoke unrealistic driving behavior, and simulator fidelity is known to affect user opinion. Participants may become demotivated by a simulator that does not feel like real driving [5].

2) Shortage of research demonstrating the validity of simulation. A growing body of evidence indicates that driving-simulator measures are predictive of on-the-road driving performance [2] [8]. However, only a few studies have investigated whether skills learned in a driving simulator transfer to the road (see [9] [10] for examples).

3) Simulator sickness, especially in older people or under demanding driving conditions. Simulator sickness symptoms may undermine training effectiveness and negatively affect the usability of simulators. This is a serious concern, but fortunately useful technological and procedural guidelines are available to alleviate it [11]. Research shows that simulator sickness is less of a problem for young drivers [12], and experience shows that limiting session length (on the order of 10 minutes) also helps.

RESEARCH HYPOTHESIS

The first research hypothesis is as follows: could participants adapt to the new VR technology and feel comfortable driving in a completely virtualized 3D environment? As mentioned earlier, there are pros and cons to using this kind of environment, depending on the person and their condition, such as age, wellness, etc. The second aim of this study is to demonstrate the feasibility of using a virtual reality system to observe driver behavior. We are trying to develop a new way to gather driver-behavior data so that other researchers can apply our method to their own research. This new VR technology is low cost, can use the multiple sensors located on a VR headset, and is portable. We hope that other researchers will be able to adapt this technology to their own research and continue to expand the potential uses for this VR system.

EXPERIMENTAL SETUP

There are limitations regarding external validity in this study; however, a driving simulator study has some advantages compared to on-road studies: different, flexible driving scenarios can be designed and can be changed at any time based on the needs of the researchers and on participant status. The experiment was carried out in the Sasaki & Matsushita Laboratory using a game engine and a virtual reality environment. The simulator displays the driving scene on a monitor in the form of an open city with real-world traffic, AI, and realistic road elements. Because of the monitor display, instructors can also view what is happening within the simulation, while participants view everything through the VR headset. The simulator recorded the data at about 30 frames per second to a CSV file. Furthermore, everything from start to finish can be recorded via a video capture device. Two external speakers are positioned in front of the driver. Participants controlled the vehicle within the game using a Thrustmaster T150 steering wheel controller supplied with gas and brake pedals; gear shifting was set to automatic. The steering wheel has built-in sensors that allow Unity to translate a driver's steering wheel rotation into raw data.
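A log recorded at roughly 30 Hz lends itself to simple offline postprocessing. The study used Matlab R2016b for this; the Python sketch below assumes a hypothetical column layout (t, x, z, steer_deg, throttle, brake) purely for illustration, not the study's actual schema:

```python
import csv
import io
import math

def load_log(csv_text):
    """Parse a simulator log; assumed columns: t, x, z, steer_deg, throttle, brake."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return [{k: float(v) for k, v in r.items()} for r in rows]

def speeds(log):
    """Finite-difference ground speed [m/s] from successive x/z positions."""
    out = []
    for a, b in zip(log, log[1:]):
        dt = b["t"] - a["t"]
        dist = math.hypot(b["x"] - a["x"], b["z"] - a["z"])
        out.append(dist / dt)
    return out
```

The same pattern extends to steering-rate, braking-event, or head-movement summaries, one derived series per logged channel.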

Experimental Design
Figure 1 shows the sensors, the hardware used in the experiment (displays, PC), and the game engine/programming language. Figure 2 illustrates the difference between previous experimental setups and the setup we used in our experiment.


Equipment
Conventional research has almost always relied on virtual reality programs that use flat displays or project the visual images on monitors to present the view to the user. In this study, we have the advantage of using recent HMD technologies to design and develop an immersive environment for users.

Figure 1: Layout of the proposed system.

Figure 2: Conceptual diagram of the system.


Figure 3: Scenario and Roadway Design.

In this experiment, drivers drove around a circular course 10 times with all the sensors attached to our system, which allowed us to evaluate the behavior and characteristics of a driver over a long, repeated course. Factors such as performance and learning capability, concentration, and fatigue affect driver behavior. These factors have been analyzed and studied in this research.

Scenario and Roadway Design
As Figure 3 describes, the driver starts the course from the parking lot, represented by the purple bar. The blue line illustrates the driving path for the course, which drivers need to navigate fully 10 times. The bridge acts as an obstacle because it forces drivers to be more considerate of their actions while driving. Drivers are not allowed to leave the designated course boundaries marked by the blue line, which means that they cannot go off road. Also, they are not allowed to make a full stop. If the driver exhibits any of these actions during any of the 10 laps of the simulation, the entire experiment must be restarted from lap 1.
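The course rules above (no off-road driving, no full stop, restart from lap 1 on any violation) can be expressed as a small validity check; the sample format and the stop-speed threshold below are hypothetical sketches, not the simulator's actual data model:

```python
def experiment_valid(samples, stop_speed_kmh=0.5):
    """Return False if any frame shows the car off the course or fully stopped.

    Each sample is a (lap, off_road, speed_kmh) tuple; per the protocol,
    a single violation in any of the 10 laps invalidates the whole run.
    The 0.5 km/h "full stop" threshold is an assumption for illustration.
    """
    for lap, off_road, speed_kmh in samples:
        if off_road or speed_kmh < stop_speed_kmh:
            return False  # the entire experiment restarts from lap 1
    return True

# A run with a full stop in lap 7 is invalid:
print(experiment_valid([(1, False, 40.0), (7, False, 0.0)]))  # False
```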

Participants
A total of 10 participants (9 males, 1 female) with an average age of 28 participated in the experiment.


EXPERIMENTAL RESULTS AND DISCUSSION
By implementing an integrated driving simulator with a VR-HMD and a multi-modal sensor attachment, we were able to gather data for multi-sensing:

- Steering wheel and handling;
- Accelerator and brake;
- Vision image;
- Head movement;
- Eye movement (this data has been captured and recorded and is to be analyzed in future work).

Time-Distance Comparison
This comparison involves the top 5 drivers who showed the most signs of improvement and adjustment to the simulation. Figure 4 illustrates the time-distance comparison of individual performance based on the group data. In this graph, the distance and time of the 5 drivers are presented; notably, the 5th driver spent more time and drove more slowly than the other drivers, while the 1st driver performed faster and shows different data in each lap.

Accelerator & Brake
As Figures 5(a)-(c) show, drivers slowly and smoothly increased their speed and applied pressure on the acceleration pedal. The 1st and 3rd drivers have braking data in their results, which shows how they controlled their speed at curves and in other situations that required them to slow down; they controlled the vehicle and its speed by pushing the brake pedal smoothly, as Figure 5(a) and Figure 5(c) illustrate. The 2nd driver, however, shows a different result: he kept his speed controlled without braking, smoothly using only the accelerator to maintain his speed. Figure 5(e) presents the 5th driver's performance. This individual took a long time to finish the 10 laps around the course, which means his speed was slower overall than the other drivers' throughout the whole experiment. The 5th driver continued to control his speed by alternating between the accelerator and the brake pedal at a steady, slower rate compared to the other drivers. Because he was driving more slowly, he did not need to brake as hard as the other drivers, and as a result he has less brake-pressure data in his results.


Figure 4: Time-Car distance traveled.

Figure 5: Acceleration and braking data. (a) 1st Driver; (b) 2nd Driver; (c) 3rd Driver; (d) 4th Driver; (e) 5th Driver.


Figure 5(d) shows the 4th driver's performance results. This driver suddenly increased their speed as well as suddenly braking hard. This result suggests that participants who drive at lower speeds have an easier time maintaining a more consistent relationship between speed and braking pressure; in this case, consistent can mean staying near 0.5 on any of the graphs.

Time Lap
Figures 6(a)-(e) show the time each driver spent to finish each lap. Laps 2 through 9 were selected to make the results more precise and to avoid unnecessary braking and accelerating data from the first and last laps. As Figures 6(a)-(c) show, these drivers improved their skills, adapted to the system, and got better at driving. Figure 6(e) shows that the 5th driver drove consistently slowly and did not improve his driving style and skill. On the other hand, Figure 6(d) shows that this driver sometimes increased and decreased his speed, driving slowly in a few laps and faster in others. This result may indicate that this driver was bored of driving and did not improve his performance, but it does not necessarily mean this driver is not a good driver, according to data from the other figures. The results of these graphs and the time records indicate drivers' proficiency and motivation in driving.
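Per-lap times such as those in Figure 6 can be derived from timestamps logged at each start/finish-line crossing, trimming the first and last laps as described above; the crossing-timestamp input format is our assumption:

```python
def lap_times(crossing_times_s):
    """Differences between consecutive start/finish-line crossing timestamps."""
    return [t1 - t0 for t0, t1 in zip(crossing_times_s, crossing_times_s[1:])]

def trimmed(laps, drop_first=1, drop_last=1):
    """Keep the middle laps (e.g. 2 to 9 of 10) to avoid start/stop artifacts."""
    return laps[drop_first:len(laps) - drop_last]

# 5 crossings (cumulative seconds) yield 4 lap times:
times = lap_times([0.0, 62.0, 122.5, 181.5, 239.5])
print(trimmed(times))  # -> [60.5, 59.0]
```

A decreasing trimmed sequence would be one simple indicator of the lap-over-lap improvement discussed above.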


Figure 6: Time record at each Lap (2nd to 9th). (a) 1st Driver; (b) 2nd Driver; (c) 3rd Driver; (d) 4th Driver; (e) 5th Driver.

Steering Angle/Car Angle (Yaw)
The following graphs show the steering wheel angle and the car angle (yaw). The results indicate adaptability to the course and individual recognition ability, and they also show how stably the drivers drove on the course. Figures 7(a)-(c) show that these drivers had a stable driving performance, memorized the course, and got better each lap. Figure 7(d) shows that this driver had an unstable driving performance that was not intuitive. As we discussed about the 5th driver in other sections, although he drove slowly, in terms of controlling the car he showed a good result, as he was able to control the vehicle easily due to his low speed.


Figure 7: Steering angle/car angle (Yaw). (a) 1st Driver; (b) 2nd Driver; (c) 3rd Driver; (d) 4th Driver; (e) 5th Driver.

These graphs also indicate whether drivers drove the same way in each lap, and they show the correlation between speed and controlling a vehicle.

Driver's Head Position (Yaw)
The following graphs show that the faster a person drives, the wider the variation of the driver's head movements becomes. Figure 8(a) illustrates the head movements of the first driver, which, at approximately ±30 degrees, is the highest.


Figure 8: Driver’s head angle (Yaw). (a) 1st Driver; (b) 2nd Driver; (c) 3rd Driver; (d) 4th Driver; (e) 5th Driver.

Both the 4th and 5th drivers have a low rotation rate in their performance, approximately ±15 degrees. As discussed in the previous sections, speed, vehicle control, and head-movement data are connected. Figure 8(d) and Figure 8(e) show the lower head-movement data, which indicates that a driver's head-movement data reflects individual recognition ability.


The head-movement data represents individual recognition ability. It shows how related speed and head movements are in terms of controlling the car at corners and curves. Looking back at the data, it is clear that a driver with better performance has completely different head-movement data. As [13] also reports, elderly drivers have less tendency to look around while they are overtaking other cars on the road. This shows how closely visual abilities for making quick decisions and head movements are related.
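One way to quantify the "wider variation" of head movement discussed above is the spread (standard deviation) of the recorded yaw samples; this metric is our choice for illustration, since the paper reports only approximate ±degree ranges:

```python
import statistics

def yaw_spread_deg(yaw_samples_deg):
    """Population standard deviation of head-yaw samples, in degrees."""
    return statistics.pstdev(yaw_samples_deg)

# A fast driver scanning corners (about ±30 deg) vs. a slow driver (about ±15 deg);
# the sample values are fabricated for illustration:
fast = [-30, -10, 0, 10, 30]
slow = [-15, -5, 0, 5, 15]
print(yaw_spread_deg(fast) > yaw_spread_deg(slow))  # True
```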

Discussion
The results of the experiment show the relationship between speed, individual driving ability, performance, and individual recognition. The results indicate that learning the course and controlling the speed help a driver control the vehicle more easily. This study shows that head movements and decision making differ between individuals, and it shows how that affects driving performance. In this study, we used a multi-modal sensor head-mounted display in an immersive environment for the driver. Except for the head-mounted display, there were no other sensors attached to the driver.

Table 1 shows the summary of the results of the experiment, which is divided into two parts: the analysis of the data and the results of the analysis. Table 1 lists the relationship between the analyses and a driver's characteristics: (1) the driver's eye and head movement information can be extracted with the VR-HMD vision image; (2) individual performance on driving adaptability can be evaluated through acceleration and braking; (3) the graph of time/car distance traveled (Figure 4) demonstrates individual performance across all the drivers; (4) the time record transition indicates individual performance on driving adaptability; (5) individual performance on turning can be evaluated with the combination of steering, car speed, and the driver's horizontal head angle; (6) the individual performance of each driver can be evaluated with the driver's vertical head angle. Overall, the sensors in the multi-modal system contribute to the evaluation of each individual driver's performance and recognition.

As for the gaze analysis, different driving scenarios were designed: 2. The driver must tailgate a car which unexpectedly stops, challenging drivers to adjust their speed and keep their distance while controlling the brake and accelerator. In addition, drivers needed to be careful by continuing to monitor the car they were tailgating as well as the surrounding area. 3. The driver had to follow a car that would crash into another car, so the driver's behavior, reaction, and gaze data would help us to analyze the behavior of each driver when a sudden event occurred.

Table 1: Summary of the results of the experiment

In future work, we are going to analyze and study driver behavior and errors, as well as the gaze data of the driver. In order to do that, we are going to use two different eye-tracking technologies, including the FOVE head-mounted display. We will gather the driver's data, analyze and compare the results of the two eye-tracking technologies, and compare and analyze the driver behavior. There were some signs of dizziness and nausea in female drivers who could not complete their tasks. By gathering electroencephalogram (EEG) signals we can measure muscular and nerve responses for each participant.

CONCLUSIONS
Driving a vehicle is one of the most common daily activities, and it requires stable driving, concentration, and recognition of one's surroundings. As Japan is an aging country, driving violations caused by elderly drivers have recently been increasing [14]. It has become an important social issue to evaluate elderly drivers' ability and their environmental awareness while driving. The major, general evaluation method has been based on a driving simulator with 3 monitor displays (i.e., frontal view, left side window view with the left side-view mirror included, and right side window view with the right side-view mirror included). However, this is not a true 360-degree view, which can be provided with a VR headset. Driving simulators provide various driving scenes in a safe, stationary environment such as an office
room. They enable researchers to measure and analyze how a driver uses the accelerator, the brake, and the steering wheel. Therefore, the research goal of this paper was to develop a driving simulator based on a virtual reality (VR) head-mounted display (HMD), and to demonstrate the driver's performance and behavior analysis. The results of this study show a driver's behavior and the relationship between human factors (recognition, visual inspection, decision making, and driving ability) and each individual performance (no two performances are the same). The results also indicate that drivers with better performance tend to look around more, be more attentive, and control their speed constantly. On the other hand, based on the relationship between the human factors mentioned in this study, drivers who were inattentive while driving show less head movement and less consistent speed maintenance.

[14] ITARDA: Institute for Traffic Accident Research and Data Analysis. http://www.itarda.or.jp

SECTION 2: GAMES HARDWARE

CHAPTER 5

Fast and Reliable Mouse Picking Using Graphics Hardware

Hanli Zhao 1, Xiaogang Jin 1, Jianbing Shen 2, and Shufang Lu 1

1 State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310027, China
2 School of Computer Science & Technology, Beijing Institute of Technology, Beijing 10008, China

ABSTRACT
Mouse picking is the most commonly used intuitive operation for interacting with 3D scenes in a variety of 3D graphics applications. High performance for such an operation is necessary in order to provide users with fast responses. This paper proposes a fast and reliable mouse picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space-based ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying the hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which improves the picking efficiency. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.

Citation: Hanli Zhao, Xiaogang Jin, Jianbing Shen, Shufang Lu, "Fast and Reliable Mouse Picking Using Graphics Hardware", International Journal of Computer Games Technology, vol. 2009, Article ID 730894, 7 pages, 2009. https://doi.org/10.1155/2009/730894.

Copyright: © 2009 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

INTRODUCTION
Mouse picking, as the most intuitive way to interact with 3D scenes, is ubiquitous in many interactive 3D graphics applications, such as mesh editing, geometry painting, and 3D games. In many Massive Multi-player Role Playing Games (MMRPGs), for instance, thousands of players compete against each other, and the picking operation is frequently applied. Such applications require picking to be performed as fast as possible in order to respond to players with a minimum time delay. In recent years, programmable graphics hardware has become more and more powerful, so how to make full use of these co-processors in the picking operation becomes important.

The WYSIWYG method, which takes advantage of graphics hardware to identify the picked primitive, was first proposed by Robin Forrest in the mid-1980s and used in 3D painting by Hanrahan and Haeberli [1]. In their method, each polygon is assigned a unique color value as its identifier. After rendering the scene into the id buffer, the picked position on the surface can be found by retrieving data from the frame buffer. However, this approach has weaknesses for complex scenes in that all objects in the view frustum must be re-rendered. This may take a long time for complex scenes and therefore lower the picking performance. By integrating the WYSIWYG method and hardware bilinear interpolation [2], Lander presented a method to calculate the exact intersection information, that is, the barycentric coordinate in the intersected triangle. By setting additional color values of (1, 0, 0), (0, 1, 0), and (0, 0, 1) for the three vertices of each triangle, respectively, he calculated the barycentric coordinate by interpolation after the rasterization stage. However, the computed barycentric coordinate is in the projected screen-space but not in the object-space, which may restrict its application.
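The id-buffer idea behind the WYSIWYG method reduces to an invertible mapping between primitive ids and colors; a minimal sketch of such an encoding (the 8-bit-per-channel packing is our assumption):

```python
def id_to_rgb(poly_id):
    """Pack a polygon id into an 8-bit-per-channel RGB color for the id buffer."""
    return ((poly_id >> 16) & 0xFF, (poly_id >> 8) & 0xFF, poly_id & 0xFF)

def rgb_to_id(rgb):
    """Recover the polygon id from the color read back under the cursor."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

# Round trip: the color read back at the cursor identifies the polygon.
print(rgb_to_id(id_to_rgb(123456)))  # 123456
```

With 24 bits this distinguishes about 16.7 million primitives per frame, which is why the method is attractive, but the whole scene must still be rendered into the id buffer, the weakness noted above.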


In this paper, we propose a simple, fast and reliable mouse picking algorithm (FRMP) using graphics hardware for 3D triangular scenes. By combining the multi-layer culling approach of Govindaraju et al. [3] with a GPU-based implementation of Möller and Trumbore's ray-triangle intersection test [4], the picking can be performed in linear time complexity. Our approach has the following features.

(1) It is fast: our approach is 2 to 14 times as fast as the traditional GPU-based picking approach.
(2) It is reliable: our approach performs the operation in object-space, and the exact intersection information can be computed.
(3) It is parallel: the ray-triangle intersection detection is implemented as a geometry shader.
(4) It is simple: our novel approach operates directly on triangular meshes and can be easily integrated into existing real-time rendering systems.

The rest of the paper is organized as follows. Section 2 reviews some related work. Section 3 describes our new algorithm, whereas experimental results and discussions are presented in Section 4. We conclude the paper and suggest future work in Section 5.

RELATED WORK
Intersection detection is widely used in computer graphics. The mouse picking operation can be performed by an ordinary ray-object intersection test and accelerated by many schemes for high efficiency. The methods for interference detection are typically based on bounding volume data structures and hierarchical spatial decomposition techniques, such as K-d trees [5], sphere trees [6, 7], AABB trees [8, 9], K-DOP trees [10], and OBB trees [11]. The objects (triangles) are organized in clusters, promoting faster intersection detection. The spatial hierarchies are often built in a preprocessing stage and must be updated from frame to frame when the scene changes, which is not appropriate in most cases for mouse picking.

Hardware occlusion queries are also used in collision detection for large environments, as demonstrated by Govindaraju et al. [3, 12, 13]. These GPU-based algorithms use a linear time multi-pass rendering algorithm to compute the potentially colliding set. They even achieve interactive frame rates for deformable models and breaking objects. In their method, the objects (triangles) list can be traversed from the beginning up to the end and thus no spatial organization (KD
and other trees) are required. The WYSIWYG method for mouse picking, first proposed by Robin Forrest in the mid-1980s, was used in 3D paint by Hanrahan and Haeberli [1] and further studied by Lander [2], but its cost increases as the scene complexity increases. Motivated by the multi-layer culling approach of Govindaraju et al., we do not construct a time-consuming hierarchy. Instead, we use a multi-layer rendering algorithm to perform a linear time picking operation. In this paper, we perform the exact object-space-based ray-triangle intersection test [4] in a geometry shader by taking advantage of its geometric processing capability. The overall approach makes no assumptions about the object's motion and can be directly applied to all triangulated models. Some acceleration techniques for real-time rendering are also applied in our method. Triangle strips and view frustum culling were introduced by [17, 18], respectively. It is possible to triangulate the bounding boxes of objects as strips and to cull away objects that are positioned outside the view frustum. Hardware occlusion queries for visibility culling were studied by [19-21]. GPU-based visibility culling is also important in our algorithm.

HARDWARE ACCELERATED PICKING
Our mouse picking operation takes the screen coordinate of the cursor and the scene to be rendered as input, and outputs the intersection information, such as the object id, triangle id, and even the barycentric coordinate of the intersection point. In this section, we first present an overview of our algorithm and then discuss it in detail.

Algorithm Overview
Our FRMP method exploits the new features of the 4th generation of PC-class programmable graphics processing units [22]. Figure 1 illustrates the algorithm workflow. The overall algorithm is outlined as follows.


Figure 1: The workflow of our algorithm.

(1) Once the user clicks on the screen, compute the picking ray origin and direction in the view coordinate system.
(2) Set the render target with one-pixel size.
(3) Set the render states DepthClipEnable and DepthEnable to FALSE.
(4) After the view frustum culling, render the bounding boxes of the visible objects. We issue a boolean occlusion query for each object during this rendering pass.
(5) Render the bounding boxes of all sub-objects whose corresponding occlusion query returns TRUE. Again we issue a boolean occlusion query for each sub-object during this rendering pass.
(6) Reset the states DepthClipEnable and DepthEnable to TRUE.
(7) Render the actual triangles whose corresponding occlusion query returns TRUE. Now we only issue one occlusion query for all triangles.
(8) If the occlusion query returns TRUE, trivially read back the picking information from the one-pixel-sized render target data; otherwise, no object is picked.

The novel multi-layer rendering pass on programmable graphics shaders is outlined below:

(1) Transform the per-vertex position to the view coordinate system in the vertex shader.
(2) Perform the object-space-based ray-triangle intersection test in the geometry shader, and output a point with picking information if the triangle is intersected. The x- and y-components of the intersection point are set to 0, and the z-component is assigned as the depth value of the point. Then the point is passed to the rasterization stage.
(3) Output the picking information directly in the pixel shader.
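To illustrate the layered pruning in steps (4)-(8), here is a CPU-side sketch in which a ray/bounding-box test stands in for each boolean occlusion query; all names and the scene format are ours, and the real method issues these queries on the GPU:

```python
def ray_hits_aabb(o, d, lo, hi, eps=1e-9):
    """Slab test: does the ray o + t*d (t >= 0) hit the box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        if abs(d[i]) < eps:                  # ray parallel to this slab
            if o[i] < lo[i] or o[i] > hi[i]:
                return False
        else:
            t1 = (lo[i] - o[i]) / d[i]
            t2 = (hi[i] - o[i]) / d[i]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def pick_candidates(objects, o, d):
    """Layered pruning, mirroring steps (4)-(5): a failed 'query' prunes a
    whole subtree, so most geometry is never tested against triangles."""
    survivors = []
    for obj in objects:                      # layer 1: object bounding boxes
        if not ray_hits_aabb(o, d, *obj["aabb"]):
            continue
        for sub in obj["subs"]:              # layer 2: sub-object boxes
            if ray_hits_aabb(o, d, *sub["aabb"]):
                survivors.append(sub)
    return survivors                         # layer 3 would test triangles

scene = [
    {"aabb": ((0, 0, 5), (2, 2, 7)),
     "subs": [{"aabb": ((0, 0, 5), (1, 1, 6))},
              {"aabb": ((1, 1, 6), (2, 2, 7))}]},
    {"aabb": ((10, 10, 5), (12, 12, 7)), "subs": []},  # pruned at layer 1
]
print(len(pick_candidates(scene, (0.5, 0.5, 0.0), (0.0, 0.0, 1.0))))  # 1
```

Each layer touches every surviving node once, which is where the linear time complexity claimed above comes from.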

New Features in the Shader Model 4.0 Pipeline
The Shader Model 4.0 fully supports the 32-bit floating-point data format, which meets the precision requirement for general-purpose GPU computing (GPGPU). The occlusion query can return the number of pixels that pass the z-testing, or just a boolean value indicating whether or not any pixel passes the z-testing. In our case, we only need the boolean result indicating whether any object is rendered or none is rendered. The geometry shader, a new programmable stage in the Shader Model 4.0 pipeline, takes the vertices of a single primitive (point, line segment, or triangle) as input and generates the vertices of zero or more primitives. The output primitive type is declared in the geometry shader program. We use a triangle as the input primitive, as the ray-triangle intersection detection needs to be implemented here, and we get a point as output. If the intersection test passes, a point primitive with intersection information is returned; if the test fails, no point is output.

Intersection Test in the Geometry Shader
In this section, we present the ray-triangle intersection test introduced by Möller and Trumbore [4]. We implement the algorithm in a geometry shader by taking advantage of its geometric processing capability. A ray, r(t), is defined by an origin point, o, and a normalized direction vector, d. Its mathematical formula is shown in (1):

r(t) = o + t·d. (1)

Here the scalar, t, is a variable that is used to generate different points on the ray, where t-values greater than zero are said to lie in front of the ray origin and so are a part of the ray, and negative t-values lie behind it. Also, since the ray direction is normalized, a t-value generates a point on the ray that is t distance units away from the ray origin. When the user clicks the mouse, the screen coordinates of the cursor are transformed through the projection matrix into a view-space ray that goes from the eye-point through the point clicked on the screen and into the screen. A point, t(u, v), on a triangle with vertices v0, v1, and v2 is given by the explicit formula (2):

t(u, v) = (1 - u - v)·v0 + u·v1 + v·v2, (2)

where (u, v) are the barycentric coordinates, which must fulfill u ≥ 0, v ≥ 0, and u + v ≤ 1.


A ray, r(t), intersecting a triangle, t(u, v), satisfies the equation r(t) = t(u, v), which yields:

o + t·d = (1 - u - v)·v0 + u·v1 + v·v2. (3)

An illustration of a ray and the barycentric coordinates for a triangle is shown in Figure 2. Denoting e1 = v1 - v0, e2 = v2 - v0, and s = o - v0, the solution to (3) can be easily obtained by using Cramer's rule [23]:

[t, u, v]^T = (1 / ((d × e2) · e1)) · [(s × e1) · e2, (d × e2) · s, (s × e1) · d]^T. (4)

As a result, the intersection information is obtained by solving (4). As this process is independent for each triangle, we can parallelize it on graphics hardware. This equation is implemented with optimizations, since the determinant of a matrix is an intrinsic function in the High Level Shading Language (HLSL). The intersection test is conducted in the view space and, if it passes, we output a point primitive. The x- and y-components of its position coordinate are 0 because the render target used in our algorithm is only one pixel in size. The z-component is the depth value, which is obtained by transforming the distance value into the projection space. In addition, the barycentric coordinate value (u, v) and the object id are also packed into the picking information. The pseudo-code in the geometry shader is presented in Algorithm 1.
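Equation (4) translates almost line for line into code. Below is a CPU-side sketch of the same Möller-Trumbore test that the geometry shader performs (this is an illustrative Python version, not the paper's HLSL):

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(o, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: returns (t, u, v) or None, following equation (4)."""
    e1, e2, s = sub(v1, v0), sub(v2, v0), sub(o, v0)
    p = cross(d, e2)
    det = dot(p, e1)             # (d x e2) . e1
    if abs(det) < eps:
        return None              # ray parallel to the triangle plane
    q = cross(s, e1)
    t = dot(q, e2) / det         # (s x e1) . e2 / det
    u = dot(p, s) / det          # (d x e2) . s  / det
    v = dot(q, d) / det          # (s x e1) . d  / det
    if t < 0 or u < 0 or v < 0 or u + v > 1:
        return None              # behind the origin or outside the triangle
    return (t, u, v)

# Ray along +z through the interior of a unit right triangle in the z=0 plane:
hit = ray_triangle((0.2, 0.2, -1.0), (0.0, 0.0, 1.0),
                   (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(hit)  # (1.0, 0.2, 0.2)
```

The barycentric pair (u, v) returned here is exactly the picking information the shader writes into the one-pixel render target.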

Figure 2: (Left) a simple ray and its parameters. (Right) barycentric coordinate for a triangle, along with some example point values.


Multi-Layer Visibility Queries
We use a multi-layer rendering algorithm to perform linear time intersection tests, taking advantage of the 4th generation of PC-class programmable graphics processing units. The overall approach makes no assumption about the object's motion and is directly applicable to all triangulated models.

First of all, we set a 1 × 1 sized texture as a render target after the view frustum culling. Instead of rendering the actual triangles, we then render the bounding boxes of the visible objects. We issue a boolean occlusion query for each object during this rendering pass. As we know, the render state DepthClipEnable controls whether to clip primitives whose depth values are not in the range of [0, 1]; the render state DepthEnable determines whether to perform the depth testing. After the view frustum culling, some objects intersect the near-plane or the far-plane of the view frustum, so the depth values of some vertices may not be in the range of [0, 1]. In order to collect all the possibly intersected objects for the next layer, we set DepthClipEnable and DepthEnable to FALSE. If an occlusion query passes, the corresponding object may intersect the picking ray and thus its actual triangles will be rendered; otherwise, it is pruned. Since a large number of objects are not intersected during this step, we can greatly reduce the rendering time compared with the WYSIWYG method, which requires us to render all the objects.

Second, we render the bounding boxes of all sub-objects whose corresponding occlusion query returns TRUE. Again we issue a boolean occlusion query for each sub-object during this rendering pass. Since some objects contain a large number of triangles, which would consume considerable GPU memory, we group adjacent local triangles to form a sub-object and prune the potential regions considerably, as suggested in [3]. Next, the actual triangles of the unpruned sub-objects are rendered.
We only issue one occlusion query for all the triangles during this step. We would like to get the exact intersection result after this step. Triangles outside the view frustum are discarded, and only the closest triangle is needed. Thus the render states DepthClipEnable and DepthEnable are reset to TRUE. Lastly, if the occlusion query passes, the triangle with the minimal distance from the eye-point is picked and its intersection information can be retrieved from the 1×1 sized render target texture. This causes an additional delay while reading back data from the graphics memory to the system memory. In the WYSIWYG method, we need to lock the window-sized texture to get the picking information but this is slow when the window size
is large. Actually our novel algorithm only needs to store the information in the smallest sized texture. If the occlusion query fails, we need not read the data from the render target because we know that nothing has been picked. In the WYSIWYG method, however, one cannot know if anything has been picked until one reads the corresponding data from the texture.

EXPERIMENTAL RESULTS AND DISCUSSION
Our algorithm takes the screen coordinates of the cursor and the scene to be rendered as input, and outputs intersection information, such as the object id, triangle id, and even the barycentric coordinate of the intersection point. Our algorithm can currently be used on platforms which support the Direct3D 10 API. We have incorporated our FRMP method into a Direct3D 10-based scene graph library and tested it on four scenes in order to evaluate its efficiency for different scene types. All tests were conducted on a PC with a 1.83 GHz Intel Core 2 Duo 6320 CPU, 2 GB main memory, an NVIDIA GeForce 8800 GTS GPU with 320 MB graphics memory, and a Windows Vista 64-bit operating system.

The Test Scenes
The four test scenes comprise an arrangement of a toy elk model (3290 polygons), a Venus model (43 357 polygons), 2000 randomly rotated teapots (12.64 M polygons), and 10 000 randomly rotated tori (8 M polygons), all rendered at a resolution of 1024 × 768 pixels. The test scenes are depicted in Figure 3.

Figure 3: The four test scenes: the toy elk (upper left), Venus (upper right), the teapots (lower left) and the tori (lower right). Note that the picked objects are shown in wireframe and the picked triangles are shown in black, whereas other objects are shaded normally.


The toy elk scene only has 3290 triangles, while the Venus scene consists of a large number of triangles. Both are simple cases for the picking operation, as only one object is used and it is not occlusion culled. These two scenes are therefore suited to testing the picking performance for a single object; such cases may occur in mesh editing or geometry painting applications. The teapots scene with 12.64 M triangles and the tori scene with 8 M triangles are complex cases and are designed to rotate randomly from frame to frame. They can offer good occlusions, as most of their objects are occluded in most instances.

Comparison of the Results
For each test scene, we report the processing times of our fast and reliable mouse picking (FRMP) algorithm in comparison to a CPU implementation of our algorithm and to the traditional GPU method (WYSIWYG) (see Figure 4). Note that in our tests we have picked an object; had we not done so, our algorithm would have performed even better in comparison, because when no bounding box intersects the picking ray, our approach does not render the actual triangles and returns FALSE directly.

Figure 4: Processing time comparisons for the toy elk (upper left), the Venus (upper right), the teapots (lower left) and the tori (lower right). Note that the lower two scenes use a logarithmic scale to capture the high variations in processing times. Note also that if no object is picked, the processing times of our method would be even faster, because we picked one object on purpose to perform these tests.

As we can see from the scene statistics shown in Table 1, our method produces a speedup of more than two over the traditional WYSIWYG method. In the toy elk scene, our method was 2.469 milliseconds faster than the CPU method on average, while the WYSIWYG method was 3.554 milliseconds slower than the CPU method. That is because the whole window-sized texture data needs to be read back to the main memory to check the intersection, even for small models. In the Venus scene, as the triangle count increases, our method and the WYSIWYG method produce speedups of 22.831 and 3.549, respectively. Even in the teapot scene and in the torus scene, our method maintained a good speedup over the WYSIWYG method. If a very large model cannot be loaded into the video memory in its entirety, our GPU-based algorithm can be slower than the CPU-based approach; fortunately, such occurrences are rare in many real-time applications.

Table 1: Statistics for the four test scenes. The processing times are in milliseconds.

Model name   Triangles per model   Model number   Method     Longest time   Shortest time   Average time   Speedup
Toy elk      3290                  1              CPU        3.064          2.852           2.910          1.000
                                                  WYSIWYG    6.993          6.390           6.464          0.450
                                                  FRMP       0.891          0.314           0.441          6.599
Venus        43 357                1              CPU        42.497         37.824          38.859         1.000
                                                  WYSIWYG    11.974         10.010          10.948         3.549
                                                  FRMP       2.249          1.521           1.702          22.831
Teapot       6320                  2000           CPU        4500.598       4320.855        4387.013       1.000
                                                  WYSIWYG    165.392        160.398         163.254        26.872
                                                  FRMP       83.334         78.293          80.959         54.188
Torus        800                   10 000         CPU        1720.887       1696.976        1706.358       1.000
                                                  WYSIWYG    47.918         38.016          42.411         40.234
                                                  FRMP       19.636         14.120          17.651         96.672

CONCLUSIONS AND FUTURE WORK

We have presented a novel algorithm for intersection tests between a picking ray and multiple objects in an arbitrarily complex 3D environment using new features of graphics hardware. The algorithm is fast, reliable, parallelizable, and simple. It is applicable to all triangulated models, makes no assumptions about the input primitives, and computes the exact intersection information in object space. Furthermore, our FRMP picking operation achieves high efficiency compared with traditional methods. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering applications, and our FRMP picking approach is of relevance to interactive graphics applications.

The presented approach still leaves some room for improvement and for extensions. For instance, alternative acceleration techniques for real-time rendering may be applied to our FRMP method. Moreover, additional hardware features will become useful as graphics hardware progresses. In the future, we would like to extend and apply our technique to the generic collision detection field.

Algorithm 1: Object-based intersection test.
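As an illustration of the exact object-space intersection test mentioned above, the following sketch implements a ray-triangle test in the spirit of the Möller-Trumbore method cited as reference [4]. This is an illustrative reconstruction, not the chapter's Algorithm 1; all function names are ours.

```python
# Ray-triangle intersection in the spirit of Moller-Trumbore: returns the
# exact hit parameters (t, u, v) in object space, or None on a miss. This
# is the kind of per-triangle information the picking operation reports.

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv                # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv               # ray parameter of the hit point
    return (t, u, v) if t > eps else None
```

Given the picked triangle, (u, v) locates the hit inside it and t gives its depth along the ray, which is the object-space result the text refers to.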


ACKNOWLEDGMENTS

The authors would like to thank the Cybergames '08 conference and special issue reviewers for their dedicated help in improving the paper. Many thanks also to Xiaoyan Luo, Charlie C. L. Wang, and Feifei Wei for their help and their valuable advice. The models used for the test in our paper can be downloaded from http://shapes.aim-at-shape.net/. This work was supported by the National Natural Science Foundation of China (Grant nos. 60533080 and 60833007) and the Key Technology R&D Program (Grant no. 2007BAH11B03).


REFERENCES

1. P. Hanrahan and P. Haeberli, “Direct WYSIWYG painting and texturing on 3D shapes,” ACM SIGGRAPH Computer Graphics, vol. 24, no. 4, pp. 215–223, 1990.
2. J. Lander, “Haunted trees for halloween,” Game Developer Magazine, vol. 7, no. 11, pp. 17–21, 2000.
3. N. K. Govindaraju, S. Redon, M. C. Lin, and D. Manocha, “CULLIDE: interactive collision detection between complex models in large environments using graphics hardware,” in Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware (HWWS ’03), pp. 25–32, Eurographics Association, San Diego, Calif, USA, July 2003.
4. T. Möller and B. Trumbore, “Fast, minimum storage ray-triangle intersection,” in Proceedings of the 32nd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’05), Los Angeles, Calif, USA, July-August 2005.
5. F. P. Preparata and M. I. Shamos, Computational Geometry: An Introduction, Springer, New York, NY, USA, 1985.
6. I. J. Palmer and R. L. Grimsdale, “Collision detection for animation using sphere-trees,” Computer Graphics Forum, vol. 14, no. 2, pp. 105–116, 1995.
7. P. M. Hubbard, “Approximating polyhedra with spheres for time-critical collision detection,” ACM Transactions on Graphics, vol. 15, no. 3, pp. 179–210, 1996.
8. G. van den Bergen, “Efficient collision detection of complex deformable models using AABB trees,” Journal of Graphics Tools, vol. 2, no. 4, pp. 1–13, 1997.
9. T. Larsson and T. Akenine-Möller, “Collision detection for continuously deforming bodies,” in Proceedings of the Annual Conference of the European Association for Computer Graphics (EUROGRAPHICS ’01), pp. 325–333, Manchester, UK, September 2001.
10. J. T. Klosowski, M. Held, J. S. B. Mitchell, H. Sowizral, and K. Zikan, “Efficient collision detection using bounding volume hierarchies of k-DOPs,” IEEE Transactions on Visualization and Computer Graphics, vol. 4, no. 1, pp. 21–36, 1998.
11. S. Gottschalk, M. C. Lin, and D. Manocha, “OBBTree: a hierarchical structure for rapid interference detection,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’96), pp. 171–180, ACM, New Orleans, La, USA, August 1996.
12. N. K. Govindaraju, M. C. Lin, and D. Manocha, “Quick-CULLIDE: fast inter- and intra-object collision culling using graphics hardware,” in Proceedings of IEEE Virtual Reality Conference (VR ’05), pp. 59–66, IEEE Computer Society, Bonn, Germany, March 2005.
13. N. K. Govindaraju, M. C. Lin, and D. Manocha, “Fast and reliable collision culling using graphics hardware,” in Proceedings of the 11th ACM Symposium on Virtual Reality Software and Technology (VRST ’04), pp. 2–9, ACM, Hong Kong, November 2004.
14. T. Akenine-Möller and E. Haines, Real-Time Rendering, AK Peters, Natick, Mass, USA, 2nd edition, 2002.
15. Microsoft Corporation, DirectX Software Development Kit, Microsoft Corporation, Redmond, Wash, USA, 2007.
16. D. Shreiner, Ed., OpenGL® 1.4 Reference Manual, Addison Wesley Longman, Redwood City, Calif, USA, 4th edition, 2004.
17. F. Evans, S. Skiena, and A. Varshney, “Optimizing triangle strips for fast rendering,” in Proceedings of the 7th IEEE Visualization Conference, pp. 319–326, IEEE Computer Society Press, San Francisco, Calif, USA, October-November 1996.
18. U. Assarsson and T. Möller, “Optimized view frustum culling algorithms for bounding boxes,” Journal of Graphics Tools, vol. 5, no. 1, pp. 9–22, 2000.
19. H. Zhao, X. Jin, and J. Shen, “Simple and fast terrain rendering using graphics hardware,” in Advances in Artificial Reality and Tele-Existence, vol. 4282 of Lecture Notes in Computer Science, pp. 715–723, Springer, Berlin, Germany, 2006.
20. J. Bittner and V. Havran, “Exploiting temporal and spatial coherence in hierarchical visibility algorithms,” in Proceedings of the 17th Spring Conference on Computer Graphics (SCCG ’01), p. 156, IEEE Computer Society, Budmerice, Slovakia, April 2001.
21. J. Bittner, M. Wimmer, H. Piringer, and W. Purgathofer, “Coherent hierarchical culling: hardware occlusion queries made useful,” Computer Graphics Forum, vol. 23, no. 3, pp. 615–624, 2004.
22. D. Blythe, “The direct3D 10 system,” ACM Transactions on Graphics, vol. 25, no. 3, pp. 724–734, 2006.
23. E. W. Weisstein, “Cramer’s Rule,” http://mathworld.wolfram.com/CramersRule.html.

CHAPTER 6

Ballooning Graphics Memory Space in Full GPU Virtualization Environments

Younghun Park, Minwoo Gu, and Sungyong Park Department of Computer Science and Engineering, Sogang University, Seoul 04107, Republic of Korea

ABSTRACT

Advances in virtualization technology have enabled multiple virtual machines (VMs) to share resources in a physical machine (PM). With the widespread use of graphics-intensive applications, such as two-dimensional (2D) or 3D rendering, many graphics processing unit (GPU) virtualization solutions have been proposed to provide high-performance GPU services in a virtualized environment. Although elasticity is one of the major benefits in this environment, the allocation of GPU memory is still static in the sense that after the GPU memory is allocated to a VM, it is not possible to change the memory size at runtime. This causes underutilization of GPU memory or performance degradation of a GPU application due to the lack of GPU memory when an application requires a large amount of GPU memory. In this paper, we propose a GPU memory ballooning solution called gBalloon that dynamically adjusts the GPU memory size at runtime according to the GPU memory requirement of each VM and the GPU memory sharing overhead. The gBalloon extends the GPU memory size of a VM by detecting performance degradation due to the lack of GPU memory. The gBalloon also reduces the GPU memory size when the overcommitted or underutilized GPU memory of a VM creates additional overhead for the GPU context switch or the CPU load due to GPU memory sharing among the VMs. We implemented the gBalloon by modifying the gVirt, a full GPU virtualization solution for Intel’s integrated GPUs. Benchmarking results show that the gBalloon dynamically adjusts the GPU memory size at runtime, which improves the performance by up to 8% against the gVirt with 384 MB of high global graphics memory and 32% against the gVirt with 1024 MB of high global graphics memory.

Citation: Younghun Park, Minwoo Gu, Sungyong Park, “Ballooning Graphics Memory Space in Full GPU Virtualization Environments,” International Journal of Computer Games Technology, vol. 2019, Article ID 5240956, 11 pages, 2019. https://doi.org/10.1155/2019/5240956.

Copyright: © 2019 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

INTRODUCTION

Running graphics-intensive applications that include three-dimensional (3D) visualization and rendering in a virtualized environment creates a new challenge for high-performance graphics processing unit (GPU) virtualization solutions. GPU virtualization is a technique that allows multiple virtual machines (VMs) to share a physical GPU and run high-performance graphics applications with performance guarantees. Among a wide range of GPU virtualization techniques, application programming interface (API) remoting [1–11] is a method of intercepting API calls and passing them to the host. This method is easy to implement, but it is difficult to keep up with frequent API and GPU driver changes, and it cannot provide all GPU functions. Direct pass-through [12, 13] allocates a GPU exclusively to a single VM and allows it to directly use the GPU with no intervention by the hypervisor. This method provides performance close to that of the native environment, but GPU sharing among VMs is impossible. Currently, AWS [14] and Azure [15] provide GPU services to VMs through the direct pass-through method or through the NVIDIA GRID [16], which is a GPU virtualization solution at the hardware level.

To solve the problems of the approaches mentioned above, GPU virtualization solutions at the hypervisor level, such as gVirt [17], GPUvm [18], and VMCG [19], have been proposed. The gVirt is a full GPU virtualization technology for Intel’s integrated GPUs. Unlike dedicated or discrete GPUs, in which dedicated graphics cards have independent graphics memory, integrated GPUs share a portion of the system RAM for graphics memory (or GPU memory). The original gVirt divides the graphics memory into chunks and allocates them exclusively to each VM. As a result, a single host could create only a maximum of three VMs. The gScale [20, 21] solved this scalability problem by dividing the graphics memory into multiple small slots and letting VMs share the slots. Private and physical graphics translation tables (GTTs) are used to translate the virtual address generated by each VM into the physical address for the graphics memory. When a VM is scheduled to run, the private GTT entries of the corresponding VM are copied to the physical GTT. Every time entries are updated in the physical GTT, they are synchronized with the private GTT that each VM has.

However, the GPU memory size, which is set during the initial creation of the VM, cannot be changed dynamically. This causes the following problems. First, a VM must be restarted to change the allocated GPU memory size. The user must restart the VM to execute a GPU application that requires GPU memory larger than the current setting. As a result, the VM stops, and a service interruption is to be expected. Second, a VM that occupies more GPU memory than necessary can degrade the performance of CPU and GPU applications running on other VMs. Whenever a GPU context switch occurs, the CPU copies the private GTT entries to the physical GTT, which also takes up the CPU time of other VMs. If the memory utilization is low, this unnecessary copying overhead may degrade the performance of other VMs. In addition, the GPU context switch time increases as the GPU memory size of each VM gets larger [22]. Third, as we reported in a previous study [22], a small GPU memory size affects the performance of GPU workloads, especially when VMs run graphics operations for rendering or high-resolution display environments.

Although several studies [23–29] dynamically adjusted the memory allocation of existing VMs, in these studies memory was taken from a VM with underutilized memory and given to a VM that needed more. Because these approaches assume that memory is allocated exclusively to each VM, it is difficult to apply such ballooning techniques directly to the full GPU virtualization environment in which the same virtual GPU memory space is shared.

In this paper, we propose a dynamic GPU memory ballooning scheme called gBalloon which dynamically increases or decreases the GPU memory size allocated to each VM at runtime. The gBalloon detects performance degradation due to the lack of GPU memory and increases the GPU memory size allocated to each VM. In addition, the GPU memory size of each VM can be reduced when the overcommitted or underutilized GPU memory of a VM creates additional overhead for the GPU context switch or the CPU load due to GPU memory sharing among the VMs. We implement the gBalloon using the 2016Q4 version of gVirt. As the gScale’s GPU memory-sharing technique is also implemented, the gBalloon can scale up to 15 Linux VMs. Using various CPU and GPU benchmarks, we also show that the gBalloon dynamically adjusts the GPU memory size at runtime and outperforms the gVirt (the modified gVirt in which the gScale’s features are added) by up to 8% against the gVirt with 384 MB of high global graphics memory and 32% against the gVirt with 1024 MB of high global graphics memory.

Although the current gBalloon is mainly targeted at Intel’s integrated GPUs, its design principle can be applied to other architectures as well. For example, the proposed idea can be easily applied to other integrated GPUs from AMD and Samsung, where the system memory is used as GPU memory. In addition, we believe that other discrete GPUs with dedicated graphics memory, such as NVIDIA’s, can also benefit from the gBalloon since they also use graphics translation tables for address translation. However, special care has to be taken to reduce the memory copy overhead across the system bus if the gBalloon is implemented over discrete GPUs. As the gVirt is open source and access to the source code for the NVIDIA driver and runtime is limited, we decided to use the gVirt as a software platform to verify our proposed idea.

The rest of the paper is organized as follows. In Section 2, we outline the structure of gVirt and present the motivations behind the design of gBalloon. In Section 3, we explain the design and implementation issues of gBalloon. In Section 4, we evaluate the performance of gBalloon and compare it with that of gVirt. In Section 5, we discuss related works, and in Section 6, we conclude with suggestions for future work.


BACKGROUND AND MOTIVATION

In this section, we provide an overview of the gVirt and discuss the motivations for the proposed approach.

Overview of gVirt

The gVirt is a high-performance GPU virtualization solution that provides mediated pass-through capability [17]. The gVirt allows VMs to directly access resources that have a large effect on performance, while other privileged operations are mediated by the hypervisor. Due to the restriction on the number of simultaneous VMs in the gVirt, we modified the original gVirt over the Xen hypervisor (XenGT) and added the gScale’s scalability features [20]. Throughout this paper, we refer to this modified gVirt as gVirt.

In the gVirt, the mediator located in Dom0 emulates the virtual GPU (vGPU) for each VM. The mediator schedules the vGPUs in a round-robin fashion for fair scheduling among the VMs. Considering the high cost of the GPU context switch, each vGPU is switched at approximately 16 ms intervals, a rate at which people cannot perceive an image change [17]. When the time quantum allocated to each vGPU is fully used, the mediator saves the registers and memory information of the corresponding vGPU and restores the GPU state for the next vGPU. Because the gVirt does not support preemption at the time of this writing (although NVIDIA starts to provide preemption capability at the hardware level from the Pascal architecture, the feature is not exposed to the user’s control), it traces the submitted GPU commands and waits for the results until each vGPU can be switched safely. Therefore, if a GPU workload occupies the GPU for longer than 16 ms, it runs without preemption. To prevent each vGPU from overusing the time quantum, the gVirt places a limit on the number of GPU kernels a vGPU is allowed to run within the time quantum.

Intel’s global graphics memory is divided into two parts as shown in Figure 1: low global graphics memory and high global graphics memory. Only the GPU can access high global graphics memory, but both the CPU and the GPU can access low global graphics memory. The CPU accesses low global graphics memory through the aperture mapped to the GPU memory; thus, the amount of low global graphics memory that can be accessed depends on the aperture size. The maximum aperture size currently supported by the motherboard is 512 MB, which maps up to 512 MB of the low global graphics memory.


Figure 1: Global graphics memory structure of gVirt.

Figure 1 shows the memory mapping and management structure between global graphics memory and system memory. In the gVirt (the modified gVirt in which the gScale’s features are added), part of the low global graphics memory is shared by all vGPUs, and the high global graphics memory is divided into 64 MB slots that can also be shared among the vGPUs. The virtual address of the global graphics memory is converted into a physical address through the physical GTT. Each vGPU has a private GTT, which is a copy of the physical GTT entries corresponding to its allocated low and high global graphics memory. The private GTT entries of the currently scheduled vGPU are kept consistent with the physical GTT entries. To activate a vGPU in a GPU context switch, if its entries do not exist in the physical GTT, the state is restored by copying the private GTT to the physical GTT. However, when a vGPU is scheduled out, the CPU cannot access it through the aperture. To solve this problem, the gScale allows the CPU to access the vGPU space at all times through the fence memory space pool, which ensures the proper operation of ladder mapping for mapping the guest physical address to the host physical address [20]. The gVirt framework focuses on the acceleration of graphics-intensive applications such as 2D or 3D rendering over a virtualized environment, rather than general-purpose computing on graphics processing units (GPGPU) over clouds.
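The copy-on-context-switch behavior described above can be modeled in a few lines. This is a deliberately simplified sketch, not gVirt code; real GTT entries are page-table words, and the `VGPU` and `context_switch` names are ours.

```python
# Minimal model of private-to-physical GTT copying: on a context switch,
# entries of the incoming vGPU are copied into the physical GTT only where
# the physical GTT does not already hold them (e.g., a sharing vGPU
# overwrote them while it was running).

class VGPU:
    def __init__(self, name, private_gtt):
        self.name = name
        self.private_gtt = private_gtt        # {gfx page index: host page}

def context_switch(physical_gtt, vgpu):
    """Restore vgpu's mappings; return how many entries had to be copied."""
    copied = 0
    for page, host_page in vgpu.private_gtt.items():
        if physical_gtt.get(page) != host_page:    # missing or overwritten
            physical_gtt[page] = host_page
            copied += 1
    return copied
```

Two vGPUs that share slots keep overwriting each other's entries, so the copy cost recurs on every switch and grows with the per-VM GTT size, which is exactly the overhead the text attributes to large, underutilized allocations.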


Motivation

In the current gVirt, 64 MB of low global graphics memory and 384 MB of high global graphics memory are recommended for a Linux VM [20] because, from the experiments, those sizes are enough to support most GPU workloads without performance degradation or crashes. However, we showed in a previous study [22] that large high global graphics memory can sometimes increase the performance of GPU workloads. We also observed in that study that large high global graphics memory can increase the possibility of overlapping address spaces, which may incur large overhead in a GPU context switch and thus degrade the performance of other VMs. Furthermore, in an environment where the VMs require large global graphics memory, it is highly likely that their GTT entries do not exist in the physical GTT at a GPU context switch because the GPU memory space is shared among the VMs. If a large amount of GPU memory is allocated although it is not fully utilized, unnecessary copies of the GTT entries can occur in a GPU context switch. This increases the time for the GPU context switch, which also decreases the time for each vGPU to occupy the GPU per unit time. As a result, not only is the performance of the GPU applications running on all VMs degraded, but the time for the CPU to copy the GTT entries also increases. Therefore, the performance of the CPU applications running on the VMs can be degraded as well.

To verify this, we conducted two experiments that identify the effects of copying GTT entries on the performance of GPU applications due to excessive occupation of high global graphics memory. For the two experiments, 384 MB and 1024 MB are used for the high global graphics memory. In principle, any size can be used as long as it is larger than 384 MB, smaller than the physical GPU memory, and a multiple of the slot size. Figure 2 shows the sum of the frames per second (FPS) for each VM by executing Nexuiz 3D benchmarks from the Phoronix Test Suite [30] as we increase the number of VMs from 3 to 15.

As shown in Figure 2, when the size of the high global graphics memory is small (384 MB), the VMs start to share the GPU memory from the point when the number of VMs reaches around 10. Then, the performance of the VMs degrades as we increase the number of VMs to 12 or 15. However, when the size of the high global graphics memory is large (1024 MB), the VMs start to share GPU memory from a relatively small number of VMs. This means that the copying of the GTT entries in the GPU context switch causes a very large performance degradation. When the number of VMs is 6 or 9, the performance at 1024 MB degraded by approximately 3.5 times compared with the performance at 384 MB.

Figure 2: Performance degradation due to GPU memory sharing.

Overall, the performance of the VMs is highly affected by the size of the GPU memory allocated to each VM, and the memory size must be adjusted at runtime to optimize the performance.

DESIGN OF gBalloon

In this section, we describe the design and implementation of the gBalloon, which adjusts the GPU memory size of VMs at runtime. As we identified in the previous section, the performance of a GPU application can be degraded due to the static allocation of GPU memory. The gBalloon monitors the lack of GPU memory in VMs and then allocates the required amount of GPU memory to the corresponding VM. Furthermore, the gBalloon also checks the performance of CPU and GPU applications and decreases the GPU memory size when the performance of each VM degrades due to the GPU memory sharing among the VMs. The gBalloon is implemented by modifying the gVirt 2016Q4 release [31]. In the following, we present the GPU memory expansion and reduction strategies implemented in the gBalloon in detail.

GPU Memory Expansion Strategy

When a VM is created, the GPU driver of the VM obtains the available range of GPU memory from the GPU driver in Dom0 and balloons the requested memory area, excluding the space that has been allocated to the VM. Then, the GPU driver of the VM searches for memory space excluding the ballooned area when allocating a new memory object. If the space for allocating an object is insufficient, the GPU driver creates empty space by returning existing memory objects. As a result, the performance of a GPU application that requires graphics-intensive operations, such as rendering, is degraded because the same objects are frequently returned. To reduce this overhead, the gBalloon detects a VM’s lack of GPU memory by tracing the number of memory object returns at runtime and reduces the ballooned area of other VMs by the required amount of memory space, so that the VM lacking memory can use additional GPU memory.

Figure 3 shows the process in which the gBalloon allocates additional GPU memory to the guest. When a guest GPU driver must return an existing object due to the lack of GPU memory, the following four steps take place. Step 1: the guest requests additional GPU memory space from the host. Step 2: to expand the GPU memory space by the requested size, the host chooses the optimal strategy that minimizes the GPU memory-sharing overhead based on the GPU memory adjustment algorithm, which will be explained later. Step 3: based on the results of the GPU memory adjustment algorithm, the host expands the GPU memory space of the vGPU and updates the corresponding GTT entries. Step 4: the guest receives information about the GPU memory expansion from the host and shrinks the existing ballooned area so that the new space can be used to allocate objects. For example, as shown in Figure 3, the high global graphics memory area of vGPU1 is expanded to the right, and the shared memory areas of vGPU1 and vGPU3 are increased accordingly.

Figure 3: GPU memory expansion process.
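The four-step handshake above can be sketched as a minimal host/guest model. All names here (`Host`, `Guest`, `request_expand`) are illustrative assumptions; the real gBalloon operates on GTT entries and ballooned ranges inside the guest driver.

```python
SLOT_MB = 64  # slot granularity of the gScale-style memory layout

class Host:
    """Models Steps 2-3: choose and grant slots from the shared pool."""
    def __init__(self, total_slots):
        self.free_slots = set(range(total_slots))
        self.owned = {}                           # vm name -> set of slot ids

    def request_expand(self, vm, n):
        # A faithful implementation would pick slots that minimize sharing
        # (the GPU memory adjustment algorithm); here we grant any free ones.
        grant = set(sorted(self.free_slots)[:n])
        self.free_slots -= grant
        self.owned.setdefault(vm, set()).update(grant)
        return grant

class Guest:
    """Models Steps 1 and 4: request space, then deflate the balloon."""
    def __init__(self, name):
        self.name = name
        self.usable_slots = set()                 # slots outside the balloon

    def expand(self, host, n):
        granted = host.request_expand(self.name, n)   # Step 1 -> Steps 2-3
        self.usable_slots |= granted                  # Step 4: shrink balloon
        return len(self.usable_slots) * SLOT_MB       # usable memory in MB
```

A guest that starts returning objects for lack of memory would call `expand`, after which new objects can be allocated in the slots removed from its ballooned area.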


GPU Memory Reduction Strategy

As the GPU memory is expanded by the GPU memory expansion requests of a VM, the size of the GPU memory shared among the VMs can also be increased. Consequently, the probability that an entry will not exist in the physical GTT during the GPU context switch is increased, resulting in more GTT entry copies. This degrades the performance of the GPU application. In addition, as the number of entries to be copied is also increased, the CPU consumes more time copying the GTT entries, thus degrading the performance of the CPU application.

The gBalloon monitors the CPU cycles consumed for copying GTT entries during the GPU context switch to check the performance degradation of the GPU application. Provided that N is the number of CPUs in the host and C is the CPU time consumed for copying GTT entries, the rate of time Rcopy consumed by the CPU to copy the GTT entries for unit time t can be expressed as follows:

Rcopy = C / (N × t)    (1)

Because one CPU processes the copying of the GTT entries, a total of N CPUs consume the cycles for the unit time. Thus, the number of CPUs must be reflected in Rcopy. To check the performance degradation of the CPU application due to the competition among the vCPUs, the gBalloon uses the steal time. The steal time is the time when the vCPU of a VM exists in the ready queue. Assume that wij is the steal time of the vCPUj of VMi and sij is the time when the vCPUj of VMi exists in other queues. Then, the rate W of the steal times in all VMs can be expressed as follows:

W = Σi Σj wij / Σi Σj (wij + sij)    (2)

A large value of W means that there is severe competition among the VMs. Therefore, the state of a physical machine (PM) can be classified according to the values of W and Rcopy. If both W and Rcopy are large, then the performance of the CPU application is being degraded by the copying of the GTT entries. In this case, the host must prevent the performance degradation of the CPU and GPU applications by rejecting the GPU memory expansion requests of the VMs and reducing the GPU memory sharing among the VMs. In contrast, if W is large but Rcopy is small, there is severe competition among the VMs, but it is not caused by the copying of the GTT entries. In this case, the performance of the CPU application could still be degraded by an increase in Rcopy; thus, the GTT size should be reduced if possible. However, if W is small but Rcopy is large, there is no competition among the VMs, but many copies of the GTT entries are occurring. In this case, if the VMs run CPU applications, the competition among the vCPUs can become more severe due to the copies of the GTT entries, and the performance of the CPU application can degrade. Therefore, the host should try to reduce the sharing of GPU memory as much as possible. Finally, if both W and Rcopy are small, there is no overhead in the current PM, and the host does not need to take any action.

Based on the observations above, we define a metric that quantifies the degree of overhead in the host using the values of W and Rcopy. f(W, Rcopy), which represents the degree of overhead incurred due to the copying of the GTT entries, can be expressed as follows:

f(W, Rcopy) = W / Wmax + Rcopy / Rcopy,max    (3)

where Wmax and Rcopy,max are the maximum values of W and Rcopy. These parameters normalize the overall value by giving the same weight to W and Rcopy. The maximum W and Rcopy are dependent upon a particular hardware platform and are generally determined through experiments. The gBalloon calculates f(W, Rcopy) periodically and compares the value with two thresholds, Thresholdlow and Thresholdhigh. Based on the comparison, the gBalloon applies one of the following GPU memory adjustment policies. The threshold values are experimentally determined between 0 and 2, as the value of f(W, Rcopy) ranges from 0 to 2. For the experiments, we use 0.5 (25% of the maximum value) and 1 (50% of the maximum value) for Thresholdlow and Thresholdhigh, respectively.

If f(W, Rcopy) is smaller than Thresholdlow, the gBalloon approves all requests for GPU memory expansion and does not reduce the GPU memory of the VMs. If f(W, Rcopy) is larger than Thresholdlow, the gBalloon starts to decrease the size of the GPU memory allocated to each VM to reduce the overhead of the GPU memory sharing among the VMs. Instead of reducing the GPU memory of all VMs, the gBalloon reduces the GPU memory of the VM that occupies a large amount of GPU memory and has the lowest GPU memory utilization. If f(W, Rcopy) is larger than Thresholdhigh, the gBalloon rejects all requests for GPU memory expansion from the VMs and reduces a space corresponding to two 64 MB slots regardless of the GPU memory usage of each VM. The reduction is performed so as to minimize the GPU memory sharing, using Algorithm 1 described in the next subsection.

Algorithm 1: GPU memory adjustment algorithm.
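The metric and threshold policy above can be written down directly. In the sketch below, W_MAX and R_MAX stand for the platform-dependent maxima of W and Rcopy; their concrete values, like the function names, are placeholders of ours, not values from the paper.

```python
W_MAX, R_MAX = 0.5, 0.5       # assumed platform maxima for W and Rcopy
TH_LOW, TH_HIGH = 0.5, 1.0    # 25% and 50% of f's maximum value of 2

def r_copy(copy_time, n_cpus, t):
    """Eq. (1)-style rate: fraction of total CPU time spent copying GTT
    entries over the unit interval t (copy_time in the same unit as t)."""
    return copy_time / (n_cpus * t)

def steal_rate(vcpus):
    """Eq. (2)-style rate: vcpus is a list of (w_ij, s_ij) pairs over all
    vCPUs of all VMs; returns the overall share of steal time."""
    stolen = sum(w for w, _ in vcpus)
    total = sum(w + s for w, s in vcpus)
    return stolen / total if total else 0.0

def overhead(w, rc):
    """Eq. (3): f(W, Rcopy) = W/W_max + Rcopy/Rcopy_max, in [0, 2]."""
    return w / W_MAX + rc / R_MAX

def policy(f):
    """Map f(W, Rcopy) to the three adjustment policies described above."""
    if f < TH_LOW:
        return "approve expansions"
    if f <= TH_HIGH:
        return "reclaim from least-utilized VM"
    return "reject expansions and reclaim two slots"
```

For instance, with both components at half their maxima, f evaluates to 1 and the middle policy applies; anything above Thresholdhigh triggers the unconditional two-slot reclamation.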


GPU Memory Adjustment Algorithm

The GPU memory is adjusted so as to minimize the GPU memory sharing with the existing VMs. When the gBalloon decides the number of GPU memory slots and the target vGPU to adjust, based on the GPU memory expansion and reduction strategies discussed earlier, the spaces at both sides of the GPU memory space allocated to the VM are increased or decreased. For example, let us assume that vGPU4 initiated a GPU memory expansion request for two slots when there are four vGPUs, vGPU1 to vGPU4, that require two, two, two, and one slots, respectively. Also assume that the vGPUs currently occupy the 64 MB slots as shown in Figure 4. In this case, there are three possible methods for expanding the two slots as requested by vGPU4: (1) expanding two slots to the left side, (2) expanding one slot each to the left and right sides, and (3) expanding two slots to the right side. In the first case, vGPU4 shares slots 1 and 2 with vGPU1, resulting in two shared slots in total. In the second case, vGPU4 shares slot 2 with vGPU1 and slot 4 with vGPU2 and vGPU3, resulting in three shared slots in total. In the third case, vGPU4 shares slot 4 with vGPU2 and vGPU3 and slot 5 with vGPU3, resulting in three shared slots in total. Therefore, the strategy of expanding two slots to the left minimizes the number of shared slots. The gBalloon measures the number of shared slots and expands the GPU memory allocated to the VM so as to minimize the sharing overhead among the VMs.

Figure 4: Example of GPU memory expansion.

The method for reducing GPU memory is the same as that for expanding it. In Figure 4, when the GPU memory of vGPU3 should be reduced by one slot, reducing slot 4 rather than slot 5 can minimize the GPU memory sharing among the VMs. Thus, the policy for minimizing the GPU memory sharing can maximize the effect of the predictive-copy technique [21] that copies the GTT entries in advance by predicting the next scheduled vGPU before the GPU context switch. The detailed algorithms for memory expansion and reduction are presented below.
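The shared-slot counting behind the expansion choice can be sketched as follows. The slot layout mirrors the worked example above (vGPU1 to vGPU4 holding slots {1, 2}, {3, 4}, {4, 5}, and {3}, consistent with the three cases described); helper names are illustrative, not the paper's implementation.

```python
# Count how many (slot, other-vGPU) overlaps each expansion option creates,
# then pick the option with the fewest shared slots, as gBalloon does.

layout = {"vGPU1": {1, 2}, "vGPU2": {3, 4}, "vGPU3": {4, 5}, "vGPU4": {3}}

def shared_slots(new_slots, requester, layout):
    """Number of sharings the newly added slots would introduce."""
    return sum(
        1
        for slot in new_slots
        for vgpu, owned in layout.items()
        if vgpu != requester and slot in owned
    )

def best_expansion(requester, count, layout):
    """Try every left/right split of `count` new slots; return the least-shared."""
    lo, hi = min(layout[requester]), max(layout[requester])
    options = {}
    for left in range(count + 1):
        right = count - left
        new = set(range(lo - left, lo)) | set(range(hi + 1, hi + 1 + right))
        options[(left, right)] = shared_slots(new, requester, layout)
    return min(options, key=options.get), options
```

For the Figure 4 example, expanding two slots to the left yields two sharings, while the other two options yield three each, matching the text.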

PERFORMANCE EVALUATION

In this section, we compare the performance of the gBalloon with that of the gVirt using various workloads. Table 1 shows the experimental environment for the performance evaluation. The global graphics memory size of the host is set to 4 GB, which consists of 256 MB of low global graphics memory and 3840 MB of high global graphics memory. Dom0 does not share the global graphics memory with other domains, but the guest VMs share 64 MB of the low global graphics memory and 3456 MB of the high global graphics memory, excluding the Dom0 area. The low global graphics memory size of every guest VM is set to 64 MB as recommended in [20], whereas the high global graphics memory size is set differently depending on the experiments.

Table 1: Evaluation environment
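A quick arithmetic check of the memory partitioning described above. The Dom0 reservations are derived from the stated totals, not given explicitly in the text.

```python
# The host's 4 GB global graphics memory splits into low and high regions;
# guests share a fixed portion of each, the rest is implicitly Dom0's.

TOTAL_LOW_MB, TOTAL_HIGH_MB = 256, 3840    # stated host partitioning
GUEST_LOW_MB, GUEST_HIGH_MB = 64, 3456     # stated guest-shared regions

dom0_low_mb = TOTAL_LOW_MB - GUEST_LOW_MB      # derived: 192 MB for Dom0
dom0_high_mb = TOTAL_HIGH_MB - GUEST_HIGH_MB   # derived: 384 MB for Dom0

assert TOTAL_LOW_MB + TOTAL_HIGH_MB == 4 * 1024  # 4 GB total
assert GUEST_HIGH_MB % 64 == 0                   # whole number of 64 MB slots
```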

The experiments use four 3D benchmarks and four 2D benchmarks. To measure the 3D performance, Lightsmark, Openarena, Nexuiz, and Urbanterror of the Phoronix Test Suite [30] and Unigine Valley (valley) [32], which requires many rendering operations, are used. To measure the 2D performance, Firefox-asteroids (Firefox-ast), Firefox-scrolling (Firefox-scr), gnome-system-monitor (gnome), and Midori of Cairo-perf-trace [33] are used. The performance of the 3D benchmarks is measured by the average number of FPS, and the performance of the 2D benchmarks is measured by the execution time. Furthermore, the NAS Parallel Benchmark (NPB) [34] is used to measure the CPU overhead caused by the GPU context switch.

Performance Comparison Using a Single GPU Application

In this subsection, the performance of the gVirt and the gBalloon is compared when valley, which requires many rendering operations, is executed by multiple VMs. For the gVirt, the gVirt-384 (a gVirt version with the high global graphics memory set to 384 MB) and the gVirt-1024 (a gVirt version with the high global graphics memory set to 1024 MB) are used for the performance comparison. For the 2D and 3D benchmarks that do not demand many rendering operations, additional GPU memory is not allocated because they require only a small amount of high global graphics memory. Therefore, valley is used to compare the performance of the dynamic GPU memory expansion policy of the gBalloon with that of the gVirt. To observe the performance variations due to the increase in the number of VMs and the change in the degree of GPU memory sharing, experiments were performed in which the number of VMs was increased in increments of three. Figure 5 depicts the performance of the gVirt-384, the gVirt-1024, and the gBalloon, normalized to the performance of the gVirt-384 by the sum of the FPS values of all VMs, when there are 3, 6, 9, 12, or 15 VMs. When the number of VMs is six or fewer, the gVirt-1024 shows better performance than the gVirt-384 because the overhead from the GPU memory sharing is small. The gBalloon also shows performance similar to that of the gVirt-1024 because the gBalloon allocates the required amount of GPU memory to the VMs. As the number of VMs increases, the performance of the gVirt-1024 and the gBalloon becomes more similar because the benefit of additional GPU memory shrinks while the performance degradation due to GPU memory sharing grows. For this reason, all implementations show similar performance when the number of VMs is nine or higher.
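The normalization used for Figure 5 can be sketched as below: aggregate FPS across all VMs for each configuration, then divide by the gVirt-384 baseline. The FPS numbers in the test are invented for illustration.

```python
# Normalize per-configuration aggregate FPS to a baseline configuration,
# as done for the bars in Figure 5.

def normalized_performance(fps_by_config, baseline="gVirt-384"):
    """fps_by_config maps a configuration name to the per-VM FPS list."""
    totals = {cfg: sum(fps_list) for cfg, fps_list in fps_by_config.items()}
    base = totals[baseline]
    return {cfg: total / base for cfg, total in totals.items()}
```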


Figure 5: Performance of a single GPU benchmark.

Performance Comparison Using Multiple GPU Applications

In this subsection, the performance of gVirt-384, gVirt-1024, and gBalloon is compared by running various types of GPU applications on 15 VMs. As shown in Table 2, the 15 VMs run randomly selected 2D and 3D benchmarks. Because various benchmarks are mixed, it is possible to compare the degree of performance degradation of the GPU applications due to the GPU memory-sharing overhead that may occur as the GPU memory requirements change.

Table 2: The benchmark sequence that each VM performs


Figure 6 shows the performance comparison when the randomly selected 2D and 3D benchmarks shown in Table 2 are executed by 15 VMs. All performance values are normalized to those of the gVirt-384. Notably, valley in the gVirt-1024 shows better performance than in the gVirt-384 even though the overhead due to GPU memory sharing is large. However, the performance of the other 3D benchmarks decreases by 50% or more, and the performance of the 2D benchmarks by 25%. Thus, the performance of all VMs drops by 24%, on average, compared with that of the gVirt-384. In the case of the gBalloon, the performance of valley is maintained because the GPU memory size of each VM expands as the required amount of GPU memory increases. Moreover, the gBalloon minimizes the overhead due to GPU memory sharing by dynamically adjusting the GPU memory size according to the GPU memory usage. As a result, the GPU context switch time decreases, and the performance of valley increases by up to 28%. The performance of the other benchmarks is similar to that of the gVirt-384. Figure 7 shows a summary of the performance of all benchmarks. The performance of the gBalloon is higher by 8% than that of the gVirt-384 and by 32% than that of the gVirt-1024.

Figure 6: Performance of each GPU benchmark.


Figure 7: Performance summary of all benchmarks.

Performance Comparison Using CPU and GPU Applications

In this subsection, the performance degradation of CPU applications caused by the copying of the private GTT entries is analyzed. Among the 15 VMs, 7 VMs run CPU workloads, whereas the remaining 8 VMs run GPU workloads. The CPU workload uses cg of the NPB benchmark, which finds the smallest eigenvalue of a matrix using the conjugate gradient method. Figure 8 shows the performance of the CPU and GPU benchmarks normalized to that of the gVirt-384. In the case of the gVirt-1024, the performance of cg decreases by 19% compared with that of the gVirt-384. This is because the CPU spends a great deal of time copying the private GTT entries due to the large amount of GPU memory shared among the VMs. Furthermore, because of this overhead, the performance of valley increases only slightly, by approximately 4%. In contrast, the gBalloon limits the increase in the GPU memory size of the VMs by detecting the CPU overhead caused by the copying of the GTT entries. As a result, the performance of cg decreases by only approximately 1.5%, and the performance of valley increases by approximately 2% compared with that of the gVirt-384.


Figure 8: Performance when CPU and GPU benchmarks are performed at the same time.

Overhead and Sensitivity Analysis

In this subsection, the performance of the gBalloon and the gVirt in a single-VM environment is compared. High global graphics memory sizes of 384 MB and 1024 MB are set for the VMs of the gBalloon and the gVirt, respectively. Figure 9 shows the adaptive behavior of the GPU memory slots of the VM over time when valley is executed with the dynamic GPU memory expansion policy of the gBalloon. Valley is composed of 18 scenes in total, and the amount of GPU memory required differs for each scene. When valley first starts, the required amount of GPU memory increases sharply, and the gBalloon expands three slots at 2-second intervals. Then, one slot is expanded for several scenes. The number of slots increases up to 13, and the size of the high global graphics memory of the VM increases to 832 MB.
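The slot-to-memory arithmetic behind Figure 9 is straightforward: high global graphics memory grows in 64 MB slot units, from the 384 MB starting size (6 slots) up to the 13-slot peak reported above.

```python
# Convert gBalloon's slot counts into high global graphics memory sizes.

SLOT_MB = 64  # one GPU memory slot

def high_memory_mb(slots):
    """High global graphics memory corresponding to a slot count."""
    return slots * SLOT_MB

assert high_memory_mb(6) == 384    # initial size of the gBalloon VM here
assert high_memory_mb(13) == 832   # peak reached while running valley
```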

Figure 9: Changes in GPU memory size allocated to VMs when GPU benchmarking requiring a lot of GPU memory is executed.


Figure 10 shows the performance of the gBalloon for each scene and the overall performance, normalized to that of the gVirt. As shown in Figure 10, the overall performance of the gBalloon is lower by approximately 1.9% than that of the gVirt. This is because the slots are increased one by one, causing performance degradation due to a temporary lack of GPU memory despite the sharp increase in the amount of GPU memory required in the early scenes. From the 10th scene, when the number of slots reaches 12, the performance degradation disappears because expansion requests are no longer frequent and GPU memory is no longer lacking. Thus, the FPS values of the gVirt and the gBalloon are similar.

Figure 10: Performance comparison of VMs when GPU benchmarking requiring a lot of GPU memory is executed.

RELATED WORKS

Kato et al. [35], Wang et al. [36], Ji et al. [37], and Becchi et al. [38] proposed technologies for solving the problem of insufficient GPU memory when compute unified device architecture (CUDA) applications are executed in the NVIDIA GPU environment. When the amount of GPU memory is insufficient, data in the GPU memory are moved to the system memory to secure space in the GPU memory, which is then allocated to the applications. However, this copy operation has a large overhead when performed at runtime, and the user must use a modified API. Kehne et al. [39] and Kehne et al. [40] proposed swap policies for reducing the overhead at runtime and improving the resource fairness among GPU applications and the utilization of GPU memory. When the GPU memory is insufficient, GPUswap [39] selects an application that occupies the largest amount of GPU memory and moves one of its memory chunks to the system memory; the chunk itself is chosen at random from that application. However, because the chunk to be evicted is randomly selected, the performance of the corresponding application may be degraded if highly reusable data are removed from the GPU memory. To reduce this overhead, GPrioSwap [40] determines the priority of each chunk based on the memory access patterns of the GPU applications and moves the chunk with the lowest priority when the GPU memory is insufficient.

Studies have also been conducted to prevent program crashes when the GPU is shared between containers. Kang et al. [41] proposed ConVGPU, a solution that limits the amount of GPU memory that can be allocated to each container. When a container asks to use more than the limited GPU memory size, ConVGPU rejects the request. In contrast, when the GPU memory is exhausted, ConVGPU makes the container wait until GPU memory becomes available, even if the requested amount of memory is less than the limited memory size. However, these studies targeted discrete GPUs, whose data are transferred through the PCIe bus, and cannot be directly applied to the heterogeneous system architecture, in which the system memory is used as the GPU memory and data copying between the CPU and the GPU is carried out through a zero-copy buffer.

To manage the memory of VMs efficiently, a memory overcommitment technique that decreases or increases the memory allocated to VMs is used. Waldspurger [23], Zhou et al. [24], Zhao et al. [25], and related approaches track the access frequencies of pages by nullifying the translation look-aside buffer (TLB) entries of randomly selected pages. Based on this, the least frequently accessed pages are preferentially reclaimed from the VMs. However, this method may cause performance degradation due to the invalidated TLB entries. In addition, in [26], [27], [28], and [29], the problem of an inability to respond to sudden changes in VMs' memory demands due to the cyclic overcommitment exists.

To solve this problem, the memory pressure aware (MPA) ballooning [27] applies different memory return policies by distinguishing the degree of memory pressure of each VM. The MPA reduces the performance degradation caused by page return by selecting pages with a high probability of becoming the least accessed as the object of return, using the Linux active and inactive lists. Furthermore, the MPA responds to the VMs' unexpected memory requests by immediately reallocating the memory for sudden memory requests and returning the memory slowly. Recently, Park et al. [22] proposed a dynamic memory management technique for Intel's integrated GPU, called DymGPU, that provides two memory allocation policies: size- and utilization-based algorithms. Although DymGPU improves the performance of VMs by minimizing the overlap of the graphics memory space among VMs and thus reduces the context switch overhead, DymGPU's allocation is still static, and the memory size cannot be changed at runtime.

CONCLUSION AND FUTURE WORKS

In GPU virtualization, due to the static allocation of GPU memory, the performance of VMs that require more GPU memory can be degraded, or the GPU application can crash. The gBalloon, proposed in this paper, improves the performance of VMs suffering from a lack of GPU memory by dynamically adjusting the GPU memory size allocated to each VM. Moreover, the gBalloon detects the increase in overhead due to GPU memory sharing and reduces the GPU memory size of the VMs that unnecessarily occupy a large amount of GPU memory. Consequently, the GPU context switch time decreases, and the performance of the GPU applications increases. Furthermore, the performance of the CPU applications is also preserved because the CPU load is reduced. The study demonstrated through experiments that the performance of the gBalloon improves by up to 32% when compared with the performance of the gVirt with 1024 MB of high global graphics memory. Currently, the gBalloon increases or decreases only the spaces at both sides of the GPU memory region allocated to each VM, which limits how well the layout can avoid sharing. This limitation can be addressed by allocating non-consecutive spaces of small slot units rather than consecutive GPU memory spaces to VMs. We are currently investigating this issue.

ACKNOWLEDGMENTS

This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (2017M3C4A7080245).


REFERENCES

1. C. Reaño, F. Silla, G. Shainer, and S. Schultz, “Local and remote GPUs perform similar with EDR 100G InfiniBand,” in Proceedings of the Industrial Track of the 16th International Middleware Conference, Vancouver, BC, Canada, December 2015.
2. J. Duato, A. J. Pena, F. Silla, R. Mayo, and E. S. Quintana-Orti, “rCUDA: reducing the number of GPU-based accelerators in high performance clusters,” in Proceedings of the International Conference on High Performance Computing & Simulation, Caen, France, June 2010.
3. L. Shi, H. Chen, J. Sun, and K. Li, “vCUDA: GPU-accelerated high-performance computing in virtual machines,” IEEE Transactions on Computers, vol. 61, no. 6, pp. 804–816, 2012.
4. Z. Qi, J. Yao, C. Zhang, M. Yu, Z. Yang, and H. Guan, “VGRIS: virtualized GPU resource isolation and scheduling in cloud gaming,” ACM Transactions on Architecture and Code Optimization (TACO), vol. 11, no. 2, pp. 1–25, 2014.
5. S. M. Jang, W. Choi, and W. Y. Kim, “Client rendering method for desktop virtualization services,” ETRI Journal, vol. 35, no. 2, pp. 348–351, 2013.
6. G. Giunta, R. Montella, G. Agrillo, and G. Coviello, “A GPGPU transparent virtualization component for high performance computing clouds,” in European Conference on Parallel Processing, Springer, Berlin, Heidelberg, 2010.
7. R. Montella, G. Giunta, and G. Laccetti, “Virtualizing high-end GPGPUs on ARM clusters for the next generation of high performance cloud computing,” Cluster Computing, vol. 17, no. 1, pp. 139–152, 2014.
8. R. Montella, G. Giunta, G. Laccetti et al., “On the virtualization of CUDA based GPU remoting on ARM and X86 machines in the GVirtuS framework,” International Journal of Parallel Programming, vol. 45, no. 5, pp. 1142–1163, 2017.
9. S. Xiao, P. Balaji, Q. Zhu et al., “VOCL: an optimized environment for transparent virtualization of graphics processing units,” in Proceedings of Innovative Parallel Computing (InPar), San Jose, CA, USA, May 2012.
10. C. Zhang, J. Yao, Z. Qi, M. Yu, and H. Guan, “vGASA: adaptive scheduling algorithm of virtualized GPU resource in cloud gaming,” IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 11, pp. 3036–3045, 2014.
11. C. Lee, S.-W. Kim, and C. Yoo, “VADI: GPU virtualization for an automotive platform,” IEEE Transactions on Industrial Informatics, vol. 12, no. 1, pp. 277–290, 2016.
12. D. Abramson, “Intel virtualization technology for directed I/O,” Intel Technology Journal, vol. 10, no. 3, 2006.
13. C.-T. Yang, J.-C. Liu, H.-Y. Wang, and C.-H. Hsu, “Implementation of GPU virtualization using PCI pass-through mechanism,” Journal of Supercomputing, vol. 68, no. 1, pp. 183–213, 2014.
14. Amazon high performance computing cloud using GPU, http://aws.amazon.com/hpc/.
15. R. Jennings, Cloud Computing with the Windows Azure Platform, John Wiley & Sons, Hoboken, NJ, USA, 2010.
16. A. Herrera, NVIDIA GRID: Graphics Accelerated VDI with the Visual Performance of a Workstation, Nvidia Corp, Santa Clara, CA, USA, 2014.
17. K. Tian, Y. Dong, and D. Cowperthwaite, “A full GPU virtualization solution with mediated pass-through,” in Proceedings of the 2014 USENIX Annual Technical Conference (USENIX ATC 14), Philadelphia, PA, USA, June 2014.
18. Y. Suzuki, S. Kato, H. Yamada, and K. Kono, “GPUvm: why not virtualizing GPUs at the hypervisor?” in Proceedings of the 2014 USENIX Annual Technical Conference (USENIX ATC 14), Philadelphia, PA, USA, June 2014.
19. H. Tan, Y. Tan, X. He, K. Li, and K. Li, “A virtual multi-channel GPU fair scheduling method for virtual machines,” IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 2, pp. 257–270, 2019.
20. M. Xue, “gScale: scaling up GPU virtualization with dynamic sharing of graphics memory space,” in Proceedings of the 2016 USENIX Annual Technical Conference (USENIX ATC 16), Denver, CO, USA, June 2016.
21. M. Xue, J. Ma, W. Li et al., “Scalable GPU virtualization with dynamic sharing of graphics memory space,” IEEE Transactions on Parallel and Distributed Systems, vol. 29, no. 8, pp. 1823–1836, 2018.
22. Y. Park, M. Gu, S. Yoo, Y. Kim, and S. Park, “DymGPU: dynamic memory management for sharing GPUs in virtualized clouds,” in Proceedings of the 2018 IEEE 3rd International Workshops on Foundations and Applications of Self* Systems (FAS*W), Trento, Italy, September 2018.
23. C. A. Waldspurger, “Memory resource management in VMware ESX server,” ACM SIGOPS Operating Systems Review, vol. 36, pp. 181–194, 2002.
24. P. Zhou, V. Pandey, J. Sundaresan, A. Raghuraman, Y. Zhou, and S. Kumar, “Dynamic tracking of page miss ratio curve for memory management,” ACM SIGOPS Operating Systems Review, vol. 38, no. 5, p. 177, 2004.
25. W. Zhao, Z. Wang, and Y. Luo, “Dynamic memory balancing for virtual machines,” ACM SIGOPS Operating Systems Review, vol. 43, no. 3, pp. 37–47, 2009.
26. F. Guo, Understanding Memory Resource Management in VMware vSphere 5.0, VMware, Inc., Palo Alto, CA, USA, 2011.
27. J. Kim, V. Fedorov, P. V. Gratz, and A. L. Narasimha Reddy, “Dynamic memory pressure aware ballooning,” in Proceedings of the 2015 International Symposium on Memory Systems, Washington, DC, USA, October 2015.
28. P. Lu and K. Shen, “Virtual machine memory access tracing with hypervisor exclusive cache,” in Proceedings of the USENIX Annual Technical Conference, Santa Clara, CA, USA, June 2007.
29. D. Magenheimer, C. Mason, D. McCracken et al., “Transcendent memory and linux,” in Proceedings of the Linux Symposium, Montreal, QC, Canada, July 2009.
30. Phoronix Test Suite, http://phoronix-test-suite.com.
31. Intel GVT-G (XENGT) public release-Q4’2016, https://01.org/igvt-g/blogs/wangbo85/2017/intel-gvt-g-xengt-public-release-q42016.
32. UNIGINE Valley, https://benchmark.unigine.com/valley.
33. Cairo-perf-trace, http://www.cairographics.org.
34. D. H. Bailey, E. Barszcz, J. T. Barton et al., “The NAS parallel benchmarks,” International Journal of Supercomputing Applications, vol. 5, no. 3, pp. 63–73, 1991.
35. S. Kato, M. McThrow, C. Maltzahn, and S. Brandt, “Gdev: first-class GPU resource management in the operating system,” in Proceedings of the 2012 USENIX Annual Technical Conference (USENIX ATC 12), Boston, MA, USA, June 2012.
36. K. Wang, X. Ding, R. Lee, S. Kato, and X. Zhang, “GDM,” ACM SIGMETRICS Performance Evaluation Review, vol. 42, no. 1, pp. 533–545, 2014.
37. F. Ji, H. Lin, and X. Ma, “RSVM: a region-based software virtual memory for GPU,” in Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques, Edinburgh, UK, 2013.
38. M. Becchi, K. Sajjapongse, I. Graves, A. Procter, V. Ravi, and S. Chakradhar, “A virtual memory based runtime to support multi-tenancy in clusters with GPUs,” in Proceedings of the 21st International Symposium on High-Performance Parallel and Distributed Computing, Delft, Netherlands, June 2012.
39. J. Kehne, J. Metter, and F. Bellosa, “GPUswap: enabling oversubscription of GPU memory through transparent swapping,” ACM SIGPLAN Notices, vol. 50, no. 7, 2015.
40. J. Kehne, M. Hillenbrand, J. Metter, M. Gottschlag, M. Merkel, and F. Bellosa, “GPrioSwap: towards a swapping policy for GPUs,” in Proceedings of the 10th ACM International Systems and Storage Conference, Haifa, Israel, May 2017.
41. D. Kang, T. J. Jun, D. Kim, J. Kim, and D. Kim, “ConVGPU: GPU management middleware in container based virtualized environment,” in Proceedings of the 2017 IEEE International Conference on Cluster Computing (CLUSTER), Honolulu, HI, USA, September 2017.

CHAPTER 7

Platform for Distributed 3D Gaming

A. Jurgelionis1, P. Fechteler2, P. Eisert2, F. Bellotti1, H. David3, J. P. Laulajainen4, R. Carmichael5, V. Poulopoulos6,7, A. Laikari8, P. Perälä4, A. De Gloria1, and C. Bouras6,7

1 Department of Biophysical and Electronic Engineering, University of Genoa, Via Opera Pia 11a, 16145 Genoa, Italy
2 Computer Vision & Graphics, Image Processing Department, Heinrich-Hertz-Institute Berlin, Fraunhofer-Institute for Telecommunications, 10587 Berlin, Germany
3 R&D Department, Exent Technologies Ltd., 25 Bazel Street, P.O. Box 2645, Petach Tikva 49125, Israel
4 Converging Networks Laboratory, VTT Technical Research Centre of Finland, 90571 Oulu, Finland
5 Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6N, UK
6 Research Unit 6, Research Academic Computer Technology Institute, N. Kazantzaki, Panepistimioupoli, 26504 Rion, Greece

Citation: A. Jurgelionis, P. Fechteler, P. Eisert, F. Bellotti, H. David, J. P. Laulajainen, R. Carmichael, V. Poulopoulos, A. Laikari, P. Perälä, A. De Gloria, C. Bouras, “Platform for Distributed 3D Gaming”, International Journal of Computer Games Technology, vol. 2009, Article ID 231863, 15 pages, 2009. https://doi.org/10.1155/2009/231863. Copyright: © 2009 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

7 Computer Engineering and Informatics Department, University of Patras, 26500 Patras, Greece
8 Software Architectures and Platforms Department, VTT Technical Research Centre of Finland, 02044 VTT, Espoo, Finland

ABSTRACT

Video games are typically executed on Windows platforms with the DirectX API and require high-performance CPUs and graphics hardware. For pervasive gaming in various environments, such as at home, in hotels, or in internet cafés, it is beneficial to run games also on mobile devices and modest-performance CE devices, avoiding the necessity of placing a noisy workstation in the living room or costly computers/consoles in each room of a hotel. This paper presents a new cross-platform approach for distributed 3D gaming in wired/wireless local networks. We introduce the novel system architecture and protocols used to transfer the game graphics data across the network to end devices. Simultaneous execution of video games on a central server and a novel streaming approach for the 3D graphics output to multiple end devices enable access to games on low-cost set top boxes and handheld devices that natively lack the power to execute a game with high-quality graphical output.

INTRODUCTION

Computer games constitute one of the most dynamic and fastest-changing technological areas today, both in terms of market evolution and technology development. Market interest now revolves around capitalizing on the rapid increase of always-on broadband connectivity, which is becoming ubiquitous. Broadband connection drives a new, digital “Future Home” as part of a communications revolution that will affect every aspect of consumers’ lives, not least of which is the change it brings in terms of options for enjoying entertainment. Taking into account that movies and music provided by outside sources were at home long before the internet and broadband, the challenge is to invent new content consumption patterns for existing and new types of content and services [1]. At the same time, mobility and digital home entertainment appliances have generated the desire to play games not only in front of a home PC but everywhere inside the house and also on the go. As a result of TV digitalization, set top boxes (STBs) have entered homes and, as a new trend, mini-laptops are gaining popularity.

Several low-cost consumer electronics (CE) end devices are already available at home. Although these devices are capable of executing software, modern 3D computer games are too heavy for them. Running interactive, content-rich multimedia applications (such as video games) requires the high-performance hardware of a PC or a dedicated gaming device. Other devices, such as set top boxes (STBs) or handheld devices, lack the necessary hardware, and adding such capabilities would make their prices prohibitive [1]. A system which enables rendering of PC games on next-generation STB and personal digital assistant (PDA) devices, without a significant increase in their price, is a solution for future networked interactive media. This approach enables pervasive accessibility of interactive media from devices running on different platforms (architecture and operating system), thus allowing users to enjoy video games in various environments (home, hotel, internet café, elderly home) without being tied to a single device or operating system, for example, a Windows PC. This paper describes the Games@Large (G@L) pervasive entertainment architecture, which is built on the concept of distributed remote gaming [2], or Virtual Networked Gaming (VNG). It enables pervasive game access on devices (set top boxes and handheld devices) that typically do not possess a full set of technical requirements to run video games [1]. In general, the system executes games on a server PC, located at a central site or at home, captures the graphics commands, streams them to the end device, and renders the commands on the end device, allowing the full game experience. For end devices that do not offer hardware-accelerated graphics rendering, the game output is rendered locally at the server and streamed as video to the client. Since computer games are highly interactive, extremely low delay has to be achieved for both techniques.
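The two delivery paths just described (streaming the captured graphics commands to clients that can render locally, versus server-side rendering encoded as video for weaker clients) can be sketched as follows. Class and function names here are illustrative assumptions, not the Games@Large API.

```python
# Sketch of the per-client delivery decision in a distributed gaming server:
# capable clients get the intercepted 3D commands, others get encoded video.

def encode_video(framebuffer):
    # placeholder for a real low-delay video encoder
    return bytes(framebuffer)

class GameSession:
    def __init__(self, client_has_gpu_rendering):
        self.client_has_gpu_rendering = client_has_gpu_rendering

    def deliver_frame(self, graphics_commands, server_framebuffer):
        if self.client_has_gpu_rendering:
            # stream the captured graphics commands; the client renders them
            return ("graphics-stream", graphics_commands)
        # render on the server and ship the result as a video stream
        return ("video-stream", encode_video(server_framebuffer))
```

Either way, the game itself runs unmodified on the server; only the delivery path differs per client.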
This interactivity also requires the game controllers’ commands to be captured on the end device, streamed to the server, and injected into the game process [3]. The described functions are implemented by an application running on the client and a “return channel” constructed between the clients and the server. The application on the client is responsible for recording any input command arriving from any existing input device, while the return channel is used to send the commands from the clients to the server for execution. On the server side, the commands are injected into the proper game window.
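The return channel described above can be sketched as a simple serialize-forward-inject pipeline. The message format, field names, and routing by session are assumptions for illustration; the paper does not specify the wire protocol.

```python
# Sketch of the input return channel: the client encodes captured input
# events, the server decodes them and routes each to the proper game window.

import json

def encode_input_event(session_id, device, event):
    """Client side: serialize one input event for transmission."""
    payload = {"session": session_id, "device": device, "event": event}
    return json.dumps(payload).encode("utf-8")

def inject_input_event(message, game_windows):
    """Server side: decode the event and append it to the target game's
    input queue (standing in for injection into the game window)."""
    msg = json.loads(message.decode("utf-8"))
    game_windows[msg["session"]].append((msg["device"], msg["event"]))
    return msg["session"]
```

A real implementation would send these messages over a low-latency transport (e.g., UDP) rather than call the functions directly.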


In order to ground our research and system developments, we have performed a thorough analysis of the state of the art in gaming platforms available in today’s market, presented in Section 2. The rest of the paper is organized as follows: Section 3 describes the Games@Large framework; Section 4 its components and operation fundamentals; Section 5 presents experimental results from tests of the initial system and its components, demonstrating multiple-game execution on a PC and Quality of Service (QoS)-optimized transmission of the games’ graphics to the end devices via a wireless network; Section 6 presents the conclusions.

GAMING PLATFORMS ANALYSIS: STATE OF THE ART IN CONSOLES, PC AND SET TOP BOXES

We have conducted an overview of the state of the art in common gaming platforms such as consoles, PCs, and set top boxes. One recent development in gaming market activities which has implications for new consumption patterns is technology based on distributed cross-platform computing (or cloud computing); we introduce and overview this relatively new concept of Virtual Networked Gaming Platforms (VNGP) in Section 2.4.

Consoles

The home console system enables cheap hardware and guarantees product quality. Unlike in the past, console functionality is now continuously upgraded post-release (e.g., the web browser and Wii Channels on the Wii; high-definition video-on-demand downloading for the Xbox 360; and PlayStation Home for the PS3).

Xbox 360 (Microsoft)

Microsoft was the first to release its next-generation console, the Xbox 360, followed by the Xbox 360 Elite, designed to store and display high-definition video with a 120 GB hard drive. The Xbox 360 has perhaps the strongest list of titles of the three next-generation consoles, including Halo 3 in 2007, though the style and content of each console’s titles differ from the others, and personal preferences play a role in which catalogue, and therefore which platform, appeals most to a certain gamer/user. In online functionality, Microsoft is the most well established with its Xbox Live/Live Anywhere/Games for Windows-LIVE gaming services, Live Marketplace (used to distribute television and movies), and online Xbox Live Pipeline.


PlayStation 3/PS3 (Sony)

As the most powerful games console ever made, the PS3 is the most future-proof in terms of where games development can go, and it features a built-in Blu-ray player. Expert reviews of the console have improved since its initial reception, and commentators have remarked that the first PS3 games only use about 30–40% of the platform’s capacity, so the gaming experience it offers should improve as developers use more of that capacity. PS3 functionality includes streaming movies or PS3 games to the PSP over a LAN. In Europe, the PS3 is backwards compatible with most of the massive PS2 games catalogue (and with all of it in the US and Japan). Online functionality/support: the PS3 has a web browser based on NetFront; Home is a free community-based gaming service.

Wii (Nintendo)

Nintendo’s Wii features gesture-recognition controllers allowing intuitive control and more physical play, which must take much credit for the Wii’s successful appeal to many consumers who had not been gamers before. The Wii also has a large back-catalogue of GameCube titles, and developing games for it is cheaper and easier than for other platforms, suggesting a rapid proliferation of titles. Online functionality: Opera web browser software; a growing number of Wii Channels; the Message Board supports messaging with Wii users around the world via WiiConnect24, handles Wii email messaging, and logs all play history, facilitating parental supervision; some titles now support multiplayer online play.

PCs

The PC is an open system which can be exploited by virtually any game manufacturer. It is also the broadest of gaming platforms—catering to casual games and casual gamers but also through to the top end of digital gaming in specialised gaming PCs. The PC has by far the highest install base of all gaming platforms (discounting simple mobiles) with rising broadband connections and very well-developed online games services, such as Games for Windows-LIVE, PlayLinc, and many casual games sites (e.g., Verizon, DishGames, RealArcade, Buzztime). Though relatively expensive, it is bought and used for many things besides gaming but is often not equally accessible to all members of the household. This multifunctional nature of the PC is being eroded by nongaming functionality being added to consoles. Game Explorer is a new one-stop application within Vista designed to make
game installation far simpler and also allows parents to enforce parental controls.

Input Devices

The PC and the games based on it use the keyboard and mouse as input devices, which allows more complex games to be played but does not travel well into the living room, where the large-screen TV, 10-foot viewing experience, comfy chairs, and social gaming are enjoyed by console gamers. There are some existing living-room-friendly PC-game controllers, though they have not been widely taken up. Microsoft's wireless gamepad is compatible with both the PC and the Xbox 360. A gamepad is, however, not suited to playing some game genres associated with the PC (notably MMOGs and real-time strategy/RTS), but viable alternative control devices do exist which could allow PC games of all genres to successfully migrate to the TV (e.g., Microsoft's qwerty keyboard add-on for the Xbox 360/PC wireless gamepad, trackball controllers such as the BodieLobus Paradox gamepad, or the EZ Commander Trackball PC Remote). They could also help the development of new games and peripherals and support web features on TV (such as Intel/Yahoo's planned Widget Channel).

Set Top Boxes

The set top box is emerging as a platform for casual games, and some service providers are offering games-on-demand services (e.g., the long-established Sky Gamestar). The fast growth of digital terrestrial television (DTT) in Europe also suggests the STB install base will rise steadily, potentially greatly increasing its role. With a potentially large mainstream audience, support from advertising revenues could be significant for STB gaming. Wi-Fi-enabled set top boxes (e.g., Archos TV+) are starting to emerge which combine a Wi-Fi media player with a high-capacity personal video recorder (PVR) for enjoying movies, music, photos, podcasts, web video, the full internet and more on a widescreen TV. Several companies are committed to enabling gaming services for the STB platform, including Zodiac Interactive, PixelPlay, Buzztime, TV Head, and PlayJam in the US, and Visiware, G-Cluster, and Visionik (part of NDS) in Europe. These companies provide their content and technology solutions to a few TV service providers currently deploying gaming services, including BSkyB, Orange, EchoStar, and Cablevision. Several Telco TV and DBS TV service providers in the US are actively exploring 3D STB gaming and
their demos make many of today's cable STB games look antiquated (see Section 2.4 for details of NDS Xtreamplay technology).

User uptake of gaming platforms and choice of console depend on a range of factors, including cost, content, and functionality. PC games are effectively tied to the desktop/laptop, and console gaming is seen by many as expensive or for dedicated gamers only. The Wii has broadened the console user base, but there remains a massive potential for mainstream gaming on TV given the right technology solution, content/services offerings and pricing. The open PC platform is supported by much programming expertise and is powerful and ubiquitous, but PC games need to make the transition to the more comfortable and social TV-spaces with a wide range of low-cost, accessible, digitally distributed games-on-demand.

State of the Art in Virtual Networked Media Platforms

Of great relevance to Games@Large are developments in technology aimed at putting PC gaming onto TV screens. Service providers and web-based services are moving into the PC-gaming value chain, and several commercial solutions for streaming games over the network exist already. These allow game play on smaller end devices like low-cost PCs or set top boxes without requiring the games to be installed locally. Most of these systems are based on video streaming: a server executes the game, and the graphical output is captured and then transmitted as video to the client. For an interactive experience, such a system requires low end-to-end delay, high compression efficiency, and low encoding complexity. Therefore, many solutions have adapted standard video streaming and optimized it for the particular graphical content. For example, t5 labs announced a solution for instant gaming on set top boxes via centralized PC-based servers, which are hosted by cable TV or IPTV operators. In order to reduce the encoding complexity at the server, which has to execute both the game and the video encoder, particular motion prediction is conducted exploiting information about the graphical content. Reductions of 50–80% in encoding complexity are reported. In contrast, StreamMyGame from Tenomichi Limited offers a server solution which enables the user to stream his/her own PC games to another PC in the home, record the game play or broadcast the games for spectators. The streaming is based on MPEG-4, and typical bit rates of 4 Mbit/s at XGA resolution are reported. Besides PCs, multiple different end devices are supported, such as the PlayStation 3, set top boxes and networked media devices. Similar to the other two approaches, G-Cluster's server client system also offers MPEG-
based compression of game content and its streaming to end devices for remote gaming applications. Currently, this system has been employed by operators mainly for casual games. A system that offers high-definition (HD) resolution is the Xtreamplay technology of NDS. It enables the high-resolution streaming of computer games to set top boxes, but adaptations of the game code to the Xtreamplay framework are required. High-resolution streaming of game content is also provided by the Californian company Dyyno. However, their application is not interactive gaming over networks but the distribution of game output to remote displays.

Another somewhat different approach is AWOMO from Virgin Games. In contrast to the other approaches, they do not stream the game output but the game code. The game is downloaded and installed locally on a PC for execution. However, the technology offers a progressive game download, such that the user can start playing the game after only a small part of the data has been received. The remaining data is continuously fetched during game play. A similar approach is also used by the InstantAction system from GarageGames. Users can play 3D games in their web browser. InstantAction uses a small plug-in and an initial download of the game, which are required to allow play.

Another streaming solution is offered by Orb (http://www.orbnetworks.com). Downloading Orb's free remote-access software, MyCasting 2.0, onto a PC (Windows only) transforms it into a 'broadcast device', the content of which can then be accessed from any web-enabled device (PC, mobile phone, etc.) with a streaming media player. MyCasting 2.0 now works with gaming consoles, enabling Xbox 360/Wii/PS3 owners to stream PC content onto the TV. Orb's software has enabled 17 million households (according to ABI Research) to bridge the PC-to-TV divide, at no cost, using what is essentially existing technology. However, streaming of video games is not supported.
Advances in wireless home entertainment networks and connectivity, which stream content between devices within the home, also present potentially important solutions for playing PC games on TV screens. One example is Airgo Networks' faster-than-wired True MIMO Media technology, aimed at wireless distribution of multimedia content in the home. Besides commercial solutions, research systems have also been presented. In [4], a thin-client system has been presented that uses high-performance H.264 video encoding for streaming the graphical content of an application to a weaker end device. In this work, the buffering scheme at the client has been optimized in order to achieve the minimal delay necessary for interactive applications, but encoding is based on standard video encoding. In contrast, [5] exploits information from the graphics scene in order to directly compute the motion vectors used in the MPEG-4 encoding process. The work in [6] also uses MPEG-4 as codec but goes one step further by using more information from the graphics state. For example, different quantizer settings are used dependent on the z-buffer content. Thus, objects that are further away in the scene are encoded with lower quality than foreground objects closer to the camera. Both approaches, however, require an application that passes the necessary graphics information to the codec and do not work with existing game programs. If encoding complexity is to be reduced even further, simple encoding techniques can be used. In [7], a non-standard-compliant codec is presented that allows the streaming of graphical content with very low encoding complexity. The compression efficiency is, however, also much lower than when using highly sophisticated codecs like H.264. The reviewed systems offer a variety of possibilities, though all of them have limitations for interactive media such as video games, and especially for existing game titles.
For some of these formats games would need to be specially made or expensive hardware purchased; other formats provide only moderate visual quality. Games@Large, in contrast, aims to run all or most standard PC games, including newly developed ones, with high visual quality (in Sections 4 and 5 we will present some criteria for titles to be supported by Games@Large). Games@Large benefits wider stakeholders too (service providers, games developers/publishers, CE manufacturers, and advertisers), enabling business models which ensure that consumers can access a wide range of products and services at low cost.

GAMES@LARGE FRAMEWORK

The Games@Large framework depicted in Figure 1 enables interactive media streaming from a PC-based machine to other CE, computer and mobile
devices in homes and enterprise environments such as hotels, internet cafés and homes for the elderly.

Figure 1: Games@Large framework.

The key components of the system are briefly introduced below and described in detail in Section 4.

Server Side

The Local Storage Server (LSS) is responsible for the storage of games. The Local Processing Server (LPS), a Windows-based PC, runs games from the LSS and streams their output to the clients. It is responsible for launching the game process after client-side invocation, managing its performance, allocating computing resources, file system and I/O activities, and capturing the game's graphic commands or already rendered frame buffer for video encoding, as well as managing the execution of multiple games. The LPS is further responsible for receiving the game controller commands from the end device and injecting them into the game process. The LPS is also responsible for streaming game audio to the client.

Graphics Streaming Protocol Stack

The Graphics Streaming Protocol is intended to become a standard protocol for streaming 3D commands to an end device, allowing lower-performance devices such as STBs to present high-performance 3D applications such as games without the need to actually execute the games on the device. The video streaming scenario is intended for devices lacking hardware-accelerated rendering capabilities. H.264 [8] is exploited for low-delay video encoding, and HE-AACv2 [9] is used for audio streaming. In both cases, synchronisation and transmission are realised via UDP-based RTP/RTCP in a standard-compliant way.

Client Side

The client devices are the Notebook (NB); the Enhanced Multimedia Extender (EME), which is a WinCE or Linux set top box; and the Enhanced Handheld Device (EHD), a Linux-based handheld. The client module is responsible for receiving the 3D commands and rendering them on the end device using local rendering capabilities (OpenGL or DirectX). For the video streaming approach, H.264 decoding must be supported instead. The client is also responsible for capturing the controller (e.g., keyboard or gamepad) commands and transmitting them to the processing server [3].

GAMES@LARGE FRAMEWORK COMPONENTS

3D Graphics Streaming

Today, interfaces between operating-system-level libraries, such as DirectX and OpenGL, and the underlying 3D graphics cards occur at the operating system driver and kernel level and are transmitted over the computer bus. Simultaneous rendering of multiple games and encoding of their output can overload even a high-performance server. To avoid this, DirectX and/or OpenGL graphics commands have to be captured at the server (LPS/PC) and streamed to the client (e.g., an STB or a laptop) for remote rendering. This is similar to the 2D streaming of an X server in UNIX-based systems. Extensions for streaming 3D graphics also exist, for example, the OpenGL Stream Codec (GLS) that allows the local rendering of OpenGL commands.

These systems usually work in an error-free TCP/IP scenario, with best-effort transmission and without any delay constraints. The 3D streaming and remote rendering developed for Games@Large are achieved by the multiple encoding and transmission layers shown in Figure 2. The first of these is the interception and the very last one is the rendering on the client machine. All layers in between these two are independent of any particular graphics API: the protocol does not depend on the specifics of a certain API such as DirectX or OpenGL, but rather utilises higher-level concepts common to all 3D graphics.

Figure 2: 3D Streaming—detailed block diagram.

Since the pipeline has to support different APIs, from DirectX to OpenGL, and in order to abstract these APIs, a set of common generic concepts may be of assistance. In general, a 3D scene consists of multiple objects that are rendered separately. Before rendering an object, several parameters (states) must be set, and these include lighting, textures, materials, the set of 3D vertices that make up a scene, and various standard 3D transforms (e.g., translate, scale, rotate).

Figure 2 depicts the detailed block diagram of the components involved in the 3D streaming. First, the 3D commands issued by the game executable to the graphics layer API used by the selected game (e.g., DirectX v9) need to be captured. The same technique used for capturing DirectX v9 can also be used for capturing other versions of DirectX (and also the 2D version of DirectX, DirectDraw). This is implemented by providing the game running on the LPS with a pseudo-rendering environment that intercepts the DirectX calls. The proxy Dynamic Link Library (DLL) is loaded by the game on its start-up and runs in the game context. This library forms the server part of the pipeline, which passes the 3D commands from the game executable to the client's rendering module. In our implementation, we have implemented delegate objects for each of the 3D objects created by the game. Each such delegate object uses the 3D streaming pipeline for processing the command and its arguments. For many commands, a delegate object can answer the game executable immediately without interaction with the client; this is done in many cases in order to avoid synchronized commands. For example, when the game needs to update a texture (which resides on the graphics card), it locks the texture, changes the buffer, and then unlocks the texture. Originally, those commands must be synchronized. In our implementation, however, the delegate object for the texture does not interact with the client when the game tries to lock the texture on the graphics card, but postpones the interaction until the unlock call. When the game issues an unlock call, the delegate object checks which parts of the texture were changed and sends a single command to the client with the changes. The client implementation, which is aware of this logic, performs the corresponding operations on its side: it locks the texture, applies the changes, and unlocks it. This is one example of command virtualization, which avoids synchronous commands and reduces the number of commands; typically such a set of commands is called hundreds of times per frame.
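The lock/unlock batching described above can be sketched in a few lines. This is an illustrative model of the idea, not the Games@Large source: the class name, the command tuple format, and the byte-range diff are our assumptions.

```python
# Hypothetical sketch of a "delegate" object that virtualizes texture
# Lock/Unlock calls: it answers the game immediately on lock, records the
# writes, and ships one combined update command on unlock.

class TextureDelegate:
    def __init__(self, texture_id, size, send):
        self.texture_id = texture_id
        self.pixels = bytearray(size)   # server-side shadow copy of the texture
        self.send = send                # callable that queues a command to the client
        self.locked = False
        self.dirty = set()              # byte offsets changed while locked

    def lock(self):
        # Answer the game immediately: no round trip to the client.
        self.locked = True
        return self.pixels

    def write(self, offset, data):
        assert self.locked
        self.pixels[offset:offset + len(data)] = data
        self.dirty.update(range(offset, offset + len(data)))

    def unlock(self):
        # One command carries only the changed byte range.
        self.locked = False
        if self.dirty:
            lo, hi = min(self.dirty), max(self.dirty) + 1
            self.send(("update_texture", self.texture_id, lo, bytes(self.pixels[lo:hi])))
            self.dirty.clear()
```

The point of the design is that the game's synchronous lock/modify/unlock sequence collapses into a single asynchronous client command carrying only the modified region.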
The Serialization Layer serializes the various structures describing the 3D commands and their arguments into buffers. It accumulates data in buffers until a certain criterion is met (theoretically it can pass the buffer on at any point, for example, when a frame ends or a synchronous command must be executed). The optional Compression Layer uses a third-party compression library (e.g., zlib or LZO compression) to compress the 3D stream before sending it to the network. The Network Layer is responsible for maintaining the connection with the client and for sending the buffers. After each sent buffer, an
ACK (acknowledgement) is sent back by the client. The purpose of this ACK is to ensure that data does not accumulate in the network buffers. The nature of the data requires that no buffer is lost in transmission (which, in the current implementation, implies the use of TCP). The possibility of using or developing a transport protocol (e.g., UDP-based) which could replace TCP is being investigated. On Microsoft Windows clients the renderer uses DirectX to render the commands, while on Linux clients the renderer uses OpenGL commands. There is a certain overhead in OpenGL rendering because some data (especially colour and vertex data) must be reorganised or rearranged in the processing stack before it can be given to OpenGL for rendering. This may result in increased demand for Central Processing Unit (CPU) processing and memory transfer between system memory and the Graphics Processing Unit (GPU) [3]. Although the graphics streaming approach is the preferred solution, since it offers lower latency and enables execution of multiple games on one server, it cannot be used for some small handheld devices like PDAs or smart phones. These end devices typically lack the hardware capability for accelerated rendering and cannot create the images locally for display. Therefore, the alternative solution using video streaming techniques is described in the next section.
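The serialization, compression, and ACK-paced network layers can be sketched as follows. This is a minimal model under our own assumptions (the wire format, the zlib choice, and the one-buffer-in-flight rule are illustrative, not taken from the Games@Large implementation):

```python
import struct
import zlib

# Serialization Layer: pack (opcode, payload) commands into one buffer.
def serialize(commands):
    out = bytearray()
    for opcode, payload in commands:
        out += struct.pack("!HI", opcode, len(payload)) + payload
    return bytes(out)

# Client-side counterpart used for decoding a received buffer.
def deserialize(buf):
    cmds, i = [], 0
    while i < len(buf):
        opcode, n = struct.unpack_from("!HI", buf, i)
        i += 6
        cmds.append((opcode, buf[i:i + n]))
        i += n
    return cmds

# Network Layer: compress each buffer and allow only one unacknowledged
# buffer in flight, mirroring the per-buffer ACK described in the text.
class NetworkLayer:
    def __init__(self, wire_send):
        self.wire_send = wire_send      # callable that puts bytes on the wire
        self.awaiting_ack = False

    def send_buffer(self, buf):
        assert not self.awaiting_ack, "previous buffer not yet acknowledged"
        self.wire_send(zlib.compress(buf))   # Compression Layer
        self.awaiting_ack = True

    def on_ack(self):
        self.awaiting_ack = False
```

Blocking further sends until the ACK arrives is one simple way to keep data from piling up in the network buffers, at the cost of one round trip per buffer.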

Video Encoding

The alternative approach to 3D graphics streaming in the Games@Large framework is video streaming. It is used mainly for end devices without a GPU, like handheld devices, which typically have screens of lower resolution. Here the graphical output is rendered on the game server, and the framebuffer is captured and transmitted encoded as a video stream. For video encoding, the H.264 video coding standard is used [8], which is the current state of the art in this field and provides the best compression efficiency. In comparison to previous video coding standards, however, its computational complexity is significantly higher. By selecting appropriate encoding modes, the encoding complexity for the synthetic frames can nevertheless be significantly reduced while preserving high image quality. In order to keep the effort of integrating new client end devices into the Games@Large framework moderate, the video streaming subsystem has been developed in a fully standard-compliant way. Nevertheless, the server-side encoding and streaming are adapted to the characteristics of the present end device. This means that end device properties such as display
resolution and supported decoding features are taken into account when choosing the encoding settings at the server (e.g., optional H.264 encoding with CABAC [10], which typically improves the compression efficiency but also increases the computational load of decoding at the client). Similarly, the proportion of IDR frames in the resulting video stream, which are used to resolve the dependence on previous frames, can be set under consideration of the network properties. The delay between image generation on the server side and presentation on the client side is crucial and has to be as small as possible in order to achieve interactive gaming. To reduce this delay, an H.264 decoder for end devices has been developed which is implemented with a minimum of buffering. As soon as a video frame has been received, it is decoded and displayed. This is quite different to TV streaming, where large buffers are used to remove the effects of network jitter. H.264 video encoding is computationally quite demanding. In Figure 3 the encoding times for a game scene are depicted for streams encoded with different quantizer settings, which result in different qualities. It is clearly visible that the encoding time increases with image quality. Since the video encoding is executed in parallel to the actual game, both compete for processor time. Aside from that, the desire to execute and stream several games simultaneously from a single game server increases the need to reduce the computational complexity of the video streaming system.

Figure 3: Comparison of encoding times for different quantizer settings.
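The device- and network-dependent choice of encoder settings described above can be sketched as a simple policy function. The CPU classes, thresholds, and interval values below are illustrative assumptions of ours, not numbers from the Games@Large implementation:

```python
# Hypothetical policy for adapting H.264 encoder settings to the end device
# and network: CABAC is enabled only for clients that can afford the extra
# decoding cost, and the IDR-frame interval shrinks as packet loss grows so
# that dependencies on lost frames are resolved sooner.

def choose_encoder_settings(client_cpu_class, packet_loss_rate):
    """client_cpu_class: 'low' | 'mid' | 'high'; packet_loss_rate: 0.0-1.0."""
    cabac = client_cpu_class == "high"    # better compression, costlier decode
    if packet_loss_rate > 0.05:
        idr_interval = 15                 # frequent recovery points
    elif packet_loss_rate > 0.01:
        idr_interval = 60
    else:
        idr_interval = 250                # mostly predicted frames on clean links
    return {"cabac": cabac, "idr_interval": idr_interval}
```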

One method for reducing the complexity at the server is the removal of the scaling of the game's output to the required resolution of the client device. For that purpose, the render commands of the game are intercepted and modified, so that the game output is rendered directly at the resolution required by the client device. Besides the reduction in complexity, an advantage of this technique is that the quality of the achieved images is much better, because the images are already rendered at the desired resolution without any scaling artefacts. An example is depicted in Figure 4.

Figure 4: Rendering in resolution adapted to particular end device.

Current research is focused on reducing the computational complexity of the H.264 encoder itself by incorporating enhancements based on the available rendering context information. The main idea is adapted from [11]. The motion prediction in video encoding, which is realized in common encoders as a computationally very demanding trial-and-error search, can be calculated directly by using the current z-buffer as well as the projection parameters available in the game's rendering context of OpenGL/DirectX. The encoding complexity can be reduced further by predicting the macroblock partitioning on the basis of discontinuities in the z-buffer. This, too, is usually realized in common encoders as a computationally demanding trial-and-error search. The key difference to [11] is that there the authors assume full access to the rendering application's source code. In the Games@Large framework we have to deal with the executables of commercial games, which use quite sophisticated rendering techniques. The challenge here is to capture the appropriate information from the rendering context in order to correctly perform the motion prediction. In order to transmit the encoded video stream in real time, RTP packetization (Real-time Transport Protocol [12]) is utilized. The structure of the RTP payload for H.264 is specified in [13]. Further details about real-time streaming and synchronization are discussed in Section 4.4.
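The z-buffer-based prediction of macroblock partitioning can be illustrated with a small sketch. The 16x16 block size matches H.264 macroblocks, while the gradient test, the threshold, and the binary 16x16-vs-8x8 decision are simplifying assumptions of ours:

```python
# Hypothetical sketch: predict macroblock partitioning from z-buffer
# discontinuities instead of a trial-and-error search. A 16x16 depth block
# containing a strong depth edge (likely an object boundary with divergent
# motion) is split into 8x8 partitions; a smooth block stays whole.

def predict_partition(depth_block, threshold=0.1):
    """depth_block: 16x16 list of normalized z-buffer values in [0, 1]."""
    max_jump = 0.0
    for y in range(16):
        for x in range(16):
            if x + 1 < 16:   # horizontal neighbour
                max_jump = max(max_jump, abs(depth_block[y][x + 1] - depth_block[y][x]))
            if y + 1 < 16:   # vertical neighbour
                max_jump = max(max_jump, abs(depth_block[y + 1][x] - depth_block[y][x]))
    return "8x8" if max_jump > threshold else "16x16"
```

The saving comes from skipping the encoder's exhaustive partition search wherever the depth map already reveals whether a block straddles an object boundary.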

Audio Encoding

Besides their visual appearance, computer games also produce sound. In order to deliver this audio data to the client in an efficient manner, an audio streaming subsystem has been developed. Since computer games typically produce their audio samples in a block-oriented manner, the current state-of-the-art audio encoder in this field has been integrated: High Efficiency Advanced Audio Coding version 2 (HE AAC-v2) [9]. Our HE AAC-v2 implementation is configurable so that it can encode mono or stereo, 8 or 16 bits per sample, and several sample rates, for example, 22.05, 44.1, or 48 kHz. In order to stream the encoded audio data in real time, RTP packetization (Real-time Transport Protocol [12]) is utilized. The structure of the HE AAC-v2 payload for RTP is specified in [14]. Further details about real-time streaming and synchronization are discussed in Section 4.4.
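The RTP packetization used for both the video and audio streams boils down to prefixing each payload with the 12-byte fixed RTP header defined in RFC 3550. The payload type, sequence number, and SSRC values in the example are arbitrary:

```python
import struct

# Build a minimal RTP packet: 12-byte fixed header (RFC 3550) + payload.
# No CSRC list or header extension is used here.

def rtp_packet(payload_type, seq, timestamp, ssrc, payload, marker=False):
    vpxcc = 2 << 6                          # version=2, padding/extension/CSRC = 0
    mpt = (int(marker) << 7) | (payload_type & 0x7F)
    header = struct.pack("!BBHII",
                         vpxcc, mpt,
                         seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF,
                         ssrc & 0xFFFFFFFF)
    return header + payload
```

Each media channel keeps its own sequence-number and timestamp counters; the timestamp advances in units of the media clock (e.g., 90 kHz for video).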

Synchronized Real Time Streaming

Since the performance of the system is highly dependent on the delay between content generation on the server side and its playback on the client, the video streaming as well as the audio streaming are based on UDP-based RTP (Real-time Transport Protocol [12]). Every RTP network packet contains a time stamp as well as a well-defined structure of payload data. In order to prevent timing errors between the video and audio channels and to overcome different kinds of network jitter, the RTP channels are explicitly synchronized. For this purpose the RTCP (RTP Control Protocol [12]) has been integrated. The content-generating server periodically sends a so-called Sender Report RTCP packet (SR) for each RTP channel. This SR contains a mapping from the timestamps used in the associated RTP channel to the global NTP time (Network Time Protocol [15]). With this synchronization of each RTP channel to NTP time, all the RTP channels are implicitly synchronized with each other.
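The use of the Sender Report mapping can be made concrete with a short calculation. Each SR ties one RTP timestamp to a wall-clock (NTP) time, so any later RTP timestamp on that channel can be converted to NTP time and compared across channels. The function name and the example clock values are ours:

```python
# Convert an RTP timestamp to NTP time using the most recent Sender Report
# for that channel: SR gives one (rtp_ts, ntp_time) anchor point, and the
# channel's clock rate converts timestamp ticks to seconds.

def rtp_to_ntp(rtp_ts, sr_rtp_ts, sr_ntp_seconds, clock_rate):
    return sr_ntp_seconds + (rtp_ts - sr_rtp_ts) / clock_rate

# Example: video at 90 kHz and audio at 44.1 kHz, each with its own SR.
video_ntp = rtp_to_ntp(rtp_ts=900000 + 45000, sr_rtp_ts=900000,
                       sr_ntp_seconds=1000.0, clock_rate=90000)
audio_ntp = rtp_to_ntp(rtp_ts=441000 + 22050, sr_rtp_ts=441000,
                       sr_ntp_seconds=1000.0, clock_rate=44100)
# Both samples map to the same NTP time, so they should be played together.
```

Because both conversions land on the same NTP instant (1000.5 s here), the client can align audio and video playback without the two channels ever sharing a timestamp format.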

Client Feedback to the Game Server

The return channel on the server side is responsible for receiving the commands from each connected client and injecting them into the appropriate game, that is, the one that the user is playing. The return channel is constructed from two communicating modules: the server-side module and the client-side module.

Server Side

The server-side module that implements the return channel is part of the core of the system, more specifically of the Local Processing Server. The return channel on the server side is responsible for receiving commands from each connected client and transforming them into a form that is readable by the OS (Windows XP/Vista) and, more specifically, by the running instance of the game. The method utilizes a proxy of the DirectInput dynamic library and injects the commands directly into the DirectInput functions used by each game. A crucial part of the server- and client-side return channel is the socket communication. The HawkNL [16] library is used for the communication between the server and the clients. This assures that the implementation of the socket is based on a system that is tested by a large community of users and
that no major bugs exist in that part of the code. For faster communication between client and server we disable the Nagle algorithm [17] of the TCP/IP communication protocol. Having done so, the delivery times of the packets are almost instantaneous, as we omit any buffering delays.
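Disabling the Nagle algorithm is a one-line socket option. The project itself uses HawkNL in C, so the following Python snippet is only an illustration of the same setting, not the actual return-channel code:

```python
import socket

# Create a TCP socket with the Nagle algorithm disabled (TCP_NODELAY):
# small input-event packets are sent immediately instead of being coalesced
# into larger segments, which minimizes input latency.

def make_low_latency_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```

The trade-off is more, smaller packets on the wire, which is acceptable here because input events are tiny and latency-critical.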

Keyboard

The server side of the return channel receives the keyboard commands that originate from the client's dedicated socket connection. The communication between the server and the client follows a specific protocol in order to (a) be successfully recognized by the server and (b) prevent the loss of keyboard input commands. An important aspect of the return channel infrastructure is the encryption of keyboard commands, which is described in the following section. For the case of a game that uses a DirectInput keyboard, we implement a proxy of the DirectInput library, replacing the original one with our own and modifying only the function that is used for passing data to the virtual DirectInput keyboard device that is created when the game launches in order to read data from the original keyboard.

Encryption

The encryption procedure is needed only for the keyboard commands that the client transmits, since sensitive user data, such as credit card numbers or passwords, are only entered using the keyboard. RSA encryption was selected as it fulfils the demands of our specific environment.

Start-Up Phase

When both the client and the server start, some local initializations take place. The client then launches a connection request to the server, which is advertised to the network neighbourhood through the UPnP module. The server accepts the new client, generating a unique RSA public-private key pair.

Transfer of Encrypted Keyboard Input

The idea underlying the communication command channel architecture is depicted in Figure 5.

Figure 5: Encrypted command channel.

Each end device offers many possible input devices for interacting with the server. When the client program starts, it initiates the device discovery procedure, which may be offered by a separate architectural module, for example, the device discovery module, which uses UPnP. The next step of the procedure is to capture the input coming from the controllers. This is achieved by recording the key codes coming from the input devices. Mice and keyboards are interrupt-driven, while joysticks and joypads are read by polling. If the command to be transferred originates from a keyboard device, the client uses the server's public key to encrypt the data after it has been suitably formatted according to a certain communication protocol. The encrypted message is transmitted to the server using the already existing socket connection. Once the encrypted message has arrived at the server side, the server decrypts it using its private key, obtaining the original keyboard commands that the client captured. If the received message is not from a keyboard, the server bypasses the decryption stage, delivering the commands to the running game instance. The algorithmic procedure of this step is described in the following sections.
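The encrypt-with-public-key, decrypt-with-private-key flow above can be made concrete with a toy RSA round trip. This uses textbook RSA with fixed tiny primes purely to show the message flow; it is emphatically NOT the scheme a real deployment would use (real RSA uses padded messages and keys of at least 2048 bits):

```python
# Toy RSA illustration of the keyboard-command handshake. Every number here
# is deliberately tiny and insecure; only the flow (server makes a key pair,
# client encrypts a key code with the public key, server decrypts with the
# private key) mirrors the text.

def make_toy_keypair():
    p, q = 61, 53                      # toy primes
    n, phi = p * q, (p - 1) * (q - 1)  # n = 3233
    e = 17                             # public exponent, coprime with phi
    d = pow(e, -1, phi)                # private exponent (modular inverse)
    return (e, n), (d, n)              # (public key, private key)

def encrypt_key_code(key_code, public_key):
    e, n = public_key
    return pow(key_code, e, n)         # key_code must be < n

def decrypt_key_code(cipher, private_key):
    d, n = private_key
    return pow(cipher, d, n)
```

In the real protocol the server generates the key pair at connection time and only the public half ever crosses the network, so a listener on the wire cannot recover the typed keys.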

Mouse

The server side of the return channel receives the mouse commands that originate from the client using the already open socket connection. The communication between the server and the client follows a specific protocol in order to be successfully recognized by the server and is exactly the same as for the keyboard, apart from the encryption part and the resolution handling that follows. An issue that arises when using the mouse input device is how the commands are executed correctly if the client has a different resolution to the server, because what is sent from the client to the server is the absolute mouse position. We realized that when a game is running on the client, the rightmost bottom position of the mouse equals the resolution of the game when running in 3D streaming, and equals the screen resolution when running in video streaming. On the server side, we observed that the matching of the resolutions should be done not with the resolution of the screen but again with the resolution of the game running on the server, because every command is injected into the game window. The mouse positions therefore have to be normalized on the client and the server side.
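The normalization described above amounts to rescaling the absolute position from the client's game resolution to the server's game resolution. A minimal sketch (function name and rounding choice are ours):

```python
# Map an absolute mouse position reported by the client to the coordinate
# space of the game window running on the server: normalize to [0, 1] using
# the client-side game resolution, then scale by the server-side one.

def map_mouse_position(client_pos, client_res, server_res):
    cx, cy = client_pos
    cw, ch = client_res
    sw, sh = server_res
    return round(cx / cw * sw), round(cy / ch * sh)
```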

Joypad/Other

The server side of the return channel receives mouse and keyboard commands that originate from the client's joypad/other input via the already open socket connection. This means that any joypad/other input is first translated into suitable keyboard and mouse commands on the client side (using XML mapping files) and is then transmitted to the server for execution at the game instance. The execution of these commands falls into the previously described cases.
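The XML mapping files mentioned above might look something like the following; the element and attribute names are invented for illustration, since the actual Games@Large file format is not shown in the text:

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping file translating joypad buttons to keyboard keys on
# the client side before transmission to the server.
MAPPING_XML = """
<mapping game="example">
  <button id="A" key="SPACE"/>
  <button id="B" key="LCTRL"/>
  <axis id="X" positive="RIGHT" negative="LEFT"/>
</mapping>
"""

def load_button_map(xml_text):
    root = ET.fromstring(xml_text)
    return {b.get("id"): b.get("key") for b in root.findall("button")}

def translate(button_id, button_map):
    return button_map.get(button_id)   # None if the button is unmapped
```

Keeping the translation on the client means the server only ever sees the keyboard/mouse command vocabulary it already handles.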

Quality of Service Optimized Transmission

The Games@Large gaming architecture is based on streaming a game's 3D or video output to the client running on a separate device. This kind of distributed operation places high requirements on the network in terms of bit rate and latency. A game stream of sufficient quality is expected to require a bit rate of several megabits per second, and latencies have to be minimized to maximize the gaming quality. The same network which is used for gaming is also assumed to be available to other applications such as web surfing or file downloading. If the network did not have any kind of
QoS support, these competing applications would have a negative effect on the gaming experience. Thus, the network has to implement QoS to satisfy the requirements of gaming regardless of other applications using the same network. As presented in Figure 1, the network connection to the game client can be wireless. This is a further challenge for providing QoS for the gaming application. Our platform is based on IEEE 802.11 standard family [18] wireless LAN (WLAN) technologies. Currently, the most used WLAN technology is IEEE 802.11g, which could provide the bandwidth needed for four simultaneous game sessions in good conditions. The near-future IEEE 802.11n will enhance the maximum bit rate, but still shares the same basic medium access (MAC) method, which does not support QoS. Priority-based QoS can be supported in IEEE WLANs with the Wi-Fi Multimedia (WMM) [19] specification, a subset of IEEE 802.11e [20]. In WMM, traffic is classified into four access categories which receive different priority for channel access in competition situations. In this way applications with high QoS requirements can be supported with better service than others with less strict requirements. Our platform is based on IEEE 802.11 (either g or n) and WMM. As presented later in the results section, WMM can be used to enhance the gaming experience substantially compared to the case of the basic WLAN MAC. In addition to MAC-layer QoS support, there is a need for QoS management solutions in a complete QoS solution. Our platform relies on the UPnP QoS [21] specification for QoS management and network resource allocation. In practice, it acts as a middleware between the applications and the network devices performing the QoS provisioning. The experimental results presented later in this paper prove that our standards-based solution enhances the game experience and gives superior performance compared to a reference system without QoS support.
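The four WMM access categories can be illustrated by a simple traffic classifier. The assignment of Games@Large traffic types to categories below is our assumption of a sensible policy, not a mapping quoted from the standard or the project:

```python
# Illustrative classification of home-network traffic into the four WMM
# access categories: voice (AC_VO), video (AC_VI), best effort (AC_BE),
# and background (AC_BK). Higher number = higher channel-access priority.

WMM_AC = {"AC_VO": 3, "AC_VI": 2, "AC_BE": 1, "AC_BK": 0}

def classify_traffic(kind):
    table = {
        "game_input": "AC_VO",      # tiny packets, extremely latency-sensitive
        "game_video": "AC_VI",      # the streamed game A/V output
        "game_audio": "AC_VI",
        "web": "AC_BE",             # competing best-effort traffic
        "download": "AC_BK",        # bulk background transfers
    }
    return table.get(kind, "AC_BE")
```

Under contention, the prioritized game stream wins channel access over bulk downloads, which is exactly the effect the WMM measurements in the results section quantify.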

UPnP Device Discovery

To ensure easy system setup and operation, as well as flexibility in dynamic home networks, the various system components need to find each other automatically and be able to exchange information about their capabilities.

Platform for Distributed 3D Gaming

185

In the Games@Large system, we have selected the UPnP Forum's technology. UPnP (Universal Plug and Play) defines an architecture for pervasive peer-to-peer network connectivity of intelligent appliances, wireless devices, and PCs of all form factors. The technologies leveraged in the UPnP architecture include common internet protocols such as IP, TCP, UDP, HTTP, and XML [23].

The required functionality of device discovery is to allow a Games@Large end device to find the G@L servers, and the servers to find each other, so that all components can be integrated into the larger Games@Large network. For example, in a large system a Local Management Server (LMS) must be able to discover the Local Storage Servers (LSS). In the home version, the logical servers are usually located in a single PC, but in an enterprise version, such as a hotel environment, there might be several physical server machines. Device discovery also covers the services provided by the found devices. In the discovery phase the devices exchange capability information; for example, an end device informs the server of its capabilities, such as screen resolution and connected input devices. Servers can also advertise their capabilities to other servers and end devices.
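UPnP discovery is carried out with SSDP over UDP multicast. The sketch below (hypothetical class name) builds the standard M-SEARCH request a control point sends to locate devices; the generic search target `ssdp:all` is used here because the actual Games@Large device URNs are not given in the text:

```java
public class SsdpDiscovery {
    // Standard SSDP multicast endpoint defined by the UPnP architecture.
    static final String SSDP_HOST = "239.255.255.250";
    static final int SSDP_PORT = 1900;

    // Build an M-SEARCH request for a given search target (ST) and
    // maximum response delay (MX, in seconds).
    public static String buildMSearch(String searchTarget, int mxSeconds) {
        return "M-SEARCH * HTTP/1.1\r\n"
             + "HOST: " + SSDP_HOST + ":" + SSDP_PORT + "\r\n"
             + "MAN: \"ssdp:discover\"\r\n"
             + "MX: " + mxSeconds + "\r\n"
             + "ST: " + searchTarget + "\r\n"
             + "\r\n";
    }

    public static void main(String[] args) {
        // In a real control point this message would be sent over UDP
        // multicast and the unicast responses parsed for device locations.
        System.out.print(buildMSearch("ssdp:all", 2));
    }
}
```

Devices answer with unicast HTTP responses carrying a LOCATION header, from which the device and service descriptions (and thus the capability information mentioned above) are fetched.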

System Integration

The local servers of Games@Large consist of three separate servers: the LPS (Local Processing Server), the LMS (Local Management Server), and the LSS (Local Storage Server). In the HOME version (intended for use in home environments), the main server of the system is the Local Processing Server, which at this stage virtually integrates the core functionality of all three: LPS, LMS, and LSS.

Local Processing Server

The “virtual” Local Processing Server is the core of the Games@Large System HOME version. It handles all communication with the clients while being responsible for all internal communication in parallel. Figure 6 presents the general server architecture.


Figure 6: General server architecture.

At this stage of the implementation, everything is handled within the server application. The LPS incorporates the implementations of the 3D and Video Streaming, Return Channel, and Quality of Service modules. In parallel, it runs a Web Server for serving the Web UI (user interface) to the clients and a Database Transaction Layer for the communication with the Database and the File System (game installations). The web server is an Apache [24] server with support for PHP [25] and SQLite [26] (as a PHP module), which is the database used in the HOME version of the system. The basic procedure of the Processing Server is depicted in Figure 7.

Figure 7: Basic procedure of the Processing Server.


When a client wants to connect to the system, it tries to locate the LPS that is running the G@L HOME system. The UPnP daemon running on the LPS helps each end device locate the server's IP address. The application running on each client launches a web browser with the given IP address, and the LPS's Web Server starts interacting with the client. The client is served with the corresponding web UI (a different UI for each type of end device). The server is told which UI to send by a parameter that is passed together with the IP of the server in the web browser. After the log-in procedure of the end user, the game selection phase is launched.

When the user selects a game to play, the main client application is launched and the main communication procedures between the client and the server begin. The client informs the LPS of its request to play the selected game, and the LPS decides whether the request can be served. During the decision procedure the server, with the help of the UPnP and QoS modules, observes the current system status and network utilization. If the game's software, hardware, and network demands are met, then the game initialization procedure begins. The client is also informed that the launching of the game is imminent and can thus begin its own initialization. Once initialization is complete, the game is launched, with the 3D commands or video of the game streamed to the client. Additionally, the client streams the user's input commands to the server; the commands coming from the client are processed on the server side and delegated to the window of the game.
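The connection and admission flow can be sketched as follows; all names, the "ui" parameter, and the resource thresholds are illustrative assumptions, since the text does not specify the actual URL format or decision criteria:

```java
public class SessionLaunch {
    public enum DeviceType { NOTEBOOK, SET_TOP_BOX, HANDHELD }

    // URL the client opens after UPnP discovery; the "ui" parameter name
    // is an assumption (the text only says a parameter selects the UI).
    public static String launchUrl(String lpsIp, DeviceType type) {
        return "http://" + lpsIp + "/?ui=" + type.name().toLowerCase();
    }

    // Admission-decision sketch: launch only if the game's software,
    // hardware, and network demands fit the observed free resources.
    public static boolean canLaunch(int reqCpuMhz, int reqMemMb, double reqMbps,
                                    int freeCpuMhz, int freeMemMb, double freeMbps) {
        return freeCpuMhz >= reqCpuMhz
            && freeMemMb >= reqMemMb
            && freeMbps >= reqMbps;
    }

    public static void main(String[] args) {
        System.out.println(launchUrl("192.168.0.10", DeviceType.HANDHELD));
        // -> http://192.168.0.10/?ui=handheld
        System.out.println(canLaunch(1500, 300, 6.0, 2400, 1024, 10.0)); // true
    }
}
```

The real LPS would feed `canLaunch` with live figures from the UPnP and QoS modules rather than constants.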

EXPERIMENTAL RESULTS

In order to demonstrate multiple game execution and analyze system performance, we designed a testbed [27] in which we could monitor the performance of the network, the devices, and the Games@Large system processes while running simultaneous game sessions. Figure 8 shows our testbed setup.


Figure 8: Games@Large testbed.

We performed our experiments with two client notebooks, one running Sprill (a casual game) and the other the Red Faction demo (a first-person shooter). As long as the LPS has the capacity to execute multiple games at once, it can run them. Since the games' graphics are not rendered on the LPS (when 3D streaming), the games compete neither for the GPU nor for the full-screen mode.

The most important hardware requirement for the client device is the video adapter: it should have hardware acceleration capabilities to enable fast rendering of 3D scenes. As on the server, the graphics resources that a game stores in video memory should be available in system memory to enable manipulation, prediction, and caching, so the memory requirement for the client is about 200–300 MB available to the client application for fairly heavy games. Besides the frame rate and technical requirements, such as hardware and network bandwidth, for a game to be playable on the end device in the Games@Large system it has to be compatible with the end device's screen size and controller capabilities (e.g., some games cannot be played on small displays, other games cannot be controlled with the gamepad).

The above-mentioned tests were performed over a Wi-Fi network with QoS support. There was no other traffic present on the network than that produced by the two game sessions (except SNMP/WMI packets used for system monitoring, but these create a very small network load). The mean round trip time was measured for both clients (we sent an SNMP ping of 30 bytes from the server, once per second, 30 times). Two additional laptops were used to generate a background traffic load on the network when testing the QoS capabilities of the solution. Similar to the game laptops, Laptop C was connected using a wireless connection and Laptop D with a wired connection. The laptops used were standard PC


laptops equipped with IEEE 802.11g and WMM-enabled wireless interfaces or 100 Mbps Ethernet interfaces. The AP was a normal WLAN AP with an addition of priority queuing in the AP kernel buffers in the case of WMM queues. The test setup is depicted in Figure 10.

Figure 10: Test setup.

In each of the test cases, playing the game (Sprill, using 3D streaming) was accompanied by a background traffic load on the same network. In the UDP case, a stream was sent from Laptop D to Laptop C, generated by an open source traffic generator; in the TCP case, the load was a file download between D and C using FTP or HTTP. The tests were performed with and without QoS; in the QoS case, the game traffic was given video priority and the background traffic best-effort priority. Downlink and uplink throughput, delay, jitter, and packet loss were measured with the QoSMeT tool [31], and the game client's realized frame rate was measured with Fraps [29]. The downlink performance is visualized in Figure 11 in terms of throughput and delay for the case with equal priority, and in Figure 12 for the case where gaming has higher priority. The effect of introducing the prioritization can be clearly seen by comparing Figure 11 with Figure 12.
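The reported throughput and delay figures are simple aggregates over measured samples; a small sketch of how such link statistics can be computed (a hypothetical helper, not the QoSMeT implementation):

```java
public class LinkStats {
    // Mean delay over a set of samples, in milliseconds.
    public static double meanDelayMs(double[] samplesMs) {
        double sum = 0.0;
        for (double s : samplesMs) sum += s;
        return sum / samplesMs.length;
    }

    // Throughput in Mbit/s for a byte count observed over a time window.
    public static double throughputMbps(long bytes, double seconds) {
        return bytes * 8.0 / 1_000_000.0 / seconds;
    }

    public static void main(String[] args) {
        // 1,250,000 bytes observed in one second is 10 Mbit/s.
        System.out.println(throughputMbps(1_250_000L, 1.0)); // 10.0
        System.out.println(meanDelayMs(new double[] {1.0, 2.0, 3.0})); // 2.0
    }
}
```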


Figure 11: Downlink throughput and delay when both the game and the background traffic have the same priority.

Figure 12: Downlink throughput and delay when the game has higher priority than the background traffic.

The complete results are presented in Tables 3 and 4 for the two cases, respectively. In the case without prioritization, the game really suffers from the background traffic load in the congested WLAN: the delay increases up to around 20 times as high as in uncongested conditions. This causes the realized frame rate at the client to decrease by almost 90 percent, which, together with the increased delay, practically destroys the game experience. When the game traffic is prioritized over the background load, the delay stays at


an acceptable level and the realized frame rate of the game at the client decreases by less than 10 percent.

Table 3: Average values when the game and the background traffic have the same priority.

Table 4: Average values when the game has higher priority than the background traffic.
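The degradation percentages quoted above follow from a simple relative-decrease computation; the frame-rate values in `main` are illustrative only, not the measured ones from Tables 3 and 4:

```java
public class Degradation {
    // Relative degradation in percent between an uncongested baseline
    // measurement and a congested one (e.g., realized frame rates).
    public static double percentDecrease(double baseline, double measured) {
        return (baseline - measured) / baseline * 100.0;
    }

    public static void main(String[] args) {
        // Illustrative: a drop from 25 fps to 3 fps is an 88% decrease,
        // on the order of the "almost 90 percent" reported without QoS.
        System.out.println(percentDecrease(25.0, 3.0));
        // Illustrative: 25 fps to 23 fps is an 8% decrease, within the
        // "less than 10 percent" reported with prioritization.
        System.out.println(percentDecrease(25.0, 23.0));
    }
}
```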

CONCLUSIONS

In this paper, we have presented a new distributed gaming platform for cross-platform video game delivery. An innovative architecture, transparent to legacy game code, allows distribution of cross-platform gaming and entertainment to a variety of low-cost networked devices that are not able to run such games themselves. This framework enables easy access to the game catalogue via a web-based interface adapted for different end devices. A generalized protocol supports end devices with both OpenGL and DirectX APIs. We have shown that it is feasible to use a single PC to execute multiple games and stream them with high visual quality to concurrently connected clients via a wireless network using the QoS solution. The developed technology enables putting PC gaming onto TV screens, a rapidly emerging trend in the gaming market, and also enables pervasive video game access on handheld devices.


As future work, to support a wider range of titles we will need to implement the interception layer for all the graphics libraries used by the games that Games@Large can support. We are also investigating the possibility of using or developing a transport protocol (e.g., based on RTP) that could replace TCP for 3D streaming to improve its performance over a wireless network. For video streaming, current research is focused on reducing the computational complexity of the H.264 encoder itself by incorporating enhancements based on the available rendering context information, using the motion prediction and predicting the macroblock partitioning. In addition, we plan to evaluate the system with users in a real environment in order to gather knowledge about users' perceptions and investigate the subjective expectations of gamers.

ACKNOWLEDGMENTS

The work presented in this paper has been developed with the support of the European Integrated Project Games@Large (Contract IST-038453), which is partially funded by the European Commission.


REFERENCES

1. Y. Tzruya, A. Shani, F. Bellotti, and A. Jurgelionis, “Games@Large—a new platform for ubiquitous gaming and multimedia,” in Proceedings of the Broadband Europe Conference (BBEurope ’06), Geneva, Switzerland, December 2006.
2. S. Cacciaguerra and G. D’Angelo, “The playing session: enhanced playability for mobile gamers in massive metaverses,” International Journal of Computer Games Technology, vol. 2008, Article ID 642314, 9 pages, 2008.
3. I. Nave, H. David, A. Shani, A. Laikari, P. Eisert, and P. Fechteler, “Games@Large graphics streaming architecture,” in Proceedings of the 12th Annual IEEE International Symposium on Consumer Electronics (ISCE ’08), pp. 1–4, Algarve, Portugal, April 2008.
4. D. De Winter, P. Simoens, L. Deboosere et al., “A hybrid thin-client protocol for multimedia streaming and interactive gaming applications,” in Proceedings of the International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV ’06), Newport, RI, USA, May 2006.
5. L. Cheng, A. Bhushan, R. Pajarola, and M. El Zarki, “Real-time 3D graphics streaming using MPEG-4,” in Proceedings of the IEEE/ACM Workshop on Broadband Wireless Services and Applications (BroadWise ’04), pp. 1–16, San Jose, Calif, USA, July 2004.
6. Y. Noimark and D. Cohen-Or, “Streaming scenes to MPEG-4 video-enabled devices,” IEEE Computer Graphics and Applications, vol. 23, no. 1, pp. 58–64, 2003.
7. S. Stegmaier, M. Magallón, and T. Ertl, “A generic solution for hardware accelerated remote visualization,” in Proceedings of the Symposium on Data Visualisation (VISSYM ’02), pp. 87–94, Barcelona, Spain, May 2002.
8. MPEG-4 AVC, “Advanced video coding for generic audiovisual services,” ITU-T Rec. H.264 and ISO/IEC 14496-10 AVC, 2003.
9. MPEG-4 HE-AAC, “ISO/IEC 14496-3:2005/Amd.2”.
10. P. Eisert and P. Fechteler, “Low delay streaming of computer graphics,” in Proceedings of the International Conference on Image Processing (ICIP ’08), pp. 2704–2707, San Diego, Calif, USA, October 2008.
11. D. Marpe, H. Schwarz, and T. Wiegand, “Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 620–636, 2003.
12. RFC 3550, “RTP: A Transport Protocol for Real-Time Applications”.
13. RFC 3984, “RTP Payload Format for H.264 Video”.
14. RFC 3640, “RTP Payload Format for Transport of MPEG-4 Elementary Streams”.
15. D. L. Mills, “Network time protocol version 4 reference and implementation guide,” Tech. Rep. 06-6-1, Department of Electrical and Computer Engineering, University of Delaware, Newark, Del, USA, June 2006.
16. Hawk Software, Hawk Network Library, http://www.hawksoft.com/hawknl.
17. J. Nagle, “Congestion control in IP/TCP internetworks,” RFC 896, January 1984.
18. IEEE Standard 802.11-1999, “Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications”.

International Journal of Computer Games Technology, vol. 2009, Article ID 323095, 9 pages, 2009. https://doi.org/10.1155/2009/323095. Copyright: © 2009 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Card, and proposes a Java API to integrate Smart Cards in the development of MUGs. This user-centric approach brings new forms of gameplay, allowing the player to interact with the game or with other players anytime and anywhere. Smart Cards should also help improve the security, the ubiquity, and the user mobility in traditional MUGs.

INTRODUCTION

We deeply believe that the next step for the gaming industry will be Multiplayer Ubiquitous Games (MUGs). In this type of game, users play simultaneously in the real world and in the virtual world [1]. To manage an MUG system which supports social interactions among interconnected users in both worlds, the system has to manage the equipment deployed in the real world and to compute the state of the virtual world. Our purpose here is to enhance the mobility and the ubiquity in MUGs by using a user-centric approach, which might give rise to new kinds of user interactions.

Various technologies, such as RFID tags, networked objects, or environmental sensors, can be used to help the user interact with his/her physical environment. Moreover, the players can have access to hand-held devices, biomedical sensors, interaction devices, virtual reality glasses, and so forth. Various network connectivities are then used to link all these devices: Wi-Fi, Bluetooth, ZigBee, or cellular phone networks. Finally, an MUG server could run the global game logic, centralize the game data, and bring the players together. A proper way to support this technological heterogeneity is to use a middleware, like uGASP [2, 3], an OSGi-based [4] open-source middleware dedicated to MUGs.

On the gameplay level, MUG systems introduce the concept of Real world Gaming system Interaction (RGI), which is based on the following properties. Firstly, the gameplay relies on the player's physical mobility and often requires context and user adaptation. Secondly, the game interacts with the player in a ubiquitous way (at nondedicated locations through nondedicated objects) and proactively (at uncontrolled times, e.g., through email or phone). Finally, the game leads to social interactions which can take place in the real world or in the virtual world.
So, the design of an MUG has to respond to these complex and uncertain relations between the real world and the game world, and between the player and the real world. Furthermore,


the player should be able to interact with the game despite a network disconnection, for example, to interact with a smart toy in a nonnetworked area.

On the design level, like all games and, more generally, like all entertainment applications, an MUG system should include a user model. An MUG system can be seen as an information system requiring some user personal data in order to integrate the user's real life into the game, for example, his/her phone number or his/her real life social relations. Natkin [5] discusses how such a user model can be used to provide a personalized game experience to the player. We believe that a promising approach for an MUG is to let the player carry his/her profile on a secure portable device, such as an NFC Smart Card.

The Near Field Communication (NFC, [6]) Smart Cards are a fast growing member of the large Smart Card family. Today, Smart Cards are widespread devices with cryptographic and storage capabilities, and tamper-resistant properties. This makes them well suited to security-sensitive applications, for example in the identity, telecommunication, or banking domains. Their non-self-powered essence implies the use of a reader/Card Acceptance Device (CAD) that can power up the card and interact with it. The NFC technology enables them to interact with their environment in a contactless manner, most primarily with mobile phones.

To our knowledge, no solution for managing MUG player profiles on a Smart Card has been provided so far. Besides, it appears that many game systems do not let the player keep playing in any networking environment when some personal data are involved. The work undertaken here is part of the PLUG research project [7]. PLUG is led by the CNAM-CEDRIC computer science research laboratory in collaboration with Musée des Arts et Métiers, Orange Labs, Institut Sud Telecom, the L3i lab from University La Rochelle, and a game studio, TetraEdge. It aims at creating an MUG inside the CNAM museum that takes into account the player characteristics. This MUG is built on top of the uGASP middleware.

This paper introduces a user-centric approach dedicated to MUG systems. Our approach consists in using an NFC Smart Card to store the MUG player profile, that is, the data describing the player. The profile can then be used in interactions with the surrounding NFC devices, and the Smart Card provides the security mechanisms needed to protect this personal data.


We have designed an API, the MUGPPM API (MUG Player Profile Management API), for the management of the MUG player profile (MUGPP) on Java-based devices at the card, reader, and server levels. The rest of the paper is organized as follows. Section 2 defines the MUGPP. Section 3 discusses the use of Smart Cards in the management of user profiles. Section 4 presents the storage of the player profile on an NFC Smart Card for MUGs and the kinds of new interactions it could bring to the user and the MUG system. In Section 5, the general architecture of the system is presented and the security issues related to the protection of the data in the card are discussed. Section 6 describes the new kinds of interactions using our API. The last section concludes and gives our perspectives for future work.

PLAYER PROFILE DEFINITION FOR MUG (MUGPP)

The essence of gameplay is designing a game with regard to the user's point of view. This point of view is implicitly or explicitly coded in the game system: all games and all entertainment applications include a user model. In single player games, it ranges from a rough classification of the target players and a limited memory of player actions in the game to a complex cognitive model. In multiplayer games, the model contains social attributes and behaviors. In multiplayer ubiquitous games, the model has to be cognitive, social, and related to the history and current situation of the player in both the virtual and the real world.

Considering that the user's space of activity embeds computing devices and that information systems become more and more ubiquitous and pervasive, there is a need to consider the interaction between the real and the virtual world in a mixed reality mode, and the possible actions of the user in both universes. So the user model has to take into account the state and behavior of the user not only in classical online gaming situations but also in augmented outdoor or mobile gaming environments.

We define a player profile, the MUG Player Profile (MUGPP), to gather and classify distinctive information about the player. This information is the deductive basis for the game decision mechanism. The MUGPP guides the game decision engine to offer diverse game experiences to players. The game quests adapt the game scenario to the personal context of the player, which leads to an action that is executed both


in the game and in the real world. The main goal of using the MUGPP along with the automatic generation of the narration is to decide which type of quest should be proposed to which player, so as to promote social relations between players. In this way, the playability of the game is augmented: the game is persistent and adaptable, and each player can have a unique experience.

The MUGPP depends on a set of parameters that can be either statically defined or dynamically updated, reflecting real-time changes in the user's physical states or even in the user's social features. It implies a personalized level of parameters in the user model [5]. Since the player is represented in both the real world and the virtual system, we have to consider his/her knowledge of the gameplay from several different points of view. It is very useful to distinguish the user's general information from game-related information, as they are processed by different game mechanisms. The following three groups describe the kinds of data gathered in the MUGPP.

The first group describes the user “as a person,” unrelated to his/her game practice: civil status, preferences, and so forth. Most of this data can only be provided directly by the player during the creation of the game account. Since this data changes infrequently, it has to be accessible by any MUG on the game platform, so the player does not need to register his/her civil status every time he/she plans to play a new game.

The second group describes the user “as a player.” It includes some exact information corresponding to the basic choices of the player: the type of account, distribution of the duration of play in each location, and so forth. It also includes statistical data or real-time data gathered during play: his/her physical location, his/her interaction with the various interactive devices in the real environment, and so forth.

The third group describes the user “as a character” in a given game, from both a statistical and a real-time point of view, such as the standard information of his/her avatar, his/her equipment and inventory, or his/her social relations in the game.
This data could be used by the game server to propose special customized game events to the players, such as quests adjusted to the players' inventories.
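The three groups of profile data can be sketched as a simple data model; all class and field names below are illustrative assumptions, not the paper's actual MUGPP schema:

```java
import java.util.*;

public class MugppModel {
    // "As a person": civil status and preferences, unrelated to game practice.
    public static final class PersonData {
        public String name;
        public int age;
        public final Map<String, String> preferences = new HashMap<>();
    }

    // "As a player": account choices plus statistical/real-time play data.
    public static final class PlayerData {
        public String accountType;
        public long playtimeMinutes;
        public String lastKnownLocation;
    }

    // "As a character": avatar state, inventory, and social relations
    // in a given game.
    public static final class CharacterData {
        public String avatarName;
        public final List<String> inventory = new ArrayList<>();
        public final Set<String> socialRelations = new HashSet<>();
    }

    public final PersonData person = new PersonData();
    public final PlayerData player = new PlayerData();
    // One character entry per game the player is registered in.
    public final Map<String, CharacterData> characters = new HashMap<>();
}
```

Keeping the "as a person" data in its own class reflects the requirement that it be shared across all MUGs on the platform, while character data stays per-game.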


This user model has been experimented with in the prototype MugNSRC [8]. The original game, NSRC, is based on cartoon-type wheelchair races. MugNSRC uses the player's profile and the narrative generation engine as a means to manage and develop a community through cooperative and competitive goals assigned to the players.

The way the profile data is acquired depends on the global architecture of the game. Generally, multiplayer games follow a client-server architecture, and this is the approach taken in MugNSRC. Initial values of each class of data in the MUGPP are computed from information given by the player: the player fills in a form used to set the initial values of MUGPP parameters before the creation of his/her game account. These values can then be changed according to a feedback loop related to the player choices and actions in the game. The user profile is stored and managed on the game server. The disadvantages of such an architecture are that the user has to manage one profile per game and per server, and that the players can cheat easily.

SMART CARDS IN THE MANAGEMENT OF USER PROFILES

Our work focuses on finding a way to efficiently manage player profiles in MUGs in order to provide a more personalized game to the gamers. Since network coverage and network connections are potentially unreliable, an interesting approach to carry out the game in a continuous manner is to let the player carry his/her player profile along with him/her, so that the user is still able to play despite disconnection. This assessment leads us to build a distributed and persistent information system for game data, and especially for what we called the MUGPP information.

To manage this information, wearable devices are appropriate: mobile phones, PDAs, Smart Cards, game consoles, memory cards, and so forth. Among those, Smart Cards are a good compromise in terms of wearability, security mechanisms, and costs. Smart Cards are the most secure and widespread portable computing devices today. They have been used successfully around the world in various applications involving money, proprietary data, and personal data (such as banking, pay-TV, GSM subscriber identification, health


insurance, etc.). The Java Card [9] and the Microsoft .NET framework for Smart Cards are platforms that support a multiapplication environment and, in their modern versions, tend to go multithread. One of the key elements of Smart Cards is to improve on basic magnetic stripe cards with dynamically programmable microcontrollers, cryptographic coprocessors, and means to protect the embedded data. Furthermore, Java Card platforms usually embed additional mechanisms to protect the embedded code and data against attacks. Combined with their low cost (they are sold in large volumes), this makes them ideal for any ubiquitous security-sensitive environment.

Today Smart Cards are small computers, providing 8-, 16-, or 32-bit CPUs with clock speeds ranging from 5 up to 40 MHz, ROM memory between 32 and 128 KB, EEPROM memory (writable, persistent) between 16 and 64 KB, and RAM memory (writable, nonpersistent) between 3 and 5 KB. Smart Cards communicate with the rest of the world through Application Protocol Data Units (APDUs, ISO 7816-4 standard). The communication is done in client-server mode, with the Smart Card playing the role of the server: it is always the terminal application that initiates the communication by sending a command APDU to the card, and the card replies by sending back a response APDU (possibly with an empty content).

Smart Cards can be accessed through a reader. Access has traditionally meant inserting the Smart Card in the reader. However, the trend is to interact in a contactless manner, to improve the Human Computer Interface (HCI) aspects. The Near Field Communication (NFC) technology provides devices with the ability to interact within a short range (less than 10 centimeters) by radio signal. This technology stems from the recent RFID market development. It works at a 13.56 MHz frequency, provides a 424 kbit/s bandwidth, and supports half-duplex communication between devices. NFC Smart Cards combine the two previous technologies, so they are easily accessible in a contactless manner.
Since these cards are non-self-powered, the radio signal from a reader is used to power the Smart Card integrated circuit, in the same manner as RFID tags. In the context of ubiquitous systems, the user can either carry an NFC Smart Card, which is readable within a short range by an NFC reader, or carry a reader, which is able to interact with the NFC devices disseminated over an area. As far as we know, there is no MUG that makes use of a Smart Card. However, there are some similarities between using a Smart Card for an MUG and using a Smart Card dedicated to commercial


applications like public transportation systems and banking applications. Today, numerous cities in the world use contactless Smart Card-based systems to manage their public transportation system. For instance, Paris commuters can use their contactless Smart Card (Navigo) as a means to access transportation facilities (trains, buses, etc.) as well as the public bicycle network (Velib). The latter involves a network of bicycle stations, which are equipped with NFC readers, and a central authority. The Smart Card stores user-related data, for example, the log of the stations he/she went through. The core of this type of distributed information system is the management of user data on Smart Cards.

There has been some effort to manage a health profile with PicoDBMS [10], a DBMS dedicated to Smart Cards. PicoDBMS has also been used in work [11] on a Smart Card-based framework for the management of user profiles. The managed profile is personalizable (the user can specify his/her preferences). The security approach is that of the P3P (Platform for Privacy Preferences) [12] normalization group. The framework offers two security levels, the less secure being the less Smart Card intensive. The approach leaves out any gaming/ubiquitous computing perspective and focuses on generic profile information.

Ubiquitous systems should introduce a middleware to support this distributed information system. There are three essential components in these systems: the users and their Smart Cards, the readers, and a central authority server.

PLAYING MUGS WITH NFC SMART CARDS

The game system of some existing MUGs, such as [13–15], relies on the capability to control all the physical objects integrated in the game, their impacts on the player, and all the various real-world embedded sensors, which take part in a hierarchy of networks. The participants of MUGs often experience the heavy load of physical wearable devices, or they have to deal with network disconnection problems [16]. Our proposal consists in using a Smart Card as an add-on interface for the interactions between the player, the virtual world, and the real world. We propose to store the player's profile, that is, the MUGPP, on an NFC Smart Card, which enables the player to have access to some of his/her game-related information. The player can monitor his/her game process, manage his/her


game objects, and even visualize or be informed of the game progress through the NFC devices embedded in the surrounding environment or by using a mobile terminal. In the context of an NFC Smart Card-based player profile, the player interacts with the system by presenting his/her Smart Card to an NFC reader or to his/her mobile phone's integrated reader.

The update of the MUGPP is executed automatically by the system and manually by the player. Firstly, the MUGPP can be renewed by the player's physical interaction, that is to say, the player's physical movement and behavior in the real environment (outdoor and indoor). As the real environment is embedded with tangible objects, the player's physical location can be “tracked” as he/she walks through the game zones. The interaction between a Smart Card and a smart object using NFC readers can be performed without any connection to the game server. Every time the player comes close to a Smart Card reader, some of the MUGPP information can be updated and used in any way by dealing with the “as a player” data.

Secondly, the MUGPP is updated following the communication or social interaction among players in the real world. The players should be able to sell and buy the game items they own to other players even while they are offline. In this way, the “as a player” and “as a character” data can be updated dynamically. The social dimension of the gameplay is extended to the spatial and temporal dimensions of the game. Therefore, the game system can trigger and control some game events in real time and real space for a group of players in the same game zone. Thus, the MUGPP can be updated during the real-time interactions between the players, the game, and the physical space.

Playing MUGs with a Smart Card is a relatively new experience for the user, which will bring new forms of interaction to the players, new contents, and new security features. Using a Smart Card gives the players new ways to interact with the game, potentially without any display device.
This means that an automatic tangible interaction between the NFC Smart Card and the NFC reader can take place simply by bringing them close to one another. For the user, the most accessible and affordable mobile terminal is the mobile phone, and some models integrate the NFC technology, like the Nokia 6131 NFC and the Sagem My700x. Therefore, we suggest using an NFC mobile phone to run a client application in our MUG system.
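As an illustration of such display-less tangible interaction, the sketch below logs a visited game zone in the "as a player" data whenever the card meets a reader; the class and method names are hypothetical, not part of the proposed API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ZoneLog {
    // "As a player" location history; held on-card in the real system.
    private final List<String> visitedZones = new ArrayList<>();

    // Called when the card comes within range of a reader embedded in a
    // game zone; no connection to the game server is required.
    public void onReaderDetected(String zoneId) {
        visitedZones.add(zoneId);
    }

    public List<String> visitedZones() {
        return Collections.unmodifiableList(visitedZones);
    }
}
```

The game server can later reconcile this on-card log with the virtual-world state the next time the player is online.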


An ideal MUG is a digital environment with smart objects surrounding the user, allowing him/her to interact with the game anywhere. Therefore, we can embed NFC readers in smart objects, such as Nabaztag [17], which could interact with each user's Smart Card. To enrich the user experience, a television decoder may also integrate an NFC reader so that the player could gain access to the multimedia content related to the game.

With a Smart Card, each player carries his/her own MUGPP, which helps decentralize user data away from the game server. This MUGPP allows the user's personal information to be reused by several game mechanisms and to be completed by several applications. The interest of having an MUGPP on a Smart Card is not only that users have a more “wearable” computing device but also that the game designers can provide each individual with a personalized gaming experience. In the mechanism of an MUG, the MUGPP can take a central role rather than being a peripheral data store, becoming the basis for offering a customized service to the end user.

Beyond game data, the Smart Card can also act as a secure bridge between an individual's private information and the related services. For example, it could be possible to register the information of the player's bank account on the Smart Card, which allows the player to have access to a paying service. In “World of Warcraft” (Blizzard Entertainment, 2004), the user can register his/her bank account on the game server, which can be unsafe despite the login/password protection, in order to obtain some special services from the game editor. Looking further ahead, this could enlarge the possibilities of license management, for example based on biometric or voice-based identity. As a consequence, there is a need to support Smart Cards and NFC readers in the MUG system architecture. We describe an API which provides this service in more detail in the following sections.

ARCHITECTURE TO MANAGE MUGPP ON AN NFC SMART CARD

The NFC interactions in MUGs (see Section 4) and the management of the MUG player profile (see Section 2) are key issues of our proposal. The main component of this architecture is the service that manages the MUG player profile on the external NFC Smart Card (the MUG Player Profile Manager, MUGPPM). We have implemented a library which enables Java 2 Micro Edition [18] (J2ME) Mobile Information Device


Profile- (MIDP-) based mobile phones to exchange data with Smart Cards and game server logic. The server is itself implemented in J2SE and the Smart Card part of the application is a Java Card cardlet. Finally, we use the security mechanisms to ensure the privacy of the player profile data. Figure 1 presents an overview of the MUGPPM architecture.

Figure 1: MUGPPM architecture overview.

Oncard Service

Our card-side implementation targets card applications based on the Java Card platform that comply with parts 1, 2, and 3 (type A) of the ISO 14443 [19] standard. An oncard Java applet is dedicated to the MUGPPM. It implements a set of instructions to handle communication with an NFC reader. These instructions are built using the Application Protocol Data Unit (APDU) protocol defined in the ISO/IEC 7816 standard. Besides, it maintains the player profile data model with some added security features.

APDU Instructions

The APDU instruction set used in MUGPPM allows the following: (i) to manage the user authentication (login) on the card, (ii) to manage the profile field entries, for example, the username, age, or playtime, (iii) to manage the game object entries, for example, entries corresponding to game data objects, like inventory items, and (iv) to manage the private and public key entries. In addition, a game object stored on the card can be marked as sharable so that it may be exchanged with other players.


Nevertheless, it is the MUG game designer who has to decide if a game object is sharable or not. Table 1 shows the instructions used by the MUGPPM. It details the parameters of each instruction and the corresponding response of the Smart Card.

Table 1: APDU instructions used in MUGPPM
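The APDU dispatch performed by such an oncard applet can be illustrated with a short sketch. This is not the MUGPPM cardlet itself: the instruction codes, the field key, and the dispatch logic below are hypothetical (the real instruction set is the one listed in Table 1), and only the short-form command layout (CLA, INS, P1, P2, Lc, data) follows ISO/IEC 7816-4.

```java
import java.util.Arrays;

/** Minimal ISO/IEC 7816-4 short-form command APDU parser and dispatcher. */
public class ApduSketch {
    // Hypothetical instruction codes; the real MUGPPM codes are in Table 1.
    static final byte INS_GET_FIELD = (byte) 0x10;
    static final byte INS_SET_FIELD = (byte) 0x12;

    final byte cla, ins, p1, p2;
    final byte[] data;

    ApduSketch(byte[] apdu) {
        // Header: CLA, INS, P1, P2; then Lc and Lc data bytes (short form).
        cla = apdu[0]; ins = apdu[1]; p1 = apdu[2]; p2 = apdu[3];
        int lc = apdu.length > 4 ? (apdu[4] & 0xFF) : 0;
        data = Arrays.copyOfRange(apdu, 5, 5 + lc);
    }

    /** Dispatch on the instruction byte, as a cardlet's process() would. */
    String dispatch() {
        switch (ins) {
            case INS_GET_FIELD: return "get field " + p1;
            case INS_SET_FIELD: return "set field " + p1 + " (" + data.length + " bytes)";
            default:            return "unknown instruction";
        }
    }

    public static void main(String[] args) {
        // CLA=0x80, INS=SET_FIELD, P1=field key 1, P2=0, Lc=3, data="Bob"
        byte[] cmd = {(byte) 0x80, INS_SET_FIELD, 0x01, 0x00, 0x03, 'B', 'o', 'b'};
        System.out.println(new ApduSketch(cmd).dispatch()); // set field 1 (3 bytes)
    }
}
```

On a real Java Card, the same dispatch would live in the applet's `process()` method and reply with status words rather than strings.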

Data Model

The field lengths have been bounded due to the memory limitation that characterizes Smart Card platforms. We tested our implementation on an Oberthur Cosmo Dual card which offers only 72 KB of memory space (EEPROM). Once installed on the card, the package containing our oncard application uses around 6 KB. We also use a 4 KB buffer for the profile data. The profile objects themselves include a number of byte arrays (264 bytes), and a couple of OwnerPIN objects to manage the user password and the game provider password (the object size depends on the Java Card Virtual Machine (JCVM) implementation, but the password itself is limited to 8 bytes). Furthermore, the RSA key material stored on the card requires less than 2 KB (depending on the JCVM implementation). Therefore, we can store several player profiles on a single Smart Card.

NFC Reader Side Service

The main functionalities of the NFC reader API are to access the MUG player profile stored on the Smart Card and to communicate with the profile-based services that are hosted on the MUG server.


The APDUDataManager class is used to establish the NFC communication toward the card and to send APDU-formatted instructions to it. The GameProfile class gives access to the player profile contents, and a dedicated communication class handles object-oriented HTTP communications with the server based on the MooDS protocol [20].

We have prototyped a J2ME version of our MUGPPM service to have a Java mobile phone access the oncard MUGPPM service. This choice is obvious considering the mobile phone is the most widespread mobile terminal for end-users. Moreover, in 2007, some J2ME mobile phones embedding NFC readers, such as the Nokia 6131 NFC or the Sagem my700X, were placed on the market. An API to help establish a contactless communication between a J2ME mobile phone and an NFC Smart Card was released the same year: the JSR257 [19]. Another API also enables APDU-based communication on J2ME mobile phones: the JSR177 [21]. However, the use of this API is not mandatory in the case of an external NFC Smart Card. In fact, it offers essential mechanisms enabling the mobile phone to communicate with its embedded SIM card. Thus, our prototype uses the JSR257 functionalities to initiate a communication between the mobile phone and the Smart Card.

At the first use of the MUGPPM service, the player creates his/her MUGPP on the Smart Card. To do so, he/she has to enter a login and a password to protect the profile, and then his/her personal information, which the API stores on the card in the profile representation. Afterward, the MUG client game engine can start using the player profile stored on the Smart Card. Figure 2 presents the architecture used to provide the MUG client game engine with an access to the MUG player profile on the NFC Smart Card. It is important to notice that the interaction between the mobile phone and the NFC Smart Card depends on the player since he/she has to hold the card near the mobile phone during the whole process, for example, during a game save. The MUG interaction thus relies on a tangible Human Computer Interaction (HCI).


Figure 2: MUGPPM architecture for J2ME devices.

Besides, our prototype can communicate with an MUG HTTP server. It uses the MooDS protocol to communicate with the MUG server in an object-oriented manner. The developer can create objects which represent the messages used during the client-server communication. Thus, when the MUG client wants to invoke a profile-based service on the server, it has to instantiate the corresponding message object and send it through the MooDS encoder. We have created a message to invoke a server-side service by name. The client can then retrieve the server response using the MooDS decoder and handle the decoded message objects. If the service requires data from the Smart Card, the client receives a CardDataRequest message from the server which contains a list of required field keys. Then, the MUGPPM API retrieves the corresponding data from the Smart Card and sends a DataCardResponse object to the server. Finally, it receives the service response, for example, a player list from a lobby service.

Server Side Service

The MUGPPM server API offers a Java based MUG server the ability to create a profile-based service. It helps create personalized services, for example, a profile-based lobby or a profile-based quest provider. The server API and the client API have a similar class to handle MUGPP contents: the GameProfile class. For example, if the server requires the player nationality, it has to request the corresponding field key from the Smart Card and to handle the card response. The communication part of the API is also based on the MooDS protocol.


To request a service, the client sends a service request message with the name of the service needed. Then, if the service requires personal data stored on the Smart Card, the server sends back a CardDataRequest message containing the list of required field keys. In return, it receives from the client a DataCardResponse message which contains the required data. Finally, the service computes the response based on the received profile data and sends the result back to the client.
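This exchange can be simulated end to end in a short sketch. The service name, profile keys, and lobby logic below are invented for illustration; only the CardDataRequest/DataCardResponse roles come from the text, and the real messages travel MooDS-encoded over HTTP rather than as in-process calls.

```java
import java.util.*;

/** Toy simulation of the MUGPPM service request flow. */
public class ProfileServiceFlow {

    /** Server side: a CardDataRequest lists the profile keys a service needs. */
    static List<String> cardDataRequest(String serviceName) {
        if (serviceName.equals("lobby")) {
            return Arrays.asList("age", "languages");
        }
        return Collections.emptyList();
    }

    /** Server side: compute the lobby response from the DataCardResponse fields. */
    static String resolveLobby(Map<String, String> dataCardResponse) {
        return "players matching age=" + dataCardResponse.get("age")
             + " languages=" + dataCardResponse.get("languages");
    }

    public static void main(String[] args) {
        // 1. The client requests the "lobby" service by name.
        List<String> neededKeys = cardDataRequest("lobby");

        // 2. The client reads the requested keys from the (simulated) Smart Card.
        Map<String, String> card = new HashMap<>();
        card.put("age", "25");
        card.put("languages", "en,fr");
        Map<String, String> response = new HashMap<>();
        for (String key : neededKeys) {
            response.put(key, card.get(key));
        }

        // 3. The server resolves the service from the returned card data.
        System.out.println(resolveLobby(response)); // players matching age=25 languages=en,fr
    }
}
```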

MUGPP and Security

Some of the MUGPP data deal with the user’s private life. Furthermore, the lack of a sound and secure authentication procedure typically makes cheating in MUGs an easy feat [22, 23]. There is a need to use improved security mechanisms to act against those threats. The players and the terminal the players use (in our case a mobile phone) must be authenticated, and the oncard application itself can be reliably developed using Java Card.

In order to ensure the security of the player’s private data, the card requires an authentication from the reader. This authentication process is based on a personal login/password chosen by the player during his/her account creation. We use the OwnerPIN class on the card to safely store the user password. The login procedure needs to be performed to authorize the access to the smart card cryptographic functionalities. When the user is not playing any more, the user is logged out from the Smart Card. The application provider uses another PIN code to block/unblock the user account.

We chose to use a public key infrastructure to help the MUG system designers ensure the security of the application. Yet, the management of the keys on a Smart Card is a non trivial issue. The Smart Card requires a personalization phase during which a key pair is created and stored on the card static memory. The server side also requires a key pair, and an X.509 certificate is used to authenticate the server’s public key. This public key infrastructure guarantees the privacy of communications between the server and the Smart Card. Thus, when the user logs in, he/she authenticates not only with the Smart Card but also with the server. He/she can then have access to a higher security level than just a password-based protocol.

When the application needs to interact with the server, the server sends its public key and certificate to the card. The Smart Card then checks the


validity of the key. If the key happens to be valid, the Smart Card can keep the public key. The Smart Card can then send its own public key to the server. All subsequent interactions between the server and the client can then use an encryption/decryption scheme based on one’s private key and the other’s public key. This mutual authentication offers a stronger guarantee than just a login/password and might help thwart some common online game cheats. One advantage here is that no critical data is transmitted in plain text format over the network.

A common cheat is the replacement of code or data concerning the game. The simple fact of using a Smart Card to manage the MUGPP makes this kind of tampering harder, since modifying the profile data stored on the card requires the corresponding cryptographic keys. The game designer might also want to check an additional server signature for any profile data written on the card.

Another cheat consists in abusing the game procedures. For instance, a player can log out before he/she loses a game. Making the signing of some game procedures by the server necessary can be used as a countermeasure against such cheats. The mobile aspect of our framework implies that some interactions between two players can occur out of a connection with the game server. For instance, in a role-playing game, the players might want to exchange an item. This operation could take place without a server while still guaranteeing the non-repudiation property.
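The server-signature check on profile writes can be sketched with the JDK's java.security API as a stand-in; on a real card the verification would run through the Java Card cryptography classes, and the key size and data below are illustrative only.

```java
import java.nio.charset.StandardCharsets;
import java.security.*;

/** Sketch of a server-signed profile write, verified before acceptance. */
public class SignatureSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the server key pair created during personalization.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair server = gen.generateKeyPair();

        byte[] profileUpdate = "inventory+=sword".getBytes(StandardCharsets.UTF_8);

        // Server signs the profile data it writes to the card.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(server.getPrivate());
        signer.update(profileUpdate);
        byte[] sig = signer.sign();

        // Card side: verify the signature with the server's public key
        // before accepting the write.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(server.getPublic());
        verifier.update(profileUpdate);
        System.out.println(verifier.verify(sig)); // true
    }
}
```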

MUGPPM USE CASES

Our library can be used for different types of interactions, connected or disconnected from an MUG server point of view. For example, the secure architecture of the MUGPPM can only be used safely with a network connection in order to validate public keys with a signing authority through a registered MUG server. So, secure interactions have to be carried out in a connected way. However, disconnected interactions are possible without strong security mechanisms, particularly for local interactions. Thus, an MUG can introduce NFC checkpoints or local object exchange mechanisms between players using this API.


Connected MUG Interaction Examples

Our framework can be used to provide various profile-based connected services in a secure way, like providing players with personalized quests or locating players who speak a common language in a game area.

Via mineralia [24] is a pervasive search and quiz game in the museum of Terra Mineralia in Freiberg. The goal of the game is to realize quests in the context of the mineral exposition. Each point of interest is represented by an RFID tag on the mineral. The MUGPPM could be used in this application to check the visitor card at the museum entry (with an NFC reader) in order to adapt the quests to the visitor’s profile. For example, the application could take into account the player’s level of expertise (novice, expert, etc.) and propose personalized quests. Moreover, regular visitors could resume a quest undertaken previously.

As a proof of concept, we have implemented a profile-based lobby service on top of the MUGPPM secure architecture. This service uses the player’s age and the languages he/she knows. The server asks for the user’s required personal data while using the security part of MUGPPM. Finally, the lobby service computes the list of connected players matching the required age and spoken languages and returns it to the client. That type of service could have been used in games like the item hunt game “Mogi Mogi’’ [15]. In this game, some users have been using a lobby-like application to spy on other younger players. Bypassing the game rules this way can be controlled using our API. Indeed, as the private data is stored on a secure decentralized device (unlike a game server), fraudulent use of personal data becomes harder, and a lobby service built on our architecture can prevent that type of behavior.

Disconnected MUG Interaction Examples

MUG game designers can integrate disconnected interactions in their game by using the MUGPPM API. Paranoia Syndrome [25] is a classic strategic game that integrates some location based interactions and RFID tangible objects. One of the perspectives of the game is that multimedia content and basic AI will be added to the tangible objects to serve different content depending on the player type (doctor, scientist, alien, etc.). With MUGPPM, the interactive objects


(with an embedded NFC reader) could adapt their content and interactions to the player profile stored on the Smart Card. For instance, an MUG could use the player’s age in order to assign a course to the player in the game area. This interaction can be made between the player and an NFC checkpoint and does not necessarily require a server side resolution.

In addition, MUGs can implement game object exchange mechanisms between players. Such a service should give two players in the same real world area the ability to exchange some game items from their inventories. This interaction can be made by peering the mobile phones of the players over a local communication link. The NFCIPConnection class from the com.nokia.nfc.p2p package (available in the Nokia JSR257 implementation) allows establishing an NFC link between two phones. We have implemented a game object exchange service, on top of our API, that allows a player to send one sharable item from his/her game inventory to another player. We consider here that each player has previously loaded his/her player profile from the Smart Card. The player who wants to send an item to his/her friend has to select the item from his/her list and the sender mode, whereas the other player has to select the receiver mode. The players must approach their mobile phones in order to set up the P2P link. As soon as the connection is established, the object is sent as a byte array onto the network. Then, the receiver handles the binary data corresponding to the item and can add it to his/her inventory. Finally, the new inventories of both players will be updated in the Smart Card during their next game save.

This kind of exchange illustrates the interest of our API for the MUG domain: it does not require the players to be connected with the central MUG server in order to interact in the game. Thus, our library enables new interactions for MUG in a totally decentralized manner.

To evaluate the performance of our application, we used the Mesure project [26], which is dedicated to measuring the performance of smart cards.
The Mesure project provides detailed time performance of individual bytecodes and API calls. Given the use cases described earlier, we monitored the use of each bytecode and each API call for a regular use of our application. We then matched the list of used bytecodes and API calls with the individual performance of each feature measured on our smart card. The results show


that the time necessary to perform an RSA encryption with the smart card is close to half a second, and it is by far the costliest of the operations described earlier. By comparison, logging into the smart card lasts less than 20 milliseconds.

CONCLUSIONS AND PERSPECTIVES

This paper presents an NFC Smart Card based approach to handle the player profile in the context of MUGs. This NFC card centric architecture allows new kinds of interactions in both centralized and decentralized ways. The main advantage of our method is to allow the players to play at any time, and anywhere, hence the ubiquitous aspect of the game. We have presented the MUGPPM API which is dedicated to the Java Card/J2ME/J2SE platforms. This enables MUG developers to implement a Smart Card based architecture to provide profile-based services. Thus, players can have a personalized game experience. Besides, this API provides the player with a secure way to ensure a certain level of data confidentiality.

We will release the MUGPPM server API as an open source OSGi bundle to be integrated in the uGASP [2, 3] middleware. Thereafter, game developers could implement MUGs based on this framework, therefore offering personalized services. On the basis of our framework, it is possible to specialize and realize an authoring tool for the development of MUGs. It would be interesting to consider using the NFC Smart Card from a more conceptual point of view during the design of the game. Using Smart Cards in MUGs may also give rise to the future direction of game design by developing new forms of interaction and narration based on new technology of mobility and ubiquity.

The question of “who personalizes the Smart Card’’ remains open. In traditional banking, telecom or transport applications, this is carried out by the card emitting company. However, the growing multiapplication aspect of Smart Cards makes it more and more questionable. In our case, the card could be personalized by the player, which raises new requirements for a secure application. Still, the application provider keeps some control over the card, for example, through its own PIN code.

Future works include a generalization of the security architecture in terms of key sizes and algorithms, depending on the functionalities of a given Smart Card.
In addition, we will generalize the API to facilitate the description of the player profile data model. On the server side,


we will provide tools to ease the creation of profile-based services. On the card side, we will investigate the PicoDBMS database [10] to handle the player profile storage.

We await the results of another project: T2TIT [27] (Things to Things in the Internet of Things). This project proposes to interact with contactless objects, going as far as to give them a network identity, while keeping some strong security properties. The eventual conclusions of T2TIT can be helpful to us; for instance, we can expect to use some encrypted channels. We intend to use the T2TIT security mechanisms in our work. The newly published Java Card specification also offers new features for Smart Cards, which open perspectives that were not considered in this paper.

Regarding oncard security, information flow verification techniques (see [28]) might also provide us with some strong oncard inter-application protection. Such static verification techniques are clearly complementary to the traditional cryptographic schemes, and as the Smart Card industry is integrating more and more of those, so should we.


REFERENCES

1. S. Björk, M. Börjesson, P. Ljungstrand et al., “Designing ubiquitous computing games—a report from a workshop exploring ubiquitous computing entertainment,” Personal and Ubiquitous Computing, vol. 6, no. 5-6, pp. 443–458, 2002.
2. R. Pellerin, E. Gressier-Soudan, and M. Simatic, “uGASP: an OSGi based middleware enabling multiplayer ubiquitous gaming,” in Proceedings of the International Conference on Pervasive Services (ICPS ‘08), Sorento, Italy, July 2008, Demonstration Workshop.
3. GASP/uGASP project, http://gasp.ow2.org.
4. OSGi alliance, http://www.osgi.org/Main/HomePage.
5. S. Natkin and C. Yan, “User model in multiplayer mixed reality entertainment applications,” in Proceedings of the ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE ‘06), Hollywood, Calif, USA, June 2006.
6. NFC Forum, http://www.nfc-forum.org/home.
7. The PLUG project, http://www.capdigital.com/plug/.
8. C. Yan, Adaptive multiplayer ubiquitous games: design principles and an implementation framework, Ph.D. thesis, Cotutelle Research Program with Orange Labs and CNAM, Paris, France, 2007, Supervisor: Stephane Natkin.
9. Java Card platform, http://java.sun.com/javacard.
10. P. Pucheral, L. Bouganim, P. Valduriez, and C. Bobineau, “PicoDBMS: scaling down database techniques for the smartcard,” Very Large Data Bases Journal, vol. 10, no. 2-3, pp. 120–132, 2001.
11. Proceedings of the Ubiquitous Mobile Information and Collaboration Systems (UMICS ‘03), Klagenfurt/Velden, Austria, June 2003.
12. Platform for Privacy Preferences (P3P) Project, http://www.w3.org/P3P.
13. S. Jonsson, A. Waern, M. Montola, and J. Stenros, “Game mastering a pervasive larp. Experiences from momentum,” in Proceedings of the 4th International Symposium on Pervasive Gaming Applications (PerGames ‘07), C. Magerkurth et al., Eds., pp. 31–39, Salzburg, Austria, June 2007.


14. O. Sotamaa, “All the world’s a Botfighter stage: notes on location-based multi-user gaming,” in Proceedings of the Computer Games and Digital Cultures Conference (CGDC ‘02), F. Mäyrä, Ed., Tampere, Finland, June 2002.
15. Mogi Mogi, http://www.mogimogi.com.
16. D. Cheok et al., “Human Pacman: a mobile, wide-area entertainment system based on physical, social, and ubiquitous computing,” Personal and Ubiquitous Computing, vol. 8, no. 2, pp. 71–81, 2004.
17. Friedrich von Borries, Steffen P. Walz, and Matthias Böttger, “Mogi: Location-Based Services—A Community Game in Japan,” in Space Time Play, pp. 224–225, Birkhäuser Basel, Switzerland, 2008, http://www.springerlink.com/content/j0277056ult42551.
18. J2ME MIDP, http://java.sun.com/javame/index.jsp.
19. JSR257, http://jcp.org/en/jsr/detail?id=257.
20. R. Pellerin, “The MooDS protocol: a J2ME object-oriented communication protocol,” in Proceedings of the 4th Mobility Conference, Singapore, September 2007.
21. JSR177, http://jcp.org/en/jsr/detail?id=177.
22. J. Yan and B. Randell, “A systematic classification of cheating in online games,” in Proceedings of the 4th ACM SIGCOMM Workshop on Network and System Support for Games (NetGames ‘05), New York, NY, USA, October 2005.
23. N. E. Baughman, M. Liberatore, and B. N. Levine, “Cheat-proof playout for centralized and peer-to-peer gaming,” IEEE/ACM Transactions on Networking, vol. 15, no. 1, pp. 1–13, 2007.
24. G. Heumer, F. Gommlich, B. Jung, and A. Müller, “Via Mineralia: a pervasive museum exploration game,” in Proceedings of the 4th International Symposium on Pervasive Gaming Applications (PerGames ‘07), pp. 157–158, June 2007.
25. G. Heumer, D. Carlson, S. H. Kaligiri et al., “Paranoia Syndrome: a pervasive multiplayer game using PDAs, RFID, and tangible objects,” in Proceedings of the 3rd International Symposium on Pervasive Gaming Applications (PerGames ‘06), pp. 157–158, June 2007.
26. The Mesure project, http://mesure.gforge.inria.fr.
27. P. Urien et al., “The T2TIT research project. Introducing HIP RFIDs for the IoT,” in Proceedings of the 1st International Workshop on System


Support for the Internet of Things (WoSSIoT ‘07), Lisbon, Portugal, March 2007.
28. D. Ghindici, G. Grimaud, and I. Simplot-Ryl, “An information flow verifier for small embedded systems,” in Proceedings of the International Workshop on Information Security Theory and Practices (WISTP ‘07), vol. 4462 of Lecture Notes in Computer Science, pp. 189–201, May 2007.

CHAPTER 9

Real-Time Large Crowd Rendering with Efficient Character and Instance Management on GPU Yangzi Dong and Chao Peng Department of Computer Science, University of Alabama in Huntsville, Huntsville, AL 35899, USA

ABSTRACT Achieving the efficient rendering of a large animated crowd with realistic visual appearance is a challenging task when players interact with a complex game scene. We present a real-time crowd rendering system that efficiently manages multiple types of character data on the GPU and integrates seamlessly with level-of-detail and visibility culling techniques. The character data, including vertices, triangles, vertex normals, texture coordinates, skeletons, and skinning weights, are stored as either buffer

Citation: Y. Dong and C. Peng, “Real-Time Large Crowd Rendering with Efficient Character and Instance Management on GPU”, International Journal of Computer Games Technology, vol. 2019, Article ID 1792304, 15 pages, 2019. https://doi.org/10.1155/2019/1792304.

Copyright: © 2019 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


objects or textures in accordance with their access requirements at the rendering stage. Our system preserves the view-dependent visual appearance of individual character instances in the crowd and is executed with a fine-grained parallelization scheme. We compare our approach with the existing crowd rendering techniques. The experimental results show that our approach achieves better rendering performance and visual quality. Our approach is able to render a large crowd composed of tens of thousands of animated instances in real time by managing each type of character data in a single buffer object.

INTRODUCTION

Crowd rendering is an important form of visual effects. In video games, thousands of computer-articulated polygonal characters with a variety of appearances can be generated to inhabit a virtual scene like a village, a city, or a forest. Movements of the crowd are usually programmed through a crowd simulator [1–4] with given goals. To achieve a realistic visual approximation of the crowd, each character is usually tessellated with tessellation algorithms [5], which increases the character’s mesh complexity to a sufficient level, so that fine geometric details and smooth mesh deformations can be preserved in the virtual scene. As a result, the virtual scene may end up with a composition of millions, or even hundreds of millions, of vertices and triangles. Rasterizing such a massive amount of vertices and triangles into pixels incurs a high computational cost. Also, when storing them in memory, the required amount of memory may be beyond the storage capability of the graphics hardware. Thus, in the production of video games [6–9], advanced crowd rendering technologies are needed in order to increase the rendering speed and reduce memory consumption while preserving the crowd’s visual fidelity.

To improve the diversity of character appearances in a crowd, a common method is duplicating a character’s mesh many times and then assigning each duplication a different texture and a varied animation. Some advanced methods allow developers to modify the shape proportions of a character’s body [10, 11] or synthesize new motions [12, 13]. With the support of hardware-accelerated geometry-instancing and pseudo-instancing techniques [9, 14–16], multiple data of a character, including vertices, triangles, textures, skeletons, skinning weights, and animations, can be cached in the memory of a graphics processing unit (GPU). Each time the virtual scene needs


to be rendered, the renderer will alter and assemble those data dynamically without the need of fetching them from CPU main memory. However, storing the duplications on the GPU consumes a large amount of memory and limits the number of instances that can be rendered. Furthermore, even though the instancing technique reduces the CPU-GPU communication overhead, it may suffer from the lack of dynamic mesh adaption (e.g., continuous level-of-detail).

In this work, we present a rendering system which achieves a real-time rendering rate for a crowd composed of tens of thousands of animated characters. The system ensures a full utilization of GPU memory and computational power through the integration with continuous level-of-detail (LOD) and View-Frustum Culling techniques. The size of memory allocated for each character is adjusted dynamically in response to the change of levels of detail, as the camera’s viewing parameters change. The scene of the crowd may end up with more than one hundred million triangles. Different from existing instancing techniques, our approach is capable of rendering all different characters through a single buffer object for each type of data. The system encapsulates the multiple data of each unique source character into buffer objects and textures, which can then be accessed quickly by shader programs on the GPU within a general-purpose GPU programming framework.

The rest of the paper is organized as follows. Section 2 reviews the previous works about crowd simulation and crowd rendering. Section 3 gives an overview of our system’s rendering pipeline. In Section 4, we describe fundamentals of continuous LOD and animation techniques and discuss their parallelization on the GPU. Section 5 describes how to process and store the source character’s multiple data and how to manage instances on the GPU. Section 6 presents our experimental results and compares our approach with the existing crowd rendering techniques. We conclude our work in Section 7.
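As a toy illustration of this distance-driven adjustment of per-character memory, the sketch below shrinks a character's triangle budget as it moves away from the camera. The inverse-square heuristic and every constant are ours, not the authors' actual LOD scheme.

```java
/** Toy distance-driven LOD budget: a character's triangle (and hence
 *  memory) allocation shrinks as it moves away from the camera.
 *  The heuristic and all constants are illustrative only. */
public class LodBudget {
    static final int FULL_TRIANGLES = 20000; // full-detail mesh size (made up)
    static final int MIN_TRIANGLES  = 200;   // coarsest level allowed

    /** Triangle budget falls off with the square of the distance. */
    static int triangleBudget(double distance) {
        double near = 1.0; // distance at which full detail is used
        double scale = Math.min(1.0, (near * near) / (distance * distance));
        return Math.max(MIN_TRIANGLES, (int) (FULL_TRIANGLES * scale));
    }

    public static void main(String[] args) {
        System.out.println(triangleBudget(1.0));  // 20000 (full detail)
        System.out.println(triangleBudget(2.0));  // 5000
        System.out.println(triangleBudget(10.0)); // 200 (clamped to minimum)
    }
}
```

In a continuous-LOD system like the one described above, such a budget would feed a mesh simplification stage rather than select among fixed discrete versions.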

RELATED WORK

Simulation and rendering are two primary computing components in a crowd application. They are often tightly integrated as an entity to enable a special form of in situ visualization, which in general means data is rendered and displayed by a renderer in real time while a simulation is running and generating new data [17–19]. One example is the work presented


by Hernandez et al. [20] that simulated a wandering crowd behavior and visualized it using animated 3D virtual characters on GPU clusters. Another example is the work presented by Perez et al. [21] that simulated and visualized crowds in a virtual city. In this section, we first briefly review some previous work contributing to crowd simulation. Then, more related to our work, we focus on acceleration techniques contributing to crowd rendering, including level-of-detail (LOD), visibility culling, and instancing techniques.

A crowd simulator uses macroscopic algorithms (e.g., continuum crowds [22] and related models [23–25]) and microscopic algorithms (e.g., agent-based models [26] and socially plausible behaviors [27]) to create crowd motions and interactions. Outcomes of the simulator are usually a successive sequence of time frames, and each frame contains arrays of positions and orientations in the 3D virtual environment, each of which describes the global status of a character at a given time frame. McKenzie et al. [28] developed a crowd simulator to generate noncombatant civilian behaviors which is interoperable with a simulation of modern military operations. The authors of [29] classified existing crowd simulation technologies based on the size and time scale of simulated crowds. The work in [30] presented a GPU framework to simulate the behavior of a crowd at interactive frame rates in a complex virtual environment, and the authors of [31] performed large scale simulations that resulted in tens of thousands of simulated agents.

Visualizing a large number of simulated agents using animated characters is a challenging computing task and worth an in-depth study. Beacco et al. [9] surveyed previous approaches for real-time crowd rendering. They reviewed and examined existing acceleration techniques and pointed out that LOD techniques have been used widely in order to achieve high rendering performance, where a far-away character can be represented with a coarse version of the character as the alternative for rendering. A well-known approach is using discrete LOD representations, which are a set of precomputed versions of a character at different detail levels; at runtime, the renderer selects a desired version and renders it without any additional processing cost. However, discrete LODs require too much memory for storing all the versions, and, as discussed in [32], discrete LODs could cause “popping” visual artifacts because of the sudden switch between two detail levels. Dobbyn et


al. [33] introduced a hybrid rendering approach that combines image-based and geometry-based rendering techniques. They evaluated the rendering quality and performance in an urban crowd simulation. In their approach, the characters in the distance were rendered with image-based LOD representations, and they were switched to geometric representations when they were within a closer distance. Although the visual quality seemed better than using discrete LOD representations, popping artifacts also occurred when the renderer switches content between image representations and geometric representations. Ulicny et al. [34] presented an authoring tool to create crowd scenes of thousands of characters. To provide users immediate visual feedback, they used low-poly meshes for source characters. The meshes were kept in OpenGL’s display lists on GPU for fast rendering. Characters in a crowd are polygonal meshes. The mesh is rigged by a skeleton. Rotations of the skeleton’s joints transform surrounding vertices, and subsequently the mesh can be deformed to create animations. While LOD techniques for simplifying general polygonal meshes have been studied maturely (e.g., progressive meshes [35], quadric error metrics [36]), not many existing works studied how to simplify animated characters. Landreneau and '   >     >   >       >     { >>    >  Š    !  >  criteria with the consideration of vertices’ skinning weights from the joints of the skeleton and the shape deviation between the character’s rest pose and a deformed shape in animations. Their approach produced more accurate     >          ’ƒ;= based approaches, but it caused a higher computational cost, so it may be challenging to integrate their approach into a real-time application. Willmott [38] presented a rapid algorithm to simplify animated characters. The algorithm was developed based on the idea of vertex clustering. The author mentioned the possibility of implementing the algorithm on the GPU. 
However, in comparison to the algorithm with progressive meshes, it did not preserve the quality of the characters' appearance as well. Feng et al. [39] employed triangular-char geometry images to preserve the features of both static and animated characters. Their approach achieved high rendering performance by implementing geometry images with multiresolutions on the GPU. In their experiment, they demonstrated a real-time rendering rate for a crowd composed of 15.3 million triangles. However, there could be a potential LOD adaptation issue if the geometry


Computer Games Technology

images become excessively large. Peng et al. [8] proposed a GPU-based LOD-enabled system to render crowds along with a novel texture-preserving simplification algorithm, so that continuous LOD representations of the characters could be generated during the runtime. However, their approach was based on the simulation of single virtual humans; instantiating and rendering multiple types of characters were not possible in their system. Savoy et al. [40] presented a web-based crowd rendering system that employed the discrete LOD and instancing techniques. Visibility culling is another type of acceleration technique for crowd rendering. With visibility culling, a renderer is able to reject a character from the rendering pipeline if it is outside the view frustum or blocked by other characters or objects. Visibility culling techniques do not reduce visual fidelity, since only characters invisible to the viewer are rejected. The authors of [41] subdivided the virtual environmental map into a 2D grid and used it to build a KD-tree of the virtual environment. The large and static objects in the virtual environment, such as buildings, were used as occluders. Then, an occlusion tree was built at each frame and merged with the KD-tree. Barczak et al. [42] integrated GPU-accelerated View-Frustum Culling and occlusion culling techniques into a crowd rendering system. The system used a vertex shader to test whether or not the bounding sphere of a character intersects with the view frustum. A hierarchical Z-buffer image was built dynamically in a vertex shader in order to perform occlusion culling. Hernandez and Rudomin [43] combined View-Frustum Culling and LOD Selection; desired detail levels were assigned only to the characters inside the view frustum. Instancing techniques have been commonly used for crowd rendering. Their execution is accelerated by GPUs with graphics APIs such as DirectX and OpenGL. Zelsnack [44] presented coding details of the pseudo-instancing technique using the OpenGL shading language (GLSL).
The pseudo-instancing technique requires per-instance calls to be sent to and executed on the GPU. Carucci [45] introduced the geometry-instancing technique, which renders all vertices and triangles of a crowd scene through a geometry shader using one call. Millan and Rudomin [14] used the pseudo-instancing technique for rendering full-detail characters that were closer to the camera. The far-away characters with low details were rendered using impostors (an image-based approach). Ashraf and Zhou [46] used a hardware-accelerated method through programmable shaders to animate crowds. Klein et al. [47]


presented a crowd rendering approach that organized the character data into GPU-friendly structures in order to support an instancing-based rendering mechanism. However, their approach lacked support for multiple character assets.

SYSTEM OVERVIEW

Our crowd rendering system first preprocesses source characters and then performs runtime tasks on the GPU. Figure 1 illustrates an overview of our system. Our system integrates View-Frustum Culling and continuous LOD techniques.

Figure 1: The overview of our system.

During the preprocessing stage, the mesh of each source character is simplified into a sequence of edge-collapsing operations. The simplification algorithm selects edges and collapses them by merging adjacent vertices iteratively, removing the triangles that contain the collapsed edges. The edge-collapsing operations are stored as data arrays on the GPU. Vertices and triangles can be recovered by splitting the collapsed edges, and are restored in the reverse order of the edge collapses. Vertex normal vectors are used in our system to determine a proper shading effect for the crowd. A bounding sphere is computed for each source character. It tightly encloses all vertices in all frames of the character's animation. The bounding sphere is used during the runtime to test an instance against the view frustum. Note that bounding spheres may have different sizes because the sizes of source characters may differ. Other


data including textures, UVs, skeletons, skinning weights, and animations are packed into textures. They can be accessed quickly by shader programs and the general-purpose GPU programming framework during the runtime.

The runtime pipeline is executed on the GPU through several parallel processing components. We use an instance ID in shader programs to track the index of each instance, which corresponds to the occurrence of a source character at a global location and orientation in the virtual scene. A unique source character ID is assigned to each source character, which is used by an instance to index back to the source character it is instantiated from. We assume that the desired number of instances is specified by the user as an input parameter of the system. The positions and orientations of instances simulated from a crowd simulator are passed into our system as input. They determine where the instances should occur in the virtual scene. The component of View-Frustum Culling determines the visibility of instances. An instance is considered visible if its bounding sphere is inside or intersects with the view frustum. The component of LOD Selection determines the desired detail level of the instances. It is executed with an instance-level parallelization. A detail level is represented as the numbers of vertices and triangles selected, which are assembled from the vertex and triangle repositories. The component of LOD Mesh Generation produces LOD meshes using the selected vertices and triangles. The size of GPU memory may not be enough to store all the instances at their finest details. Thus, a vertex budget is passed into the runtime pipeline as a global constraint to bound the total complexity of the LOD meshes on the GPU. The components of LOD Selection and LOD Mesh Generation together produce the simplified versions (LOD meshes) of the instances. At the end of the pipeline, the rendering component rasterizes the LOD meshes along with appropriate textures and UVs and displays the result on the screen.
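The View-Frustum Culling test described above can be sketched as follows. This is a minimal illustration rather than the paper's code, and it assumes the frustum is supplied as six inward-facing planes of the form (nx, ny, nz, d):

```python
# Sketch of the View-Frustum Culling component: an instance is visible if
# its bounding sphere is inside or intersects the frustum. A frustum is
# modeled as six inward-facing planes (nx, ny, nz, d) with unit normals,
# so the signed distance n . p + d is >= 0 for points inside.

def sphere_in_frustum(center, radius, planes):
    """Return True if the sphere is fully or partially inside the frustum."""
    cx, cy, cz = center
    for nx, ny, nz, d in planes:
        # Signed distance from the sphere center to the plane.
        if nx * cx + ny * cy + nz * cz + d < -radius:
            return False  # completely outside this plane -> culled
    return True

# A unit-cube "frustum" around the origin (axis-aligned, for illustration).
planes = [
    ( 1, 0, 0, 1), (-1, 0, 0, 1),   # x in [-1, 1]
    ( 0, 1, 0, 1), ( 0,-1, 0, 1),   # y in [-1, 1]
    ( 0, 0, 1, 1), ( 0, 0,-1, 1),   # z in [-1, 1]
]
print(sphere_in_frustum((0.0, 0.0, 0.0), 0.5, planes))   # inside -> True
print(sphere_in_frustum((3.0, 0.0, 0.0), 0.5, planes))   # outside -> False
print(sphere_in_frustum((1.2, 0.0, 0.0), 0.5, planes))   # intersecting -> True
```

In practice this test runs per instance in parallel on the GPU, but the plane arithmetic is identical.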

FUNDAMENTALS OF LOD SELECTION AND CHARACTER ANIMATION

During the step of preprocessing, the mesh of each source character is simplified by collapsing edges. As in existing work, the collapsing criteria in our approach preserve features at high-curvature regions [39] and avoid collapsing the edges on or crossing texture seams [8]. Edges are collapsed one-by-one. We utilized the same method presented in [48],


which saved collapsing operations into an array structure suitable for the GPU architecture. The index of each array element represents the source vertex, and the value of the element represents the target vertex it merges to. By using the array of edge collapsing, the repositories of vertices and triangles are rearranged in an increasing order, so that, at runtime, the desired complexity of a mesh can be generated by selecting a successive sequence of vertices and triangles from the repositories. Then, the skeleton-based animations are applied to deform the simplified meshes. Figure 2 shows the different levels of detail of several source characters that are used in our work. In this section, we briefly describe the techniques of LOD Selection and character animation.

Figure 2: The examples showing different levels of detail for seven source characters. From top to bottom, the numbers of triangles are as follows: (a) Alien: 5,688, 1,316, 601, and 319; (b) Bug: 6,090, 1,926, 734, and 434; (c) Daemon: 6,652, 1,286, 612, and 444; (d) Nasty: 6,848, 1,588, 720, and 375; (e) Ripper Dog: 4,974, 1,448, 606, and 309; (f) Spider: 5,868, 1,152, 436, and 257; (g) Titan: 6,518, 1,581, 681, and 362.
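To illustrate how an edge-collapse array supports runtime detail selection, here is a simplified sketch loosely following the triangle reformation idea of [48]. The array layout and the names (`collapse_to`, `lod_triangles`) are ours, not the authors':

```python
# Illustrative sketch of recovering an LOD mesh from an edge-collapse
# array: collapse_to[v] is the target vertex that v merges into, and
# vertices are ordered so that indices >= vNum are the ones collapsed
# first. A triangle survives if its three remapped vertices stay distinct.

def remap(v, vNum, collapse_to):
    # Follow the collapse chain until reaching a vertex kept at this LOD.
    while v >= vNum:
        v = collapse_to[v]
    return v

def lod_triangles(tris, vNum, collapse_to):
    out = []
    for a, b, c in tris:
        a, b, c = [remap(v, vNum, collapse_to) for v in (a, b, c)]
        if a != b and b != c and a != c:      # degenerate triangles vanish
            out.append((a, b, c))
    return out

# Five vertices; vertex 4 collapses into 1, and vertex 3 collapses into 0.
collapse_to = [0, 1, 2, 0, 1]
tris = [(0, 1, 2), (1, 4, 2), (0, 3, 4)]
print(lod_triangles(tris, 3, collapse_to))   # keep only 3 vertices
print(lod_triangles(tris, 5, collapse_to))   # full detail: all triangles
```

Because kept vertices always form a prefix of the repository, the runtime can select a successive sequence, exactly as described above.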

LOD Selection

Let us denote K as the total number of instances in the virtual scene. A desired level of detail for an instance can be represented as the pair {vNum, tNum}, where vNum is the desired number of vertices and tNum is the desired number of triangles. Given a value of vNum, the value of tNum can be retrieved from the prerecorded edge-collapsing information [48]. Thus, the goal of LOD Selection is to determine an appropriate value of vNum


for each instance, with consideration of the available GPU memory size and the instance's spatial relationship to the camera. If an instance is outside the view frustum, its vNum is set to zero. For the instances inside the view frustum, we used the LOD Selection metric in [48] to compute vNum, as shown in


vNum_i = (Z_i / Σ_{j=1}^{K} Z_j) · N, where Z_i = A_i / (D_i)^β    (1)

Equation (1) is the LOD Selection metric presented by Peng and Cao [48]. It originates from the model perception method presented by Funkhouser et al. [49] and was improved by Peng and Cao [48, 50] to accelerate the rendering of large CAD models. We found that (1) is also a suitable metric for large crowd rendering. In the equation, N refers to the total number of vertices that can be retained on the GPU, which is a user-specified budget determined by the available GPU memory. The value of N can be tuned to balance the rendering performance and visual quality. Z_i is the weight computed from the projected area of the bounding sphere of the ith instance on the screen (A_i) and its distance to the camera (D_i). β is the perception parameter introduced by Funkhouser et al. [49]. In our work, the value of β is set to 3. With vNum and tNum, the successive sequences of vertices and triangles are retrieved from the vertex and triangle repositories of the source character. By applying the parallel triangle reformation algorithm [48, 50], the LOD mesh of each instance is then generated with the selected vertices and triangles.
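A CPU-side sketch of how the vertex budget N could be distributed across visible instances. The weight form Z_i = A_i / D_i^β and the optional clamp to full detail (`v_full`) are our assumptions for illustration, not code from the paper:

```python
# Sketch of the LOD Selection metric: the budget N is split across visible
# instances in proportion to Z_i = A_i / D_i**beta, where A_i is the
# projected bounding-sphere area, D_i the distance to the camera, and
# beta the perception parameter (set to 3 in the paper).

def select_vnum(areas, dists, visible, N, beta=3, v_full=None):
    Z = [(a / d ** beta) if vis else 0.0
         for a, d, vis in zip(areas, dists, visible)]
    total = sum(Z) or 1.0            # avoid division by zero if all culled
    vnums = [int(N * z / total) for z in Z]
    if v_full is not None:           # never exceed an instance's full detail
        vnums = [min(v, f) for v, f in zip(vnums, v_full)]
    return vnums

areas = [10.0, 10.0, 10.0]
dists = [1.0, 2.0, 4.0]
visible = [True, True, False]        # the third instance is culled
print(select_vnum(areas, dists, visible, N=9000))   # -> [8000, 1000, 0]
```

Note how a culled instance receives a vNum of zero, and nearer instances claim a disproportionately large share of the budget because of the β exponent.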

Animation

In order to create character animations, each LOD mesh has to be bound to a skeleton along with skinning weights added to influence the movement of the mesh's vertices. As a result, the mesh is deformed by rotating the joints of the skeleton. As we mentioned earlier, each vertex may be influenced by a maximum of four joints. We want to note that the vertices forming the LOD mesh are a subset of the original vertices of the source character. No new vertex is introduced during the preprocess of mesh simplification. Because of this, we were able to use the original skinning weights to influence the LOD mesh. When transformations defined in an animation frame are


loaded on the joints, the final vertex position is computed by summing the weighted transformations of the skinning joints. Let us denote each of the four joints influencing a vertex V as Jnt_i, where i ∈ [0, 3]. The weight of Jnt_i on the vertex V is denoted as W_Jnt_i. Thus, the final position of the vertex V, denoted as P′_V, can be computed by using


P′_V = Σ_{i=0}^{3} W_Jnt_i · G · T_Jnt_i · T⁻¹_Jnt_i · P_V    (2)

In (2), P_V is the vertex position at the time when the mesh is bound to the skeleton, that is, when the mesh is placed in the initial binding pose. When using an animation, the inverse of the binding pose needs to be multiplied by an animated pose. This is reflected in (2), where T⁻¹_Jnt_i is the inverse of the binding transformation of the joint Jnt_i, and T_Jnt_i represents the transformation of the joint Jnt_i from the current frame of the animation. G is the transformation representing the instance's global position and orientation. Note that the transformations T⁻¹_Jnt_i, T_Jnt_i, and G are represented in the form of 4×4 matrices. The weight W_Jnt_i is a scalar value, and the four weights of a vertex sum to 1.
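Equation (2) can be exercised on the CPU with a small sketch. The matrix helpers and the names (`inv_bind`, `anim`, `g_mat`) are ours; the paper performs this computation in a vertex shader instead:

```python
# Illustrative sketch of (2): the final vertex position is the weighted
# sum, over up to four skinning joints, of G * T_joint * invBind_joint
# applied to the bind-pose position P_V (column-vector convention).

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

IDENTITY = translate(0, 0, 0)

def skin_vertex(p, joints, weights, inv_bind, anim, g_mat):
    """p: bind-pose position (x, y, z, 1); weights should sum to 1."""
    out = [0.0, 0.0, 0.0, 0.0]
    for j, w in zip(joints, weights):
        m = mat_mul(g_mat, mat_mul(anim[j], inv_bind[j]))
        q = mat_vec(m, p)
        out = [o + w * qi for o, qi in zip(out, q)]
    return out

# One joint bound at the origin (inverse bind = identity), animated by a
# translation of +2 in x; the instance is placed at +10 in z.
inv_bind = [IDENTITY]
anim = [translate(2, 0, 0)]
print(skin_vertex([1.0, 0.0, 0.0, 1.0], [0], [1.0], inv_bind, anim,
                  translate(0, 0, 10)))   # -> [3.0, 0.0, 10.0, 1.0]
```

With more than one joint, the per-joint results are blended by the weights, which is exactly the linear-blend skinning summation in (2).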

SOURCE CHARACTER AND INSTANCE MANAGEMENT

Geometry-instancing and pseudo-instancing techniques are the primary solutions for rendering a large number of instances while allowing the instances to have different global transformations. The pseudo-instancing technique is used in OpenGL and calls instances' drawing functions one-by-one. The geometry-instancing technique has been included in Direct3D since version 9 and in OpenGL since version 3.3. It advances the pseudo-instancing technique by reducing the number of drawing calls: it supports the use of a single drawing call for instances of a mesh and therefore reduces the communication cost of sending call requests from the CPU to the GPU, which increases performance. As regards data storage on the GPU, buffer objects are used by shader programs to access and update data quickly. A buffer object is a continuous memory block on the GPU that allows the renderer to rasterize data in a retained mode. In particular, a vertex buffer object (VBO) stores vertices, and an index buffer object (IBO) stores indices of vertices that form triangles or other polygonal types used in our system. The geometry-instancing technique requires a single copy of vertex data maintained in the VBO, a single copy


of triangle data maintained in the IBO, and a single copy of the distinct world transformations of all instances. However, if the source character has a high geometric complexity and there are many instances, the geometry-instancing technique may make the uniform data type in shaders hit its size limit, due to the large number of vertices and triangles sent to the GPU. In such a case, the drawing call has to be broken into multiple calls. There are two types of implementations for instancing techniques: static batching and dynamic batching [45]. The single-call method in the geometry-instancing technique is implemented with static batching, while the multicall method in both the pseudo-instancing and geometry-instancing techniques is implemented with dynamic batching. In static batching, all vertices and triangles of the instances are saved into a single VBO and IBO. In dynamic batching, the vertices and triangles are maintained in different buffer objects and drawn separately. The implementation with static batching has the potential to fully utilize the GPU memory, while dynamic batching may underutilize the memory. The major limitation of static batching is the lack of LOD and skinning support. This limitation makes static batching unsuitable for rendering animated instances, though it has been shown to be faster than dynamic batching in terms of the performance of rasterizing meshes. In our work, the storage of instances is managed similarly to the implementation of static batching, while individual instances can still be accessed similarly to the implementation of dynamic batching. Therefore, our approach can be seamlessly integrated with LOD and skinning techniques, while making use of a single VBO and IBO for fast rendering. This section describes the details of our contributions to character and instance management, including texture packing, UV-guided mesh rebuilding, and instance indexing.

Packing Skeleton, Animation, and Skinning Weights into Textures

Smooth deformation of a 3D mesh is a computationally expensive process because each vertex of the mesh needs to be repositioned by the joints that influence it. We packed the skeleton, animations, and skinning weights into 2D textures on the GPU, so that shader programs can access them quickly. The skeleton is the binding pose of the character. As explained in (2), the inverse of the binding pose's transformation is used during the runtime. In our approach, we stored this inverse in the texture as the skeletal information.


For each joint of the skeleton, instead of storing individual translation, rotation, and scale values, we stored their composed transformation in the form of a 4×4 matrix. Each joint's binding pose transformation matrix takes four RGBA texels for storage: each RGBA texel stores a row of the matrix, and each channel stores a single element of the matrix. By using OpenGL, matrices are stored in the format of GL_RGBA32F in the texture, which is a 32-bit floating-point type for each channel in one texel. Let us denote the total number of joints in a skeleton as K. Then, the total number of texels to store the entire skeleton is 4K. We used the same format to store an animation. Each animation frame needs 4K texels to store the joints' transformation matrices. Let us denote the total number of frames in an animation as Q. Then, the total number of texels for storing the entire animation is 4KQ. For each animation frame, the matrix elements are saved into successive texels in row order. Here we want to note that each animation frame starts from a new row in the texture. The skinning weights of each vertex are four values in the range [0, 1], each describing the influence percentage of one skinning joint. For each vertex, the skinning weights require eight data elements, where the first four are the indices of the skinning joints, and the last four are the corresponding weights. In other words, each vertex requires two RGBA texels for storage: the first texel stores the joint indices, and the second texel stores the weights.
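The packing layout above implies simple index arithmetic for locating a joint's matrix. The helper below is our own sketch using flat row-major indexing; note the paper additionally aligns each animation frame to a new texture row, which this simplification omits:

```python
# Sketch of the texel-addressing scheme implied by the packing layout:
# each 4x4 joint matrix occupies four RGBA texels (one row per texel), so
# a skeleton of K joints needs 4*K texels, and frame f of an animation
# begins at texel 4*K*f under flat indexing.

def matrix_texel_range(joint_id, frame=0, num_joints=1):
    """First texel index and texel count for a joint's 4x4 matrix."""
    start = 4 * num_joints * frame + 4 * joint_id
    return start, 4

def texel_to_rowcol(texel, tex_width):
    """Convert a flat texel index to (row, column) in the texture."""
    return texel // tex_width, texel % tex_width

K = 30                                  # joints in a hypothetical skeleton
start, count = matrix_texel_range(joint_id=2, frame=5, num_joints=K)
print(start, count)                     # -> 608 4
print(texel_to_rowcol(start, tex_width=128))   # -> (4, 96)
```

The shader fetches those four texels and reassembles them into a 4×4 matrix, one RGBA texel per matrix row.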

UV-Guided Mesh Rebuilding

A 3D mesh is usually a seamless surface without boundary edges. The mesh has to be cut and unfolded into 2D flattened patches before a texture image can be mapped onto it. To do this, some edges have to be selected properly as boundary edges, from which the mesh can be cut. The relationship between the vertices of a 3D mesh and 2D texture coordinates can be described as a texture mapping function F(x, y, z) = {(si, ti)}. Inner vertices (those not on boundary edges) have a one-to-one texture mapping; in other words, each inner vertex is mapped to a single pair of texture coordinates. For the vertices on boundary edges, since boundary edges are the cutting seams, a boundary vertex needs to be mapped to multiple pairs of texture coordinates. Figure 3 shows an example that unfolds a cube mesh and maps it into a flattened patch in 2D texture space. In the figure, ui stands for a point in the 2D texture space. Each vertex on the boundary edges is mapped to more than


one point: F(V0) = {u1, u3}, F(V3) = {u10, u12}, F(V4) = {u0, u4, u6}, and F(V7) = {u7, u9, u13}.

Figure 3: An example showing the process of unfolding a cube mesh and mapping it into a flattened patch in 2D texture space. (a) is the 3D cube mesh formed by 8 vertices and 12 triangles. (b) is the unfolded texture map formed by 14 pairs of texture coordinates and 12 triangles. Bold lines are the boundary edges (seams) used to cut the cube, and the vertices in red are boundary vertices that are mapped to multiple pairs of texture coordinates. In (b), ui stands for a point (si, ti) in 2D texture space, and Vi in the parentheses is the corresponding vertex in the cube.

In a hardware-accelerated renderer, texture coordinates are indexed from a buffer object, and each vertex should associate with a single pair of texture coordinates. Since the texture mapping function produces more than one pair of texture coordinates for boundary vertices, we conducted a mesh rebuilding process to duplicate boundary vertices and mapped each duplicated one to a unique texture point. By doing this, although the number of vertices is increased due to the cuts along boundary edges, the number of triangles remains the same as in the original mesh. In our approach, we initialized two arrays to store UV information: one array stores texture coordinates, and the other stores the indices of texture points with respect to the order of triangle storage. Algorithm 1 shows the algorithmic process to duplicate boundary vertices by looping through all triangles. In the algorithm, Verts is the array of original vertices storing 3D coordinates (x, y, z). Tris is the array of original triangles storing the sequence of vertex indices. Similar to Tris, TexInx is the array of indices of texture points in 2D texture space and represents the same triangular topology as the mesh.


Note that the order of triangle storage for the mesh is the same as the order of triangle storage for the 2D texture patches. Norms is the array of vertex normal vectors. oriTriNum is the total number of original triangles, and texCoordNum is the number of texture points in 2D texture space. Algorithm 1: UV-Guided mesh rebuilding algorithm.

After rebuilding the mesh, the number of vertices in Verts′ will be identical to the number of texture points texCoordNum, and the array of triangles (Tris′) is replaced by the array of indices of the texture points (TexInx).
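Since Algorithm 1 itself is shown as a figure, here is a condensed sketch of the rebuilding idea using the array names from the text (Verts, Tris, TexInx); the Python function is our own illustration, not the paper's code:

```python
# Condensed sketch of UV-guided rebuilding: every 2D texture point becomes
# one output vertex, so boundary vertices are duplicated once per texture
# point that maps to them, and TexInx replaces Tris as the index list.

def rebuild_mesh(verts, tris, tex_inx, tex_coord_num):
    new_verts = [None] * tex_coord_num
    for t in range(len(tris)):            # same triangle order in both lists
        for k in range(3):
            v_id = tris[t][k]             # vertex index in the 3D mesh
            p_id = tex_inx[t][k]          # texture-point index in UV space
            new_verts[p_id] = verts[v_id] # duplicate position per UV point
    return new_verts, tex_inx

# Two triangles sharing an edge that lies on a UV seam: vertices 1 and 2
# each map to two texture points, so 4 vertices become 6.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
tris = [(0, 1, 2), (1, 3, 2)]
tex_inx = [(0, 1, 2), (3, 4, 5)]
new_verts, new_tris = rebuild_mesh(verts, tris, tex_inx, 6)
print(len(new_verts), len(new_tris))     # -> 6 2
```

The vertex count grows to match texCoordNum while the triangle count is unchanged, mirroring the behavior described above.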

Source Character and Instance Indexing

After applying the data packing and mesh rebuilding methods presented in Sections 5.1 and 5.2, the various data of a source character are organized into GPU-friendly data structures. The character's skeleton, skinning weights, and animations are packed into textures and are read-only in shader programs on the GPU. The vertices, triangles, texture coordinates, and vertex normal vectors are stored in arrays and retained on the GPU. During the runtime, based on the LOD Selection result (see Section 4.1), successive subsequences of vertices, triangles, texture coordinates, and vertex normal vectors are selected for each instance and maintained in single buffer objects.


As mentioned in Section 4.1, the simplified instances are constructed in a parallel fashion through a general-purpose GPU programming framework. Then, the framework interoperates with the GPU's shader programs and allows shaders to perform rendering tasks for the instances. Because continuous LOD and animated instancing techniques are used in our approach, instances have to be rendered one-by-one, which is the same as the way of rendering animated instances in the geometry-instancing and pseudo-instancing techniques. However, our approach constructs the data within one execution call, rather than dealing with per-instance data. Figure 4 illustrates the data structures for storing the VBO and IBO on the GPU. Based on the result of LOD Selection, each instance is associated with a vNum and a tNum (see Section 4.1) that represent the numbers of vertices and triangles selected based on the current view setting. We employed CUDA Thrust [51] to process the arrays of vNum and tNum with the prefix sum algorithm in a parallel fashion. As a result, each vNum[i] represents the offset of the vertex count prior to the ith instance, and the number of vertices for the ith instance is (vNum[i+1] − vNum[i]).
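The offset computation that the paper performs with CUDA Thrust can be sketched sequentially: an exclusive prefix sum turns per-instance counts into buffer offsets. The sequential form below is our illustration of the same idea:

```python
# Sketch of the offset computation: an exclusive prefix sum over the
# per-instance vertex counts yields VBO offsets, so instance i owns the
# vertex range [offsets[i], offsets[i+1]).

def exclusive_scan(counts):
    offsets, running = [], 0
    for c in counts:
        offsets.append(running)
        running += c
    offsets.append(running)     # total size, a convenient final sentinel
    return offsets

vnum = [120, 80, 200]           # vertices selected per instance by LOD
offsets = exclusive_scan(vnum)
print(offsets)                  # -> [0, 120, 200, 400]
print(offsets[2] - offsets[1])  # vertex count of instance 1 -> 80
```

On the GPU, Thrust performs this scan in parallel over all instances, after which every instance knows exactly where its slice of the shared VBO and IBO begins.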

Figure 4: An example illustrating the data structures for storing vertices and triangles of three instances in the VBO and IBO, respectively. The data are stored on the GPU, and all data operations are executed in parallel on the GPU. The VBO and IBO store data for all instances, selected from the arrays of original vertices and triangles of the source characters. The vNum and tNum arrays are the LOD Selection result.


Algorithm 2 describes the vertex transformation process, performed in parallel in the vertex shader. It transforms the instance's vertices to their destination positions while the instance is being animated. In the algorithm, charNum represents the total number of source characters. The inverses of the binding pose skeletons form a texture array denoted as invBindPose[charNum]. The skinning weights form a texture array denoted as skinWeights[charNum]. We used a walk animation for each source character, and the texture array of the animations is denoted as anim[charNum]. gMat is the global 4×4 transformation matrix of the instance in the virtual scene. This algorithm is developed based on the data packing formats described in Section 5.1. Each source character is assigned a unique source character ID, denoted as c_id in the algorithm. The drawing calls are issued per instance, so c_id is passed into the shader as an input parameter. The function GetLoc() computes the coordinates in the texture space to locate which texel to fetch. The input of GetLoc() includes the current vertex or joint index (id) that needs to be mapped, the width (w) and height (h) of the texture, and the number of texels (dim) associated with the vertex or joint. For example, to retrieve a vertex's skinning weights, dim is set to 2; to retrieve a joint's transformation matrix, dim is set to 4. In the function TransformVertices(), the vertices of the instance are transformed in a parallel fashion by the composed matrix (composedMat) computed from a weighted sum of matrices of the skinning joints. The function sample() takes a texture and the coordinates located in the texture space as input. It returns the values encoded in texels. The sample() function is usually provided in a shader programming framework. Different from the rendering of static models, animated characters change geometric shapes over time due to continuous pose changes in the animation.
In the algorithm, f_id stands for the current frame index of the instance's animation. f_id is updated in the main loop during the execution of the program.
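A possible realization of GetLoc() follows. The shader source is not reproduced in the text, so the row-major texel layout and the texel-center sampling convention here are our assumptions:

```python
# Sketch of a GetLoc()-style helper: map an element index to normalized
# texture coordinates, where each element spans `dim` consecutive texels
# in row-major order, sampled at texel centers as a GPU shader would.

def get_loc(idx, w, h, dim):
    texel = idx * dim                    # first texel of this element
    row, col = texel // w, texel % w
    return ((col + 0.5) / w, (row + 0.5) / h)

# Vertex 70's skinning weights occupy 2 texels in a 100x100 texture.
print(get_loc(70, 100, 100, dim=2))      # -> (0.405, 0.015)
```

The shader would then call sample() at this location (and at the following texels of the element) to fetch the packed joint indices, weights, or matrix rows.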


Algorithm 2: Transforming vertices of an instance in vertex shader.

EXPERIMENT AND ANALYSIS

We implemented our crowd rendering system on a workstation with an Intel i7-5930K 3.50 GHz CPU, 64 GB of RAM, and an Nvidia GeForce GTX 980 Ti 6 GB graphics card. The rendering system is built with the Nvidia CUDA Toolkit 9.2 and OpenGL. We employed 14 unique source characters. Table 1 shows the data configurations of those source characters. Each source character is articulated with a skeleton and fully skinned and animated with a walk animation. Each one contains an unfolded texture UV set along with a texture image at a resolution of 2048×2048. We stored those source characters on the GPU. Also, all source characters were preprocessed by the mesh simplification and animation algorithms described in Section 4. We stored the edge-collapsing information and the character bounding spheres in arrays on the GPU. In total, the source characters require 184.50 MB of memory for storage. The size of mesh data is much smaller than the size


of texture images. The mesh data requires 16.50 MB of memory for storage, which is only 8.94% of the total memory consumed. At initialization of the system, we randomly assigned a source character to each instance.

Table 1: The data information of the source characters used in our experiment. Note that the data information represents the characters prior to the process of UV-guided mesh rebuilding.

Visual Fidelity and Performance Evaluations

As defined in Section 4.1, N is a memory budget parameter that determines the geometric complexity and the visual quality of the entire crowd. For each instance in the crowd, the corresponding bounding sphere is tested against the view frustum to determine its visibility. The value of N is only distributed across visible instances. We created a walkthrough camera path for the rendering of the crowd. The camera path emulates a gaming navigation behavior and produces a total of 1,000 frames. The entire crowd contains 30,000 instances distributed in the virtual scene. Figure 5 shows a rendered frame of the crowd with the value of N set to 1, 6, and 10 million, respectively. The main camera moves on the walkthrough path. The reference camera is aimed at the instances far away from the main camera and shows a close-up view of those far-away instances. Our LOD-based instancing method ensures that the total number of selected vertices and triangles stays within the budget N, while preserving the details of instances that are closer to the main camera. Although the far-away instances are simplified aggressively, because of their distance to the main camera, their appearance in the main camera does not cause a noticeable loss of visual fidelity. Figure 5(a) shows the rendered frames from the viewpoint of the main camera (top images), in which the far-away instances are displayed with low levels of detail (bottom images).

Figure 5: An example of the rendering result produced by our system using different N values. (a) shows the captured images with N=1,6,10 million. The top images are the rendered frame from the main camera. The bottom images are rendered based on the setting of the reference camera, which aims at the instances that are far away from the main camera. (b) shows the entire crowd including the view frustums of the two cameras. The yellow dots in (b) represent the instances outside the view frustum of the main camera.

If all instances were rendered at the full level of detail, the total number of triangles would be 169.07 million. Through the simulation of the walkthrough camera path, we had an average of 15,771 instances inside the view frustum, and the number of instances inside the view frustum varied from frame to frame. We experimented with different values for N. Table 2 shows the performance breakdowns with regard to the runtime processing components in our system. In the table, the "# of Rendered Triangles" column includes the minimum, maximum, and averaged numbers of triangles selected during the runtime. As we can see, the higher the value of N is, the more triangles are selected to generate


the LOD meshes, and the higher the visual quality that can be achieved. Even when N is set to a large value, such as 20 million, the crowd needs only 26.23 million triangles on average, which is only 15.51% of the original number of triangles. When the value of N is small, the difference between the averaged and the maximum numbers of rendered triangles is notable. For example, when N is equal to 5 million, the ratio (average/maximum) is 73.96%. This indicates that the number of rendered triangles fluctuates with the change of instance-camera relationships (including instances' distances to the camera and their visibility). This is because a small value of N limits the level of detail that an instance can reach: even if an instance is close to the camera, there is not a sufficient budget of N to satisfy a desired detail level. As we can see in the table, when the value of N becomes larger than 10 million, the ratio increases to 94%. The View-Frustum Culling and LOD Selection components are implemented together, and both are executed in parallel at an instance level. Thus, the execution time of this component does not change as the value of N increases. The component of LOD Mesh Generation is executed in parallel at a triangle level. Its execution time increases as the value of N increases. The Animating Meshes and Rendering components are executed with the acceleration of OpenGL's buffer objects and shader programs. They are time-consuming, and their execution time increases as more triangles need to be rendered. Figure 6 shows the change of FPS over different values of N. As expected, the FPS decreases as the value of N increases. When N is smaller than 4 million, the decreasing slope of the FPS is small. This is because the change in the number of triangles over the frames of the camera path is small: when N is small, many close-up instances end up at the lowest level of detail due to the insufficient budget of N. When N increases from 4 to 17 million, the decreasing slope of the FPS becomes larger. This is because the number of triangles over the frames of the camera path varies considerably with different values of N.
As N increases beyond 17 million, the decreasing slope becomes smaller again, as many instances, including far-away ones, reach the full level of detail.


Table 2: Performance breakdowns for the system with a precreated camera path and a total of 30,000 instances. The FPS value and the component execution times are averaged over 1,000 frames. The total number of triangles (before the use of LOD) is 169.07 million.

Figure 6: The change of FPS over different values of N by using our approach. The FPS is averaged over the total of 1,000 frames.

Comparison and Discussion

We analyzed two rendering techniques and compared them against our approach in terms of performance and visual quality. The pseudo-instancing technique minimizes the amount of data duplication by sharing vertices and triangles among all instances, but it does not support LOD on a per-instance level [44, 52]. The point-based technique renders a complex geometry by using a dense set of sampled points in order to reduce the computational cost of rendering [53, 54]. The pseudo-instancing technique does not support View-Frustum Culling. For a fair comparison, in our approach, we ensured all instances were inside the view frustum of the camera by fixing the position of the camera and the positions of all instances, so that all instances are processed and rendered by our approach. The complexity of each instance rendered by the point-based technique is selected based on its distance to the camera, which is similar to our LOD method. When an

š ={ ’ } š    \  }   * 

245

instance is near the camera, the original mesh is used for rendering; when the instance is located far away from the camera, a set of points are approximated as sample points to represent the instance. In this comparison, the pseudoinstancing technique always renders original meshes of instances. We chose two different N values (N= 5 million and N= 10 million) for rendering with our approach. As shown in Figure 7, our approach results in better performance than the pseudo-instancing technique. This is because the number of triangles rendered by using the pseudo-instancing technique is much larger than the number of triangles determined by the LOD Selection component of our approach. The performance of our approach becomes better than the point-based technique as the number of N increases. Figure 8 shows the comparison of visual quality among our approach, pseudo-instancing technique, and point-based technique. The image generated from the pseudo-instancing technique represents the original quality. Our approach can achieve better visual quality than the point-based technique. As we can see in the top images of the figure, the instances far away from the camera rendered by the point-based technique appear to have “holes» due to the disconnection between vertices. In addition, the popping artifact appears when using the point-based technique. This is because the technique uses a limited number of detail levels from the technique of discrete LOD. Our approach avoids the popping artifact since continuous LOD representations of the instances are applied during the rendering.
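The popping artifact discussed above comes from quantizing detail into a few discrete levels. A minimal sketch (with hypothetical distance bands and triangle counts, not the compared techniques' actual parameters) contrasts a continuous detail function with a discrete one:

```python
# Sketch: why discrete LOD pops while continuous LOD does not. Detail is
# modeled as a function of camera distance; the discrete variant quantizes
# detail into a few fixed levels, producing sudden jumps between frames.
def continuous_detail(distance, max_tris=10000):
    return max_tris / distance

def discrete_detail(distance, max_tris=10000, levels=(1.0, 0.25, 0.05)):
    # pick one of a few fixed fractions based on distance bands
    if distance < 5:
        frac = levels[0]
    elif distance < 20:
        frac = levels[1]
    else:
        frac = levels[2]
    return max_tris * frac

def max_jump(detail_fn):
    # largest frame-to-frame change in detail while the camera pulls back
    ds = [1 + 0.5 * i for i in range(60)]
    vals = [detail_fn(d) for d in ds]
    return max(abs(a - b) for a, b in zip(vals, vals[1:]))
```

The largest frame-to-frame change is much bigger for the discrete variant, which is what appears visually as popping when an instance crosses a level boundary.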

Figure 7: The performance comparison of our approach, the pseudo-instancing technique, and the point-based technique over different numbers of instances. Two values of N are chosen for our approach (N = 5 million and N = 10 million). The FPS is averaged over a total of 1,000 frames.


Figure 8: The visual quality comparison of our approach, the pseudo-instancing technique, and the point-based technique. The number of rendered instances on the screen is 20,000. (a) shows the captured image of the rendering result of our approach with N = 5 million. (b) shows the captured image of the rendering result of the pseudo-instancing technique. (c) shows the captured image of the rendering result of the point-based technique. (d), (e), and (f) are the captured images zoomed in on the area of instances which are far away from the camera.

CONCLUSION

In this work, we presented a crowd rendering system that takes advantage of GPU power for both general-purpose and graphics computations. We rebuilt the meshes of source characters based on the flattened pieces of their texture UV sets. We organized the source characters and instances into buffer objects and textures on the GPU. Our approach integrates seamlessly with continuous LOD and View-Frustum Culling techniques. Our system maintains the visual appearance by assigning each instance an appropriate level of details. We evaluated our approach with a crowd composed of 30,000 instances and achieved real-time performance. In comparison with existing crowd rendering techniques, our approach better utilizes GPU memory and reaches a higher rendering frame rate. In the future, we would like to integrate our approach with occlusion culling techniques to further reduce the number of vertices and triangles at runtime and improve the visual quality. We also plan to integrate our crowd rendering system with a simulation in a real game application. Currently, we used only a single walk animation in the crowd rendering system. In a real game application, more animation types should be added, and a motion graph should be created in order to make animations transition smoothly from one to another. We would also like to explore possibilities to port our approach onto a multi-GPU platform, where a more complex crowd could be rendered in real time, with the support of the higher memory capacity and greater computational power provided by multiple GPUs.

ACKNOWLEDGMENTS

This work was supported by the National Science Foundation Grant CNS-1464323. We thank Nvidia for donating the GPU device that was used in this work to run our approach and produce experimental results. The source characters were purchased from cgtrader.com with the buyer's license that allows us to disseminate the rendered and moving images of those characters through the software prototypes developed in this work.


REFERENCES

1. S. Lemercier, A. Jelic, R. Kulpa et al., "Realistic following behaviors for crowd simulation," Computer Graphics Forum, vol. 31, no. 2, pp. 489–498, 2012.
2. A. Golas, R. Narain, S. Curtis, and M. C. Lin, "Hybrid long-range collision avoidance for crowd simulation," IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 7, pp. 1022–1034, 2014.
3. G. Payet, "Developing a massive real-time crowd simulation framework on the GPU," 2016.
4. I. Karamouzas, N. Sohre, R. Narain, and S. J. Guy, "Implicit crowds: optimization integrator for robust crowd simulation," ACM Transactions on Graphics, vol. 36, no. 4, pp. 1–13, 2017.
5. J. R. Shewchuk, "Delaunay refinement algorithms for triangular mesh generation," Computational Geometry: Theory and Applications, vol. 22, no. 1–3, pp. 21–74, 2002.
6. J. Pettré, P. D. H. Ciechomski, J. Maïm, B. Yersin, J.-P. Laumond, and D. Thalmann, "Real-time navigating crowds: scalable simulation and rendering," Computer Animation and Virtual Worlds, vol. 17, no. 3-4, pp. 445–455, 2006.
7. J. Maïm, B. Yersin, J. Pettré, and D. Thalmann, "YaQ: an architecture for real-time navigation and rendering of varied crowds," IEEE Computer Graphics and Applications, vol. 29, no. 4, pp. 44–53, 2009.
8. C. Peng, S. I. Park, Y. Cao, and J. Tian, "A real-time system for crowd rendering: parallel LOD and texture-preserving approach on GPU," in Proceedings of the International Conference on Motion in Games, vol. 7060 of Lecture Notes in Computer Science, pp. 27–38, Springer, 2011.
9. A. Beacco, N. Pelechano, and C. Andújar, "A survey of real-time crowd rendering," Computer Graphics Forum, vol. 35, no. 8, pp. 32–50, 2016.
10. R. McDonnell, M. Larkin, S. Dobbyn, S. Collins, and C. O'Sullivan, "Clone attack! Perception of crowd variety," ACM Transactions on Graphics, vol. 27, no. 3, p. 1, 2008.
11. Y. P. Du Sel, N. Chaverou, and M. Rouillé, "Motion retargeting for crowd simulation," in Proceedings of the 2015 Symposium on Digital Production, DigiPro '15, pp. 9–14, ACM, New York, NY, USA, August 2015.
12. F. Multon, L. France, M.-P. Cani-Gascuel, and G. Debunne, "Computer animation of human walking: a survey," Journal of Visualization and Computer Animation, vol. 10, no. 1, pp. 39–54, 1999.
13. S. Guo, R. Southern, J. Chang, D. Greer, and J. J. Zhang, "Adaptive motion synthesis for virtual characters: a survey," The Visual Computer, vol. 31, no. 5, pp. 497–512, 2015.
14. E. Millán and I. Rudomín, "Impostors, pseudo-instancing and image maps for GPU crowd rendering," The International Journal of Virtual Reality, vol. 6, no. 1, pp. 35–44, 2007.
15. H. Nguyen, "Chapter 2: Animated crowd rendering," in GPU Gems 3, Addison-Wesley Professional, 2007.
16. H. Park and J. Han, "Fast rendering of large crowds using GPU," in Entertainment Computing – ICEC 2008, S. M. Stevens and S. J. Saldamarco, Eds., vol. 5309 of Lecture Notes in Computer Science, pp. 197–202, Springer, Berlin, Germany, 2009.
17. K.-L. Ma, "In situ visualization at extreme scale: challenges and opportunities," IEEE Computer Graphics and Applications, vol. 29, no. 6, pp. 14–19, 2009.
18. T. Sasabe, S. Tsushima, and S. Hirai, "In-situ visualization of liquid water in an operating PEMFC by soft X-ray radiography," International Journal of Hydrogen Energy, vol. 35, no. 20, pp. 11119–11128, 2010.
19. H. Yu, C. Wang, R. W. Grout, J. H. Chen, and K.-L. Ma, "In situ visualization for large-scale combustion simulations," IEEE Computer Graphics and Applications, vol. 30, no. 3, pp. 45–57, 2010.
20. B. Hernandez, H. Perez, I. Rudomin, S. Ruiz, O. DeGyves, and L. Toledo, "Simulating and visualizing real-time crowds on GPU clusters," Computación y Sistemas, vol. 18, no. 4, pp. 651–664, 2015.
21. H. Perez, I. Rudomin, E. A. Ayguade, B. A. Hernandez, J. A. Espinosa-Oviedo, and G. Vargas-Solar, "Crowd simulation and visualization," in Proceedings of the 4th BSC Severo Ochoa Doctoral Symposium, Poster, May 2017.
22. A. Treuille, S. Cooper, and Z. Popović, "Continuum crowds," ACM Transactions on Graphics, vol. 25, no. 3, pp. 1160–1168, 2006.
23. R. Narain, A. Golas, S. Curtis, and M. C. Lin, "Aggregate dynamics for dense crowd simulation," in ACM SIGGRAPH Asia 2009 Papers, SIGGRAPH Asia '09, p. 1, Yokohama, Japan, December 2009.
24. X. Jin, J. Xu, C. C. L. Wang, S. Huang, and J. Zhang, "Interactive control of large-crowd navigation in virtual environments using vector fields," IEEE Computer Graphics and Applications, vol. 28, no. 6, pp. 37–46, 2008.
25. S. Patil, J. Van Den Berg, S. Curtis, M. C. Lin, and D. Manocha, "Directing crowd simulations using navigation fields," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 2, pp. 244–254, 2011.
26. E. Ju, M. G. Choi, M. Park, J. Lee, K. H. Lee, and S. Takahashi, "Morphable crowds," ACM Transactions on Graphics, vol. 29, no. 6, article 140, 2010.
27. S. I. Park, C. Peng, F. Quek, and Y. Cao, "A crowd modeling framework for socially plausible animation behaviors," in Motion in Games, M. Kallmann and K. Bekris, Eds., vol. 7660 of Lecture Notes in Computer Science, pp. 146–157, Springer, Berlin, Germany, 2012.
28. F. D. McKenzie, M. D. Petty, P. A. Kruszewski et al., "Integrating crowd-behavior modeling into military simulation using game technology," Simulation & Gaming, vol. 39, no. 1, pp. 10–38, 2008.
29. S. Zhou, D. Chen, W. Cai et al., "Crowd modeling and simulation technologies," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 20, no. 4, pp. 1–35, 2010.
30. Y. Zhang, B. Yin, D. Kong, and Y. Kuang, "Interactive crowd simulation on GPU," Journal of Information and Computational Science, vol. 5, no. 5, pp. 2341–2348, 2008.
31. … and … Skowron, "Multi-agent large-scale parallel crowd simulation," Procedia Computer Science, vol. 108, pp. 917–926, 2017 (ICCS 2017, 12–14 June 2017, Zurich, Switzerland).
32. I. Cleju and D. Saupe, "Evaluation of supra-threshold perceptual metrics for 3D models," in Proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization, APGV '06, pp. 41–44, ACM, New York, NY, USA, July 2006.
33. S. Dobbyn, J. Hamill, K. O'Conor, and C. O'Sullivan, "Geopostors: a real-time geometry/impostor crowd rendering system," in Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games, I3D '05, pp. 95–102, ACM, New York, NY, USA, April 2005.
34. B. Ulicny, P. d. Ciechomski, and D. Thalmann, "Crowdbrush: interactive authoring of real-time crowd scenes," in Proceedings of the Symposium on Computer Animation, R. Boulic and D. K. Pai, Eds., p. 243, The Eurographics Association, Grenoble, France, August 2004.
35. H. Hoppe, "Efficient implementation of progressive meshes," Computers & Graphics, vol. 22, no. 1, pp. 27–36, 1998.
36. M. Garland and P. S. Heckbert, "Surface simplification using quadric error metrics," in Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '97, pp. 209–216, ACM Press/Addison-Wesley, New York, NY, USA, 1997.
37. …, "… meshes," Computer Graphics Forum, vol. 28, no. 2, pp. 347–353, 2009.
38. …, in Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics, HPG '11, p. 151, ACM, New York, NY, USA, August 2011.
39. W.-W. Feng, B.-U. Kim, Y. Yu, L. Peng, and J. Hart, "Feature-preserving triangular geometry images for level-of-detail representation of static and skinned meshes," ACM Transactions on Graphics (TOG), vol. 29, no. 2, article 11, 2010.
40. D. P. Savoy, M. C. Cabral, and M. K. Zuffo, "Crowd simulation rendering for web," in Proceedings of the 20th International Conference on 3D Web Technology, Web3D '15, pp. 159–160, ACM, New York, NY, USA, June 2015.
41. F. Tecchia, C. Loscos, and Y. Chrysanthou, "Real time rendering of populated urban environments," Tech. Rep., ACM SIGGRAPH Technical Sketch, 2001.
42. J. Barczak, N. Tatarchuk, and C. Oat, "GPU-based scene management for rendering large crowds," ACM SIGGRAPH Asia Sketches, vol. 2, no. 2, 2008.
43. B. Hernández and I. Rudomin, "A rendering pipeline for real-time crowds," GPU Pro, vol. 2, pp. 369–383, 2016.
44. J. Zelsnack, "GLSL pseudo-instancing," Tech. Rep., 2004.
45. F. Carucci, "Inside geometry instancing," GPU Gems, vol. 2, pp. 47–67, 2005.
46. G. Ashraf and J. Zhou, "Hardware accelerated skin deformation for animated crowds," in Proceedings of the 13th International Conference on Multimedia Modeling – Volume Part II, MMM '07, pp. 226–237, Springer-Verlag, Berlin, Germany, 2006.
47. F. Klein, T. Spieldenner, K. Sons, and P. Slusallek, "Configurable instances of 3D models for declarative 3D in the web," in Proceedings of the Nineteenth International ACM Conference on 3D Web Technologies, pp. 71–79, ACM, New York, NY, USA, August 2014.
48. C. Peng and Y. Cao, "Parallel LOD for CAD model rendering with effective GPU memory usage," Computer-Aided Design and Applications, vol. 13, no. 2, pp. 173–183, 2016.
49. T. A. Funkhouser and C. H. Séquin, "Adaptive display algorithm for interactive frame rates during visualization of complex virtual environments," in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '93, pp. 247–254, ACM, New York, NY, USA, 1993.
50. C. Peng and Y. Cao, "A GPU-based approach for massive model rendering with frame-to-frame coherence," Computer Graphics Forum, vol. 31, no. 2, pp. 393–402, 2012.
51. N. Bell and J. Hoberock, "Thrust: a productivity-oriented library for CUDA," in GPU Computing Gems Jade Edition, Applications of GPU Computing Series, pp. 359–371, Morgan Kaufmann, Boston, Mass, USA, 2012.
52. E. Millán and I. Rudomín, "Impostors, pseudo-instancing and image maps for GPU crowd rendering," The International Journal of Virtual Reality, vol. 6, no. 1, pp. 35–44, 2007.
53. L. Toledo, O. De Gyves, and I. Rudomín, "Hierarchical level of detail for varied animated crowds," The Visual Computer, vol. 30, no. 6–8, pp. 949–961, 2014.
54. …, "… SFSM for massive crowd rendering," Advances in Multimedia, vol. 2018, pp. 1–10, 2018.

SECTION 3: GAMES SOFTWARE AND FEATURES

CHAPTER 10

Gamer’s Facial Cloning for Online Interactive Games

Abdul Sattar 1, Nicolas Stoiber 2, Renaud Seguier 1, and Gaspard Breton 2

1 SUPELEC/IETR, SCEE, Avenue de la Boulaie, 35576 Cesson-Sevigne, France
2 Orange Labs, RD/TECH, 4 rue du Clos Courtel, 35510 Cesson-Sevigne, France

ABSTRACT

Virtual illustration of a human face is essential to enhance mutual interaction in a cyber community. In this paper we propose a solution to two bottlenecks in facial analysis and synthesis for an interactive human face cloning system for non-expert users of computer games. Tactical maneuvers of the gamer make a single-camera acquisition system unsuitable for analyzing and tracking the face, due to its large lateral movements.

Citation: Abdul Sattar, Nicolas Stoiber, Renaud Seguier, Gaspard Breton, "Gamer's Facial Cloning for Online Interactive Games", International Journal of Computer Games Technology, vol. 2009, Article ID 726496, 16 pages, 2009. https://doi.org/10.1155/2009/726496.
Copyright: © 2009 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


For an improved facial analysis system, we propose to acquire facial images from multiple cameras and analyze them with a multiobjective 2.5D Active Appearance Model (MOAAM). Facial morphological dissimilarities between a human face and an avatar make facial synthesis quite complex. To successfully clone or retarget the gamer's facial expressions and gestures onto an avatar, we introduce a simple mathematical link between their appearances. The results obtained validate the efficiency, accuracy and robustness achieved.

INTRODUCTION

Over the last decade computer games have become more and more a form of interactive entertainment. Virtual representation of a character has gained the interest of both gamers and researchers. Gamers do not want merely to sit and play; instead, they want to get involved in the game to the extent of visualizing the opponent's face and interacting with him virtually. The use of virtual representations of a human face in game consoles, or the creation of avatars, has been increasing tremendously. In addition, a growing number of websites now host virtual character technologies to deliver their contents in a more natural and friendly manner. Gestures and features (e.g., eyes, nose, mouth and eyebrows) of a human face are actually the reflection of a person's inner emotional state and personality. They are also believed to play an important role in social interactions, as they give clues to a gamer's state of mind and therefore help the communication partner to sense the tone of a speech, or the meaning of a particular behavior. For these reasons, they can be identified as an essential nonverbal communication channel in game consoles. To make such a system realistic and to enhance the interaction of a gamer, the system needs to overcome two bottlenecks in facial analysis and synthesis. Facial analysis deals with face alignment and the extraction of pose, features, gestures and emotions. The excitement caused by the tactical moves of a game compels the gamer to move around in various directions. These maneuvers produce large lateral movements of a face, which make it difficult for a single-camera system to analyze and track the face. For a facial synthesis system, cloning or retargeting the features, emotions and orientation of a human face onto an avatar is again one of the main bottlenecks, mainly because of the morphological differences between a real face and an avatar. Furthermore, the large and complex face deformations caused by the expressions of a nonrigid human face make an online system computationally complex to clone or replicate them onto an avatar.


nonrigid human face makes the online system computationally complex to clone or replicate it on to an avatar. “  >>         •        system as shown in Figure 1. Our system is composed of two cameras installed on the extreme edges of the screen to acquire real-time images of the gamer. Gamer’s face is analyzed and his pose and expressions are synthesized by the system to clone or retarget his features in the form of an avatar so that the gamers can interact with each other virtually. In the  >>   " >   synthesis systems, embedded in our proposed interactive system.

Figure 1: Global system.

Face Analysis

Human faces are nonrigid objects. The flexibility of a face is well tackled with appearance-based or deformable model methods [1], which are remarkably efficient for feature extraction and alignment of frontal-view faces. As we will see in Section 2, researchers have worked out the bottlenecks of face analysis by emphasizing model generation and search methodologies. We instead emphasize increasing the amount of data to be processed with the help of multiple cameras, as shown in Figure 1. In a single-view system, face alignment cannot be accomplished when a face occludes itself during its lateral motion; for example, in a profile view only half of the face is visible. To overcome this dilemma we exploit data from another camera and associate it with the one unable to analyze the face in the first place. In a multicamera system, more than one error is to be optimized between the model and the query images from each camera. Searching for an optimum solution of a single task employing two or more distinct errors requires multiobjective optimization (MOO). Many MOO techniques exist, but to analyze the face we propose optimization of the MOAAM by


Pareto-based NSGA-II [2], due to its exploitation and exploration ability, its nondominating strategy, and its population-based approach, which provides mutual interaction of the results from multiple cameras. In this paper, we build on our previous work [3] and improve our system by obtaining new results based on a new synthetic face database.
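The core of the Pareto-based search can be sketched briefly. Each candidate AAM configuration has one fitting error per camera, and NSGA-II keeps the candidates that are non-dominated. The sketch below shows only the dominance test and front extraction (one small piece of NSGA-II, with hypothetical names), not the full genetic algorithm:

```python
# Sketch of the Pareto (non-domination) idea behind NSGA-II as used here:
# each candidate has one fitting error per camera (lower is better), and a
# candidate survives if no other candidate beats it on both errors at once.
def dominates(a, b):
    # a, b: tuples (error_cam1, error_cam2)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]
```

A candidate that fits camera 1 well but camera 2 poorly can coexist on the front with its mirror image; only candidates worse on both objectives are discarded, which is how results from the two cameras interact instead of being collapsed into a single summed error.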

Face Synthesis

In the facial synthesis system the purpose is to retarget or clone the gamer's face orientation and features onto the synthetic model so that the gamers can interact with each other virtually. Cloning and retargeting are difficult because the avatar does not have the same morphology as the gamer. Our contribution in this system is the introduction of a simple mathematical relation between their appearances, called the ATM (Appearance Transformation Matrix). To calculate it we make use of two databases, explained in Section 5.1. The first database is a large collection of human facial expressions (H-database), and the second is an optimal database of synthetic facial expressions (A-database) constructed for the avatar based on the analysis of the H-database. Our second contribution is to provide an interactive system for the gamer to build his own database and calculate a gamer-specific ATM. The generation of the gamer's database is based on our MOAAM face analysis system and is obtained by requesting the gamer to imitate a few specific and relevant facial expressions displayed on the screen. The whole system works in two phases. First, the user's oriented face is analyzed by the MOAAM, which gives its appearance and pose parameters. These appearance parameters are pose-free and belong to the frontal face of the user. They are therefore transformed by the ATM into the synthetic face's parameter space, and the synthetic face is synthesized accordingly. After that, the pose parameters obtained previously by the MOAAM analysis are used to adjust the orientation of the avatar displayed on the screen. The remainder of the paper is organized as follows. Section 2 presents previous and related work in both the facial analysis and synthesis domains. Section 3 presents the preliminary concepts of our system. Section 4 describes the work done in face analysis. Section 5 explains the system to synthesize a face.
Detailed description of our proposed interactive system is elaborated in Section 6, while Section 7 concludes the paper.
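Since the ATM is described as a simple mathematical link between the two appearance spaces, it can be sketched as a linear map fitted by least squares between corresponding H-database and A-database parameter vectors. The array shapes and function names below are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the ATM idea (hypothetical shapes and names): a linear map
# taking human appearance parameters to avatar appearance parameters,
# fitted over corresponding expressions from the H- and A-databases.
import numpy as np

def fit_atm(H, A):
    # H: (n_expressions, n_human_params), A: (n_expressions, n_avatar_params)
    # Solve H @ ATM ~= A in the least-squares sense.
    atm, *_ = np.linalg.lstsq(H, A, rcond=None)
    return atm

def retarget(atm, human_params):
    # map one pose-free human appearance vector into the avatar's space
    return human_params @ atm
```

Once fitted on corresponding expressions, any new pose-free appearance vector produced by the analysis stage can be mapped into the avatar's parameter space with a single matrix product.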


PREVIOUS AND RELATED WORK

In this section we divide the previous and related work on facial analysis and synthesis into two subsections. Our first contribution, in the facial analysis domain, is explained in detail in Section 4, and our second contribution, in the facial synthesis domain, is explained in Section 5.

Face Analysis

The Active Appearance Model (AAM) is one of the well-known deformable methods [1], efficient in feature extraction and alignment of a face. References [4, 5] performed pose prediction by using 3 AAM models, one dedicated to the frontal view and two for the profile views. References [6, 7] implemented an Active Shape Model (ASM) for face alignment, using 5 poses of each face to create a model. Reference [8] also used 3 DAMs (Direct Appearance Models) for face alignment. Reference [9] used another appearance-based architecture employing 5 view-specific template detectors to track large-range head yaw with a monocular camera. A Radial Basis Function Network interpolates the response vectors obtained from normalized correlation between the input image and the 5 template detectors. The use of more than one AAM has some disadvantages: (i) storage of the shapes and textures of the images of all the models requires an enormous amount of memory; (ii) the extensive processing of computing 3 AAMs in parallel to determine the model required for the query images eventually makes the system sluggish. Moreover, the classical AAM search methodology requires precomputed regression matrices, which become a burden on time and memory as the number of training images increases. A Coupled View AAM is used in [10] to estimate the pose. In the training phase of their CV-AAM, they used frontal and profile views of each subject. The appearance parameters of their CV-AAM have the capability to estimate the pose; however, for pose estimation they used several appearance parameters, which can be replaced by a single pose parameter in a 3D AAM. This increase in the number of parameters decreases the rapidness of the system.


3D AAM

A face can also be aligned by 3D deformable model methods, in which a set of images is annotated in 3D to model a face. Reference [11] used the 3D face model Candide along with a simple gradient descent method as the search algorithm for face tracking. Reference [12] used a 2D+3D AAM along with a fitting algorithm called the inverse compositional image alignment algorithm, which is again an extension of a gradient descent method. Reference [13] applied a 3D AAM to face tracking in a video sequence using the same ICLK (Inverse Compositional Lucas-Kanade) algorithm. Optimization by gradient descent lacks the properties of exploration and diversity, and hence cannot be used in MOO. In our previous work [14] we used a genetic algorithm instead of gradient descent for the optimization in a 2.5D AAM.

Multiple Cameras

Pose angles can be estimated by fitting the above 2D or 3D deformable models to multiple images acquired by two, three or more cameras. Reference [15] proposed a robust algorithm for fitting a 2D+3D AAM to multiple images acquired at the same instant. Their fitting methodology, instead of decomposing into three independent optimizations for the three cameras, adds all the errors together. Moreover, they used the gradient descent (ICLK: Inverse Compositional Lucas-Kanade) algorithm as the fitting method, which requires precomputing the Jacobian and Hessian matrices. Reference [16] proposed another face tracking algorithm based on Stereo Active Appearance Model (STAAM) fitting, which is an extension of the above fitting of a 2D+3D AAM to multiple images. The lack of exploration capability makes ICLK very sensitive to initialization. Reference [17] combines an adaptive appearance model based method with a 3D data-based tracker using sparse stereo data. Reference [18] proposed a model-based stereo head tracking algorithm able to track six degrees of freedom of head motion. Their face model contains 300 triangles, compared to the 113 triangles usually used in the classical AAM, ICLK-based AAMs, and so forth. Moreover, their initialization process requires user intervention. Reference [19] performed 2D head tracking for each subject from multiple cameras and obtained 3D head coordinates by triangulation. The lack of ground truth error calculations creates uncertainty about the accuracy of their system. Furthermore, a slight calibration error massively deteriorates the triangulation.


Our proposition of face alignment is based on two cameras using a 2.5D AAM optimized by the Pareto-based multiobjective genetic optimization of NSGA-II. It not only eliminates the precomputation steps but also provides both exploration and exploitation capability in the search by NSGA-II. Hence it is not sensitive to initialization.

Face Synthesis

By facial cloning, we refer to the action of transferring animation from a source (typically a human face) to a target (another human face or a synthetic one). The cloning (or retargeting) can be either direct or indirect. In direct retargeting, the purpose is to transfer the motion itself of a few selected interest markers (and optionally a texture) from one face to another [20]. The marker trajectories usually undergo a transformation that compensates for the morphological differences between the source and the target face [21–24]. This morphological adaptation is not always satisfactory, especially if the source and the target faces are very different. An interesting way to get around this difficulty is to turn to indirect retargeting. In indirect retargeting, the motion data is not transferred as such, but is first converted by a specific model to a better representation space, or parameter space, more suited for the motion transfer [25–27]. In the next paragraph we will go over some of the most common representations used for indirect retargeting. In order for a facial parameterization to be suited for retargeting applications, it must be adapted to the extraction of parameters from motion capture data, and offer an accurate description of facial deformations. Early parameterization schemes like direct parameterizations [28] or pseudomuscle systems [29–31] usually have the advantage of being simple to conceptualize and implement, but the deformations they produce are not optimal. In particular, when not operated carefully, they can generate unnatural facial deformations. Moreover, it is difficult to extract the values of the parameters from raw facial motion data (video or 3D motion capture). Muscle physics systems attempt to simulate more rigorously the mechanical behavior of the human face, and thus tend to improve the degree of realism of facial deformations [32].
Yet, as for direct parameterization, the manipulation of the muscle network is not particularly intuitive, and the extraction of muscular contractions from video or motion capture data remains an open problem [33]. A popular facial parameterization which directly originates from observation is the Facial Action Coding System (FACS) [34]. This scheme was originally meant to describe facial


expressions in a standardized way in terms of combinations of basic facial Action Units (AU). Its coherence and good practical performance made it an interesting tool on which to build performance-based animation systems. The MPEG-4 standard later extended this concept for facial animation compression purposes, introducing the Facial Animation Parameters (FAP) [35]. The FACS and MPEG-4 FAP have been used to capture and retarget static and dynamic facial expressions between human and synthetic faces [36, 37]. The disadvantage of methods based on multiple separate action units is that the natural correlation between the multiple facial actions occurring in each facial expression is ignored. Thus the animation resulting from these approaches tends to be somewhat nonhuman or robotic. More recently, studies have aimed at obtaining more natural parameterizations by performing a statistical modeling of the facial motion. This consists of gathering a collection of relevant examples (a database) and statistically detecting particular variation modes, which encompass the principal deformations observed in the database. Facial expressions are then parameterized by the contribution of these modes. When two faces have corresponding models, animations can be easily transferred by mapping the model parameters from one face to the other. Many studies have pointed out that motion data alone is not sufficient to capture the subtleties of human facial expressions, and have proposed to also capture the textural information [38]. Active Appearance Models (AAM) are frequently used for that purpose, since they encompass the motion of well-chosen geometric points as well as the pixel intensity changes occurring on the face. References [39, 40] obtain impressive results of facial expression transfer between multiple human faces based on an AAM parameterization. For this type of retargeting scheme to be successful, however, the appearance models of the source and the target must characterize the same scope of expressions. In particular, their databases must correspond.
Constructing a database of expressions for a synthetic face which matches the scope of the source human database is not trivial. Reference [41] transfers facial expressions from the AAM parameters of a human face to an avatar based on a blendshape database. The database of the avatar consists of key expressions selected from the human database; however, too few expressions are used for the virtual face to allow for detailed expression retargeting. Reference [42] later improved this approach by preprocessing the human database in order to automatically isolate individual facial actions. Each of the facial actions can then be reproduced on the avatar to construct a blendshape database.

Gamer’s Facial Cloning for Online Interactive Games


For a reasonable number of facial expressions, this approach ensures the compatibility between the source and target databases, without requiring the construction of many avatar facial examples. Yet, for a more complete scope of facial movements, the number of individual facial actions can become large, and the size of the avatar database grows accordingly. Moreover, by decomposing the expressions into individual units, the correlation between these units when performing an expression is lost in the parameterization. Reference [27] performs a linear retargeting of monocular human appearance parameters to muscle-based animation parameters. The transfer function is based on the matching of a human database of key expressions with a database of corresponding animation parameters for the synthetic face. Yet, the choice of the database key expressions is subjective in that case. Moreover, the synthetic face is animated with muscle contraction parameters, which can sometimes lead to incoherent interpolation, and prevents the system from being used with other types of animation methods. We propose a method to retarget facial expressions from a human face to a synthetic face, based on pose-free active appearance model parameters delivered by our multiple-camera system. The method analyzes a human expression database, and automatically determines which key expressions have to be constructed in the avatar database for the expression retargeting to be successful.

PRELIMINARY CONCEPTS

2.5D AAM Modeling

The 2.5D AAM of [3, 14] is constructed from (i) the 2D landmarks of the frontal view (width and height of a face model) and the x coordinates of landmarks in the profile view (depth of a face model), combined to make a 3D shape model, and (ii) the 2D texture of only the frontal view, mapped on this 3D shape. In the training phase of the 2.5D AAM, 68 points are marked manually as shown in Figure 2.

Figure 2: AAM modeling.


Computer Games Technology

All the landmarks obtained previously are resized and aligned in three dimensions using Procrustes analysis [43, 44]. The mean of these 3D landmarks is calculated, which is called the mean shape. Principal Component Analysis (PCA) is performed on these shapes to obtain shape parameters with 95% of the variation stored in them:

s_i = s̄ + Φ_s b_s, (1)

where s_i is the synthesized shape, s̄ is the mean shape, Φ_s are the eigenvectors obtained during PCA and b_s are the shape parameters. The 3D mean shape obtained in the previous step is used to extract and warp (based on the Delaunay triangulation) the frontal views of all the face images. Only two dimensions of the mean shape are used to get the 2D frontal view textures. That is why we call our model a 2.5D AAM: it is composed of landmarks represented in the 3D domain, but only a 2D texture is warped on this shape. The mean of these textures is calculated, followed by another PCA to acquire texture parameters with 95% of the variation stored in them:

g_i = ḡ + Φ_g b_g, (2)

where g_i is the synthesized texture, ḡ is the mean texture, Φ_g are the eigenvectors obtained during PCA and b_g are the texture parameters. Both sets of parameters are combined by concatenation of b_s and b_g, and a further PCA yields the appearance parameters:

[b_s; b_g] = Φ_C C, (3)

where Φ_C are the eigenvectors obtained by retaining 95% of the variation and C is the matrix of the appearance parameters, which are used to obtain the shape and texture of each face of the database. The 2.5D model can be translated as well as rotated with the help of the pose vector P:

P = (θ_x, θ_y, θ_z, t_x, t_y, Scale), (4)

where θ_x corresponds to the face rotating around the x axis (pitch: shaking the head up and down), θ_y to the face rotating around the y axis (yaw: profile views) and θ_z to the face rotating around the z axis (roll). t_x and t_y are the offset values from the supposed origin and Scale is a scalar value for the magnification of the model in all dimensions. Figure 3 shows the model rotating by changing θ_y, making left and right semi-profile views.

Figure 3: Snapshots of rotating 2.5D AAM.
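The three PCA steps of equations (1)–(3) can be sketched with a single generic helper. This is a hypothetical minimal implementation, not the authors' code; only the 95% variance threshold is taken from the text:

```python
import numpy as np

def build_pca_model(samples, variance=0.95):
    """Return the mean and the eigenvectors retaining the requested
    fraction of total variance, as done for shape, texture and
    appearance in equations (1)-(3)."""
    mean = samples.mean(axis=0)
    U, S, Vt = np.linalg.svd(samples - mean, full_matrices=False)
    explained = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(explained, variance)) + 1
    return mean, Vt[:k]              # eigenvector matrix Phi: (k, dim)

def synthesize(mean, phi, b):
    """Equation (1)/(2): sample = mean + Phi^T b."""
    return mean + phi.T @ b

def project(mean, phi, sample):
    """Recover the parameter vector b of a given sample."""
    return phi @ (sample - mean)
```

Shape parameters b_s and texture parameters b_g produced this way would then be concatenated and passed through the same helper once more to obtain the appearance parameters C.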

In segmentation, this deformed, rotated and translated shape model, obtained by varying the C and P parameters, is placed on the query image I to warp the face to the mean frontal shape. After this shape normalization we apply photometric texture normalization to overcome illumination variations. The objective is to minimize the pixel error

E = || I(C, P) − M(C) ||², (5)

where I(C, P) is the segmented image and M(C) is the model obtained from the C parameters. To choose good parameters we need an optimization method. In our proposition, both the pose parameters P and the appearance parameters C are optimized by the genetic optimization of NSGA-II.

Multiple Camera System

In a single-view system, face alignment cannot be accomplished when a face occludes itself during its lateral motion; for example, in a profile view only half of the face is visible. To overcome this dilemma we exploit data from another camera and associate it with the one unable to analyze the face in the first place. This association helps the search methodology to reduce the possibility of divergence; moreover, better outcomes of one camera can guide the other. In multiview systems, the larger the amount of processed data, the more robust the system, but efficiency deteriorates due to the high consumption of processing time and memory. In other words, a trade-off is required between robustness and efficiency. A database of facial images capable of self-assessment is desired to validate our application. The community lacks a database which involves lateral motion of a face captured by more than one camera, so in order to implement our application we developed a multiview scenario. The purpose of constructing this multiview system is to emulate the scenario of integrating two off-the-shelf webcams placed at the extreme edges of the display screen, facing towards the user, as shown in Figure 4.

Figure 4: Multiview system.

The AAMs rendered on the facial images of both webcams are blended together to represent a face model seen by a virtual camera placed in between. The results of this virtual webcam are compared with those of a third camera actually placed at the center. In other words, it is a comparison between a multicamera system using MOAAM and a single-camera system using SOAAM (Single-Objective AAM). The three cameras are placed 25 degrees apart on the boundary of a circle with a radius of 70 cm, as shown in Figure 4. The center of this circle serves as the principal point for each camera. Seven individuals from a research team were invited for the recordings, with the intention of obtaining 1218 images with lateral motion. Each individual rotates his face laterally, while images from each webcam are acquired simultaneously to obtain temporally synchronized images.

Illumination

The illumination remains steady throughout the sequence. This is accomplished by a white ambient light placed behind the central camera, as shown in Figure 4. The light we used comes with a stand and a built-in umbrella holder to give extra flexibility. By adjusting the umbrella's position we avoid a bright spot on the face. This setup works well for taking facial images with webcams.

Camera Calibration

Calibration is performed with a publicly available toolbox [45]. A simple planar checkerboard is placed in front of the cameras and a sequence of images is taken to calculate the calibration parameters. With the help of the toolbox, the four corners of the checkerboard are extracted and calibration is performed with respect to the grid of the checkerboard. The toolbox calculates the intrinsic parameters (focal length, principal point, distortion and skew) and the extrinsic parameters (rotation vector and translation vector) for each camera. With the help of these parameters, all the facial images of these cameras are calibrated. Figure 5 shows some images of the test database acquired from the three webcams. A similar scenario is emulated in the software MAYA for a video of synthetic faces. The synthetic face database does not contain camera calibration errors, hence it is helpful to analyze results free of calibration errors. Figure 6 shows some examples of the test database of synthetic faces (some of the models were readily available, while the remaining face models were made in a software named “Facial Studio”; all of them were imported into MAYA for rendering the synthetic facial images). Some of the facial images of M2VTS [46] (the learning database) are also shown in Figure 7.
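To make the role of these parameters concrete, the sketch below projects a 3D point through an assumed pinhole camera. The intrinsic values are illustrative only, not the calibration results of the paper, and lens distortion (which the toolbox also estimates) is omitted:

```python
import numpy as np

def project_points(X, K, R, t):
    """Project 3D points X (n, 3) into a calibrated camera:
    x ~ K (R X + t), followed by perspective division."""
    Xc = (R @ X.T).T + t              # world -> camera frame (extrinsics)
    x = (K @ Xc.T).T                  # apply intrinsics
    return x[:, :2] / x[:, 2:3]

# Assumed intrinsics: focal lengths and principal point, zero skew.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```

A point on the camera's optical axis projects onto the principal point (320, 240), which is a quick sanity check for any calibration result.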

Figure 5: Test database images: Same pose from 3 webcams.


Figure 6: Test database synthetic images.

Figure 7: Learning database images.

FACE ANALYSIS

The main objective of our application is to clone a real human face in the form of an avatar. For such an application, face analysis plays an important role in face synthesis: the more efficient the analysis is, the more accurate the facial synthesis is likely to be. To obtain an efficient and robust face analysis system, we acquire a human face with two cameras and analyze it with an appearance-based morphable model, the 2.5D AAM.

MOAAM

In a single-view system, a single error between the model and the query image is optimized. In a multiview system, however, more than one error is to be optimized, between the model and the query images from each camera. AAM fitting on multiple views is shown in Figure 8. In multiview AAM, the model is rendered on both images with the same C parameters. The P parameters also remain the same, except that a yaw angle offset (θ_offset) is introduced between the models rendered on the two images. After segmentation, the pixel errors between the images and the models are calculated. The objective is to minimize the pixel error of (5) obtained from each of the two cameras:

Figure 8: Fitting of MOAAM.


E_1 = || I_1(C, P_1) − M(C) ||²,  E_2 = || I_2(C, P_2) − M(C) ||², (6)

where P_1 and P_2 are linked by an offset of the yaw angle. In order to optimize both errors we propose Pareto-based NSGA-II multiobjective optimization (MOO).

NSGA-II

The Genetic Algorithm is a well-known search technique. We have used its multiobjective version, the Nondominated Sorting Genetic Algorithm (NSGA-II) proposed by [2], to optimize the appearance parameters C and pose parameters P. The target is to find the best possible values of these parameters, giving minimum pixel errors between the model and the query images of both cameras. In this optimization technique each parameter is considered as a gene, and all the genes of C and P are concatenated to form a chromosome. A population of a particular number of chromosomes is randomly created. Pixel errors (fitness) between the query images and the model (represented by each chromosome) are calculated. Tournament selection is applied to select parents from the population to undergo reproduction. Two-point crossover and Gaussian mutation are implemented to reproduce the next generation of chromosomes. Selection and reproduction are based upon the nondominating sort. The objective is to minimize both pixel errors, hence the nondominating scenario is implemented by Pareto optimization.
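The genetic operators named above can be sketched as follows. This is a hypothetical minimal version: the mutation rate, noise scale and random seed are illustrative, and the fitness evaluation against the segmented images is left abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

def tournament(pop, fitness, k=2):
    """Tournament selection: among k random chromosomes, return the one
    with the smallest pixel error (lower fitness is better)."""
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[min(idx, key=lambda i: fitness[i])]

def two_point_crossover(a, b):
    """Copy a, then replace the genes between two random cut points
    with the corresponding genes of b."""
    i, j = sorted(rng.choice(len(a) + 1, size=2, replace=False))
    child = a.copy()
    child[i:j] = b[i:j]
    return child

def gaussian_mutation(chrom, sigma=0.05, rate=0.1):
    """Perturb each gene with probability `rate` by Gaussian noise."""
    mask = rng.random(len(chrom)) < rate
    return chrom + mask * rng.normal(0.0, sigma, size=len(chrom))
```

In the paper's setting each chromosome concatenates the genes of C and P, and its fitness is the pair of pixel errors of equation (6).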

Pareto Fronts

The fitting of the AAM to image data is performed by minimization of the error function. In MOO several error functions are to be minimized, hence the mutual relation of these errors points towards the appropriate MOO method. Dominating errors can be dealt with by non-Pareto-based MOO, but in this scenario both cameras serve the same purpose of acquiring images of a face. Hence a nondominating scenario is to be implemented, with the desired Pareto optimal solution. The basic idea is to find the set of solutions in the population that are Pareto nondominated by the rest of the population, as shown in Figure 9(a). These solutions are assigned the highest rank and are removed from further assignment of the ranks. Similarly, the remaining population undergoes the same process of ranking until the population is suitably ranked in the form of Pareto fronts, as shown in Figure 9(b). In this process some diversity is required in the solutions to avoid convergence to a single point on the front. This diversity can be achieved by the exploration quality of the Genetic Algorithm.
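The ranking into fronts can be sketched as below. This is the generic textbook formulation of nondominated sorting, not the authors' implementation; for the two pixel errors of equation (6), each point would be the pair (E_1, E_2) of one chromosome:

```python
def dominates(a, b):
    """True if solution a is no worse than b in every objective and
    strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(points):
    """Rank solutions into successive nondominated fronts: front 0 is
    dominated by no one; it is removed, and the process repeats on the
    remaining population."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```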

Figure 9: Pareto fronts.


Switching from MOO to SOO

Processing data from two cameras is meaningful as long as both are relevant. If, with respect to one camera, the face is oriented in such a way that it occludes itself, there is no need to process the data from this camera. Consequently, in order to avoid wasted processing, we divide the fields of view of both cameras into three regions R1, R2 and R3, as shown in Figure 4. To determine the region of the face orientation, Pareto-based NSGA-II is applied to evolve populations for a small number of generations. After each generation, the histogram of the genes of the entire population representing the yaw of the face is observed. This histogram follows one of the three curves of Figure 10. Histogram curve-1 corresponds to region-1, where the information from both cameras is meaningful and the data from neither can be neglected. Histogram curve-2 and curve-3 correspond to region-2 and region-3 respectively, where the information from one of the cameras is sufficient to localize the facial features and the other camera can be discarded. After a few generations, the current population decides whether to stay in MOO or to switch to single-objective optimization (SOO). Mathematically, let us suppose Pop is a set of population given as

Figure 10: Histogram of chromosomes versus head orientation.

Pop = {X_1, X_2, ..., X_N},  X_i = (x_i1, x_i2, ..., x_iM), (7)

where N is the number of chromosomes X and M is the number of genes of each chromosome. We now observe the kth gene of each chromosome, which represents the yaw angle of the model. From the histogram of this gene, we count the chromosomes placing the face in region-1:

N_R1 = #{ i : |x_ik| ≤ θ }, (8)

where θ is the threshold angle, equal to half of the angle between the two cameras. The ratio η of the number of chromosomes representing the face position in region-1 to the total number of chromosomes is then

η = N_R1 / N. (9)

The value of η decides whether to stay in MOO and utilize both cameras, or to switch to single-camera mode.
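The switching rule can be sketched as follows. This is a hypothetical reading: the region test |yaw| ≤ θ and the 50% threshold on η are illustrative assumptions, not values stated in the paper:

```python
import numpy as np

def moo_or_soo(population, yaw_gene, theta, eta_threshold=0.5):
    """Decide whether to keep optimizing with both cameras (MOO) or to
    fall back to the single best-placed camera (SOO), based on the
    fraction of chromosomes whose yaw gene falls in region R1."""
    yaw = population[:, yaw_gene]
    eta = np.mean(np.abs(yaw) <= theta)   # the ratio of equation (9)
    return "MOO" if eta >= eta_threshold else "SOO"
```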

MOAAM Fitting

For MOAAM (also called MVAAM: Multiview AAM) fitting we refer readers to our previous work [3], which gives a stepwise detailed description of MOAAM fitting on a query image. It includes the steps of initialization, reproduction, segmentation, fitness calculation, nondominating sort, replacement and switching from MOO to SOO. In our previous work we highlighted the effects of slight errors caused by the camera calibration and the ground truth points for a real face database. The camera calibration problem arises when we compare MOAAM results to SOAAM. As already mentioned in Section 3.2, the models obtained from the two cameras placed at the extreme edges of the display are blended together and compared with the one obtained from the central camera. This comparison is highly prone to the calibration errors of all three cameras, whereas the results from a single camera (SOAAM) do not experience any calibration problem. In this paper we have managed to overcome this dilemma by building a synthetic face database of several individuals. The scenario shown in Figure 4 is emulated, in the software MAYA, by placing different synthetic characters in between two virtual cameras, each calibrated and located 50° apart. A third camera is placed in between these two cameras for the comparison of the results of a single camera and of the double camera. These cameras have all the characteristics of an actual camera, with the advantage of perfect (100%) calibration. Ground truth points are the exact localization of the face orientation and features (nose, eyes and mouth). In a real face database there is a possibility of slight errors in the ground truth points, since they are marked manually on each facial feature of each image. In the synthetic facial images this problem is solved by obtaining these locations automatically from the 3D face models, so the synthetic database allows a comparison of MOAAM and SOAAM free of calibration and annotation errors.

Experimental Results

We performed simulations using a 64×64 pixel AAM, by annotating 37 subjects of the publicly available M2VTS database [46]. For the testing database we used both the real face database and the synthetic face database. Together these databases contain 2418 facial images, of 7 real and 10 synthetic faces, from each camera. Among the 2418, 806 images are considered to be taken from the central camera to validate our results. In the testing phase, face alignment is performed on all the views from the left profile to the right profile. Two sets of experiments are performed: SOAAM and MOAAM.

Single-Objective AAM

In SOAAM, the AAM is rendered on the image sequence from the central camera, which is placed to highlight the benefit of MOAAM. As far as optimization is concerned, SOAAM is optimized by classical GA optimization. The same selection and reproduction criteria of NSGA-II are implemented in the GA, in order to give a fair comparison.

Multiobjective AAM

In MOAAM, the same AAM is rendered on the face image sequences from the other two cameras, which are actually part of our multiview system. Localization of the face on the two images from each camera is performed by the Pareto-based MOO of NSGA-II.


The best chromosomes obtained at the end of MOAAM and SOAAM contain the best appearance and pose parameters for a given face. Features like the eyes, nose and mouth can be extracted from these shapes, as shown in Figure 11. The first three rows correspond to synthetic faces while the remaining rows represent real human faces. It can be seen from the images that, as the face moves laterally, the feature localization is far better with two cameras (MOAAM) than with the single central camera (SOAAM).

Figure 11: (a) and (b) Comparison of SOAAM and MOAAM (operating in R2 or R3). (c) and (d) Comparison of SOAAM and MOAAM (operating in R1).


Figure 12(a) shows the percentage of aligned synthetic images versus the mean ground truth error (GTE_mean) of the facial features (eyes, nose and mouth). GTE_mean is the mean error obtained by comparing the MOAAM-analyzed locations and the manually marked locations of all the facial features of a facial image. The error is normalized by D_eye, which corresponds to the distance between the eyes; that is, an error of 1 corresponds to a mean error equal to the distance between the eyes. To eliminate the vagueness of the ground truth markings we consider results starting from 0.1 of D_eye, which means that any two algorithms having a GTE_mean less than 0.1 are considered to be equally accurate, while results below the maximum threshold of 0.25 of D_eye are considered to be well converged. Figure 12(a) depicts that our MOAAM system outperforms SOAAM: with MOAAM, 69% of the images are aligned with a GTE_mean less than 0.2 of D_eye, whereas SOAAM aligned 41% of the total images. Similarly, Figure 12(b) shows the results of the experiments on real faces (previous work): MOAAM 68% and SOAAM 50%.
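The two error measures can be computed per image as follows. This is a sketch; the convention that the first two annotated points are the eye centres is an assumption made here for the normalization:

```python
import numpy as np

def ground_truth_errors(predicted, annotated):
    """GTE_mean and GTE_max for one image, normalized by the inter-eye
    distance D_eye. predicted/annotated: (n, 2) arrays of feature
    locations; rows 0 and 1 are taken to be the two eye centres."""
    d_eye = np.linalg.norm(annotated[0] - annotated[1])
    err = np.linalg.norm(predicted - annotated, axis=1) / d_eye
    return err.mean(), err.max()
```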

Figure 12: (a) Comparison of GTE_mean for MOAAM and SOAAM (synthetic face images). (b) Comparison of GTE_mean for MOAAM and SOAAM (webcam images).

Figures 13(a) and 13(b) illustrate the comparison of both algorithms with respect to the normalized maximum ground truth error (GTE_max), for the synthetic and real facial image databases respectively. GTE_max represents the worst localization of a facial feature (eyes, nose or mouth), normalized by D_eye. Figure 13(a) depicts that MOAAM aligned 50% and SOAAM 28% of the synthetic facial images with a GTE_max less than 0.2 of D_eye, whereas Figure 13(b) shows that MOAAM aligned 30% and SOAAM 10% of the real faces.

Figure 13: (a) Comparison of GTE_max for MOAAM and SOAAM (synthetic face images). (b) Comparison of GTE_max for MOAAM and SOAAM (webcam images).

As far as time consumption is concerned, at worst MOAAM requires twice the processing time of SOAAM; in practice, however, switching from MOO to SOO and discarding the data from one camera by NSGA-II reduces this factor of two. SOAAM required 1600 warps, whereas MOAAM required 2700 warps instead of 3200. Each warp takes 90% of the time consumed by an iteration, that is, 0.03 milliseconds on a Pentium-IV 3.2 GHz. Therefore each facial image requires 90 milliseconds for the analysis without any prior knowledge of the pose; in tracking mode, however, we can reduce this time by employing the pose parameters of the previous frames, which eventually reduces the number of warps (iterations). Moreover, facial analysis can be performed by either a generic or a person-specific MOAAM. In generic MOAAM the query face is totally unknown, and to analyze it we need a vast learning database, whereas person-specific MOAAM is trained on facial images of the same individual who will be analyzed by the system. Person-specific MOAAM therefore gives better results than generic MOAAM.


FACE SYNTHESIS

The goal of our application is to clone the gamer's facial expressions onto an avatar. The cloning consists of transferring the facial expressions from a source (typically a human face) to a target (another human face or a synthetic one). The avatar's facial deformations then originate from real human movements (performance-based facial animation), which usually look more natural than manually designed facial animation. Moreover, since the expressions of the gamer are captured and transferred in real time, the facial animation of the avatar becomes part of the gaming experience, and significantly improves the interactivity of the game compared to prerecorded animation sequences.

System Description

In this section, we present a general description of a system that provides an efficient parameterization of an avatar's face for the production of emotional facial expressions, relying on captured human facial data. Here we make use of the two databases of our previous work [47]. An illustration of the system and its applications is displayed in Figure 14.

Figure 14: Overview of the face synthesis system. (Colors represent different types of expressions and are shown for clarity of display only.)

H-Database

The entry point of the system is a database of approximately 4000 facial images of emotional expressions (H-database). These images were acquired from an actor performing facial expressions without rigid head motion. The database was constructed to contain an important quantity of dynamic natural expressions, both extreme and subtle, categorical and mixed. A crucial aspect of the analysis is that the captured expressions do not carry any emotional label. The facial images allow us to model the deformation of the face according to the scheme of Section 3.1. The AAM procedure delivers a reduced set of parameters which represent the principal variation patterns detected on the face. Every facial expression can be projected onto this parameter space, referred to as the appearance space (Figure 14 presents symbolic 3D representations of this space, although it may contain 15 to 20 dimensions). Note that this process is invertible: it is always possible to project a point of the appearance space back to a facial configuration, and thus synthesize the corresponding facial expression as a facial image.

A-Database

A reduced parameter space similar to the one described above can be constructed for the synthetic face, provided that a database of facial expressions for the virtual character is available (A-database). In this section we show how to identify a reduced set of facial configurations from the human database so that a coherent appearance space is constructed for the avatar (typically 25 to 30 expressions). The purpose of this avatar database creation scheme is that the appearance spaces of the human and the synthetic face have the same semantic meaning, and model the same information. It is then easy to construct a mathematical link between them (the ATM, as illustrated in Figure 14). The appearance space for the synthetic face is built through statistical modeling, similarly to the human appearance space (Section 5.1.1). For real faces, thousands of database samples can be produced with a video camera and a feature-tracking algorithm, whereas the elements of an equivalent database for a synthetic face must be designed manually, and are not easy to obtain. It is thus desirable to keep the number of required samples small. Our idea for building the A-database is to use the human database, and extract the expressions that have an important impact on the formation of the appearance space. Indeed, many samples from the human database bring redundant information to the modeling process, and are therefore not essential in the A-database. Following this logic, we are able to reduce the set of necessary expressions to a reasonable size. Practically, we select the extreme elements of the database, meaning the elements presenting the maximal variations with respect to a neutral facial expression. In terms of parameter space, these elements are located on the convex hull of the point cloud formed by all database elements, and are detected using [48]. These samples are responsible for shaping the meaningful variance of the database and thus encompass the major part of its richness. By manually reproducing these selected expressions on the face of the virtual character, we can build its very own appearance model according to the method presented in Section 3.1. Our studies have shown that 25–30 expressions are enough to construct a complete appearance space. For the human database, we used more than 4000 elements; using the convex hull procedure we have been able to identify 25–30 representatives for the reduced database (see Figure 15), with a small reconstruction error. Such a reduced database can be constructed for any synthetic character, and for any human face, based on the same extracted elements (see the construction of the gamer's database in Section 6.2). Having to design several facial expressions manually may appear as a limitation of our method, yet it can also be seen as an advantage: our system does not rely on any particular facial control method (muscle systems, blendshapes, etc.). Any animation technique can be used, as long as the selected expressions can be reproduced on the synthetic face.
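A simplified version of this selection step can be sketched as follows. Instead of the full convex-hull test of [48], this hypothetical variant just keeps the samples that deviate most from the neutral expression in appearance-parameter space:

```python
import numpy as np

def select_extreme_expressions(params, neutral, n_keep=25):
    """Return the indices of the n_keep database elements showing the
    largest deviation from the neutral expression: an approximation of
    the convex-hull extremes that shape the appearance space."""
    distances = np.linalg.norm(params - neutral, axis=1)
    return list(np.argsort(distances)[-n_keep:][::-1])
```

The expressions indexed this way would then be reproduced manually on the avatar to form the A-database.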

Figure 15: Human expressions selected on the convex hull of the point cloud formed by all database elements (top). Avatar's expressions corresponding to each human expression (bottom).

This selection procedure guarantees the correspondence between the two databases, and thus between the two appearance spaces. In the next sections, we explain how this correspondence is exploited to retarget captured human motion data.


Appearance Transformation Matrix (ATM)

The ideas developed in the previous section have led to the construction of analogous appearance spaces for the human face and the synthetic face. Both spaces are connected, since the construction of the avatar appearance space is based on elements replicated from the human database. It follows that we have a correspondence between points in the human appearance space and points in the avatar space. We propose to use this sparse correspondence to construct an analytical link between both spaces. This link will then be used to transform human appearance parameters C_H into avatar appearance parameters C_A, and thus clone a human facial expression on the synthetic face. It can be noted that the AAM modeling scheme we use consists of the linear equations (1), (2) and (3). Linear variations and combinations are thus preserved by the modeling steps, and we wish to maintain this linear chain in the retargeting process. Therefore, as in other approaches like [27], we apply a simple linear mapping on the parameters of the appearance spaces:

C_Aᵀ = C_Hᵀ A_0, (10)

where m and n are the numbers of appearance parameters of the human and synthetic appearance spaces respectively, while k is the number of expressions stored in the database. Hence if C_H is an m × k matrix and C_A is an n × k matrix, A_0 will be of size m × n. The matrix A_0 is obtained through linear regression on the set of corresponding points. Depending on the dimensionality of the appearance spaces, we use a regularized regression [49] to cope with a possible underdetermination of the regression problem. Retargeting results are illustrated by a few snapshots in Figure 16. The experimental details are given in Table 1. Complete sequences of expression retargeting can also be found in the accompanying video.
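Given the sparse correspondence between the two appearance spaces, the ATM can be fitted by (optionally regularized) least squares. This sketch uses the transposed convention C_A = A0 C_H, with A0 of size n × m rather than the m × n orientation of the text; the small ridge term handles underdetermined cases:

```python
import numpy as np

def fit_atm(C_H, C_A, ridge=0.0):
    """Fit A0 so that C_A ~ A0 @ C_H in the least-squares sense.
    C_H: (m, k) human parameters, C_A: (n, k) avatar parameters,
    with corresponding expressions in matching columns."""
    m = C_H.shape[0]
    gram = C_H @ C_H.T + ridge * np.eye(m)   # ridge > 0 regularizes
    return C_A @ C_H.T @ np.linalg.inv(gram)

def clone_expression(A0, c_h):
    """Map one human appearance vector to avatar parameters."""
    return A0 @ c_h
```

On exact linear data the regression recovers the underlying mapping, and cloning one expression is a single matrix–vector product, which is what later makes the online stage real-time.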


Table 1: Experimental details. (Avatar1 and Avatar2 are shown in the middle and right columns of Figure 16, respectively.)

Figure 16: Examples of cloning of facial expressions. The expressions captured on the human face (left) are successfully transferred to the faces of avatars (middle and right). First row shows neutral faces.

INTERACTIVE SYSTEM

Our proposition is a complete human–machine interactive system for a game console. Figure 17 gives a detailed description of our system, this time viewed from the perspective of the stages of the global system. The system is composed of three stages.


Figure 17: Block diagram of the interactive system.

Avatar’s Face Modeling

In this stage, we make use of the procedure of Section 5.1.2 to obtain a database of simple and realistic facial expressions of an avatar, called the A-database. The visual aspect of the synthetic character is chosen by the user: different classes of synthetic faces are available, representing different ages, races, genders, physiques, features and so forth. Once the class of the avatar is chosen, the required facial expressions, already stored in the system, are generated for this face (from the expressions identified in Section 5.1.2). Note that the system's user has the possibility to edit the suggested facial expressions to personalize the look of their avatar, by manually clicking and moving the vertices. Ultimately the A-database contains, for the user-chosen character, the expressions necessary to build its appearance model according to the method presented in Section 3.1. This procedure delivers a reduced set of parameters which represent the principal variation patterns observed on the synthetic face (C_A). Manual marking of the landmarks on the synthetic face is not needed, as the synthetic face is generated by the system, which already knows the location of each vertex.

Gamer’s Face Modeling

The training procedure is simple and unproblematic. The essence of this phase is to make the system learn the facial deformations of the gamer's face, so that it can replicate the localization of features, emotions and gestures on the synthetic face. The construction of the gamer's database is similar to that of the avatar: the gamer has to mimic the expressions that have an important impact on the formation of the appearance space (identified in Section 5.1.2). In practice, the required facial expressions are displayed serially for the user to imitate. The facial images are captured and analyzed by generic MOAAM, as explained in Section 4, to automatically localize the facial features. Since the user is unknown to the system, a generic MOAAM, containing an AAM model based on the M2VTS facial image database, is used. The features localized by MOAAM are displayed on the screen for the user to fine-tune the location of each feature. Finally, all the facial images of the gamer are generated, each corresponding to a synthetic facial expression of the A-database. From these selected facial expressions of the gamer, we build the gamer's own appearance model, along with its reduced appearance parameters C_G, according to the method presented in Section 3.1. With C_G and C_A (obtained in the previous section) we can calculate the ATM mathematically (see Section 5.1.3). This ATM is gamer-dependent and can be used for cloning only for the particular gamer who was involved in generating it in the first place. The time cost of this phase is tabulated in Table 2.

Table 2: Time cost of the gamer’s face modeling phase.

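Section 5.1.3 (not included in this excerpt) derives the ATM mathematically. One standard way to obtain such a transform from the paired training expressions is a least-squares fit between corresponding rows of CG and CA; the sketch below reflects that assumption, not the chapter's exact derivation, and the function name is ours.

```python
import numpy as np

def compute_atm(CG, CA):
    """Least-squares fit of an Appearance Transformation Matrix so that
    CG @ ATM approximates CA over the paired training expressions.

    CG: (n_expressions, g) gamer appearance parameters.
    CA: (n_expressions, a) avatar appearance parameters.
    Returns ATM with shape (g, a).
    """
    # Each row pair (CG[i], CA[i]) corresponds to the same imitated
    # expression, as produced by the training phase described above.
    ATM, *_ = np.linalg.lstsq(CG, CA, rcond=None)
    return ATM
```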

Online Cloning

From the previous two sections we obtained an ATM capable of transforming appearance parameters from the gamer’s appearance space to the avatar’s appearance space. In online cloning, this transformation involves only a matrix multiplication of the gamer’s real-time appearance parameters CG with the ATM to obtain the avatar’s appearance parameters CA. This analytically simple framework enables real-time performance. The virtual illustration of the gamer is cloned in the form of an avatar synthesized from CA and ultimately

Gamer’s Facial Cloning for Online Interactive Games


displayed on the screen, as shown in Figure 17. The appearance parameters of the gamer are acquired in real time by our multiple-camera facial analysis system. Tactical moves of the game cause the gamer to move considerably in different directions, yet the retargeting scheme of Section 5.1 has been designed for stable heads. Employing multiple cameras resolves this problem. Two cameras placed at the extreme edges of the screen acquire real-time images of the gamer, which are analysed with the gamer’s MOAAM. This MOAAM was built in the previous section and contains all the pose-free facial variations of the gamer. The user’s oriented face is analysed by the MOAAM to give its appearance and pose parameters. These appearance parameters are pose-free and belong to the frontal face of the user. They are transformed by the ATM into the synthetic face’s parameter space, and the synthetic face is synthesized from them. The pose parameters obtained by the MOAAM analysis are then used to adjust the orientation of the avatar being displayed on the screen. As shown in the cloning section of Figure 17, the appearance parameters undergo transformation while the pose parameters are directly reproduced on the avatar face, to clone both the gamer’s expressions and gestures. The time cost of each block, for a Pentium-IV 3.2 GHz platform, is tabulated in Table 3. The linearity of the AAM scheme allows the reproduction of both extreme and intermediate facial expressions and movements, with low computing requirements.

Table 3: Processing time for online cloning
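Per frame, the online stage therefore reduces to one matrix product plus a pass-through of the pose parameters. A minimal sketch follows; the function and variable names are illustrative, and the analysis and rendering steps surrounding this call are omitted.

```python
import numpy as np

def clone_frame(c_gamer, pose, ATM):
    """One online-cloning step.

    c_gamer: (g,) pose-free appearance parameters from MOAAM analysis.
    pose:    head-pose parameters, reproduced on the avatar unchanged.
    ATM:     (g, a) appearance transformation matrix of this gamer.
    Returns the avatar's appearance parameters and the pose to apply.
    """
    c_avatar = c_gamer @ ATM   # appearance: transformed into avatar space
    return c_avatar, pose      # pose: copied verbatim onto the avatar
```

The per-frame cost is a single (g × a) matrix multiply, which is what makes the reported near-real-time rate plausible.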

CONCLUSIONS

In this paper we proposed a solution to two bottlenecks of facial analysis and synthesis in an interactive human face cloning system for nonexpert users of computer games. Facial emotions and poses of gamers are cloned to bring their realistic behavior to virtual characters. The bottlenecks of


analyzing the human face and synthesizing it in the form of an avatar are dealt with. Large lateral movements of a gamer make it impossible to analyze and track his face with a single camera. To overcome this dilemma we exploit data from another camera and associate it with the one unable to analyze at the

moment. This multiview approach offers the possibility of managing the large amount of data from multiple cameras, and with the wide availability of inexpensive webcams the multiview system is now as practical as a single-view one. To analyze the acquired multiview facial images we proposed the multiobjective 2.5D AAM (MOAAM), optimized by the Pareto-based NSGA-II. We have presented new results (Section 4.2) because of the problem of calibration and ground truth points in our previous work. Our MOAAM approach is accurate, robust and capable of extracting the pose, features and gestures even with large lateral movements of a face. As far as facial synthesis is concerned, cloning human facial movements onto an avatar is not trivial due to their facial morphological differences. We proposed a new technique for calculating the mathematical semantic correspondence between the appearance parameters of the human and the avatar (the ATM matrix). We calculated this ATM for the gamer to be able to clone his emotions onto the avatar in real time. The interactive system we have presented is complete and easy to use. We have shown the results of facial feature and pose extraction and how we synthesize these facial details on an avatar by calculating the ATM with the gamer’s help. Although the construction and training of the gamer’s and avatar’s databases is a long and tedious job, it only has to be done once for each new gamer. Our system is capable of performing online cloning of each frame in 64.015 milliseconds (i.e., about 15 frames per second), making it nearly a real-time system.
For the moment, this approach is limited to an interactive system for gamers, but it would be interesting to extend it to larger events, like conferences and meetings, with multiple cameras installed in different corners of the room and the result displayed on a large screen. This approach is also useful for applications where the channel bandwidth is limited, since only a small amount of appearance and pose parameters is transmitted from the human face to the avatar for face synthesis.


ACKNOWLEDGMENTS

Other research [28] is based on a multimedia computer game called Emotional Trainer, which is used to teach those with Autistic Spectrum Disorders (ASD) to recognise and predict human emotional responses. The above works indicate that computer training programs could be an effective tool for cognitive learning development.

METHODS AND MATERIALS

This work suggests the development of comparative quantitative measures for the assessment of cognitive learning, rather than absolute measures. To achieve this, a specially designed Graphic Test (COGNITO) was used on 14 individuals at different age groups (without learning disability). In the test each graphic page includes two optional responding factors: 1)

user-centred attitude (where the participant acts dominantly by his/her own perspective, neglecting the game hero), 2) game-hero-centred attitude (the participant acts through the game hero, putting him/herself in its role). This “two-mode-option” design is inspired by the fundamental idea that one of the measures will distinguish between normal and Autistic individuals [29]. But at this stage the test is only used for cognitive learning behavioural analysis rather than any Autistic diagnosis, assuming that the game-hero-centred attitude is a key point which affects the learning action. In the first option of the test, the participant takes a decision of self-centred thinking (scored as “1”), whereas in the second option he/she responds to the test from the game-hero’s point of view (scored as “0”). From beginning to end of the test, he/she is expected to perform a cognitive learning action progressively by observing each sequential graphic answer page followed by each graphic test (inquiry) page, as shown in Figure 1.

Data Acquisition Method

Adequate and reliable data acquisition, which should not cause any distraction or anxiety for the user, depends on dedicated utilities and a friendly interactive

A Quantisation of Cognitive Learning Process by Computer Graphics...


environment. The commonly used traditional environments for data collection from individuals/patients are labs, hospitals or clinics, where the patients or participants (especially children) cannot feel comfortable or objective enough to provide reliable characteristic cognitive information. These factors would also have adverse effects on healthy cognitive learning activities. For these reasons, the data collection was carried out using a computer graphics environment (COGNITO), a fifteen-page graphic test whose samples are shown in Figure 1. The communication with the potential participants and the data collection were carried out by one of the authors of this study, who was also involved in the recruitment of participants for the test. The range of ages of the participants was 12 - 42. None of them declared any learning difficulties and all were computer users. No particular computer utility setting was imposed and all did the test in their own environment.

Graphical Quantisation

Since it is very difficult to find an absolute quantitative measure applicable to all other cases (e.g. to assess the degree of cognitive learning), the method suggested here rather makes a comparative analysis between the learning progress of the participants. Hence the comparative results only belong to the graphic test used in the experiments. The calculated degree of cognitive learning (CL) for each participant is rated between the worst case of CL (a) and the ideal case of CL (b) with regard to the (continuous) progressive learning process. Here the two extreme cases are:

The idea of the worst case (a) is based on a characteristic sign of learning disability (e.g. Autism), where the patient exhibits repetitive behaviour in his/her daily life activities [30] [31]. In some cases of the experiment, the participants exhibited a “reverse” form of cognitive learning (a progression from 1 to 0), which may have several reasons, such as a lack of objectivity (if the test target is guessed in advance), the diversity of personal characteristics, etc. In this case a reverse cognitive learning by the participant can be taken into account positively and described as a reverse form of the learning process. This sort of learning may easily be normalized by reversing the test format. Two index measures are therefore defined to quantify the two forms of cognitive learning: normal progressive learning (Ilearning) and reverse progressive learning (Ireverse):


(1)

(2)

In Equations (1) and (2) the parameters refer to the user’s graphic response vector Vg (e.g. 0, 1, 1, 0, ···, 0). In each vector:

k = total number of transitions between 0 and 1 in Vg (e.g. k = 2 for Vg = {0, 0, 1, 1, 1, 0, 0})

n = total number of elements in Vg (n = 15)

m = index of an element in Vg

i = Boolean value of the m-th element of Vg (0 or 1)

As a quantitative measure (derived from Formulas (1) and (2)), the “straight line”

model is used as a reference, where an ideal progressive cognitive learning is assumed to be a linear model (Figure 2). In addition to the index measure of learning, the Chi-square test and Correlation measures are also used to assess the significance of the test results.
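Equations (1) and (2) are not legible in this reproduction, but the quantities they are built from are stated above. The sketch below computes the transition count k and, as an illustrative stand-in for the index itself, the deviation of a response vector from the ideal straight line L = m/n of Figure 2; both function names are ours.

```python
def transitions(vg):
    """k: total number of 0<->1 transitions in a response vector Vg."""
    return sum(1 for a, b in zip(vg, vg[1:]) if a != b)

def deviation_from_ideal(vg):
    """Mean absolute deviation of the Boolean responses from the ideal
    straight-line PCL, L(m) = m/n (Figure 2). Smaller values mean the
    participant tracks the ideal learning line more closely.
    NOTE: an illustrative stand-in, not Equations (1)-(2), which are
    not legible in this reproduction of the chapter.
    """
    n = len(vg)
    return sum(abs(i - m / n) for m, i in enumerate(vg, start=1)) / n
```

A progressive response pattern (0s then 1s) deviates less from the line than the repetitive worst-case pattern (0, 1, 0, 1, ···).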

Data Set

The data set contains the Boolean values of the participants’ responses to the graphical test, which may be described as a progressive cognitive learning (PCL) graphical test. The test includes 15 graphical inquiry pages and each test page is followed by an answer page, which is expected to lead to a progressive cognitive learning action as the participant follows the whole test from beginning to end. The ideal learning form in the model is shown in Figure 2 (represented by the straight line), together with one participant’s PCL (adapted for comparison and shown by the dashed line) produced from the graphic test results. In the figure the ideal PCL refers to the ideal case in Table 1 (ILearning = 5.4). The test was used on 14 participants from different


age groups.

Graphic Cognitive Test (COGNITO)

COGNITO is a specially designed fifteen-page graphic test; see the samples shown in Figure 1. The computer graphics are specifically designed so that the graphic agents (characters) in the test domain interact with the user and stimulate him/her to make cognitive choices which decode his/her cognitive learning characteristics.

Figure 1: COGNITO: sample display of graphic agents used as stimuli for the cognitive learning process.

Figure 2: The straight-line model of ideal progressive cognitive learning (PCL) and a participant’s PCL (adapted for comparison and shown by the dashed line) produced from the graphic test results. The ideal PCL refers to the ideal case in Table 1 (ILearning = 5.4). The line formula is L = (m/n).

The tool COGNITO is used for cognitive learning purposes in the system. When the user receives the correct answer promptly after his/her reaction to the stimuli (in this case the answer is “yes”), he/she is forced to learn the cause-effect relationships in the computer environment. Further versions of the test may contain hundreds of computer graphics, each used only once per participant. In a more developed version, this convergence method of progressive learning would be particularly important


to minimize any learning disability progressively. Figure 1 exhibits two example graphic pages of the test, in which two different characters are used (the scenes are presented in colour) to capture the user’s characteristic options.

Cognitive Learning at Broad Range

The cognitive graphical test (COGNITO) is specifically designed and used in the experiments so that it is expected to stimulate cognitive learning at a broad range. The related arguments supporting this idea are as follows: 1) The test was completed by the participants without any pre-knowledge. This helped produce low-level knowledge during the test session. This assumption is based on previous studies [32] [33]. According to Sun et al., depending on the prior knowledge relevant to a task, learning proceeds differently: “It is likely that some skills develop prior to the cognitive learning by high level knowledge (bottom-up learning)”. 2) Each case in the test is represented by a one-page graphic, each unrelated to the others, to avoid the participants producing obvious cause-effect relationships; this discourages high-level knowledge and hence encourages low-level knowledge. According to several researchers [34] - [36], low-level knowledge is not always preceded by high-level knowledge in learning. They are not necessarily correlated and do not rely on each other. This indicates that low-level cognitive learning is possible without a high-level learning action. 3) The participants commented that they found the test very illogical and meaningless. This lack of information about the aim of the test should lead to the production of intrinsic knowledge via low-level cognitive learning, because the participants only learn to be self-centred without any other high-level knowledge. This matches the description of low-level learning by which knowledge is gained unconsciously, without awareness [18]. Hence at the end of the test the participants are possibly unaware that they have learned to think in a self-centred way. According to Sun et al. [33] low-level cognitive learning can be achieved by straightforward back-propagation (supervised learning) where the correct input/output is provided. This method is followed in our graphic test by


a prompt graphic answer page after each question page. Even though many similar graphical methods have been studied so far, the test introduced here differs from them in its specific design.

RESULTS AND DISCUSSION

Quantisation of Progressive Cognitive Learning Action

The proposed model introduces the idea of progressive cognitive learning quantisation, a comparative approach whose experimental results are presented within the range between the worst and ideal learning cases, distributed equally. The graphical test used for cognitive learning has been specifically designed to support the idea that the participants progressively learn to think in a self-centred way by reinforcement. The ideal progressive learning model (Figure 2) is shown as a straight line in the range between 0 and 1. The learning index (I), Correlation and Chi-square values calculated for each participant are listed in Table 1. This was done by matching each learning case (id) to the ideal PCL (in Figure 2). As is seen in Table 1, the measure I indicates forward and reverse learning separately, whereas the Correlation measure is suitable to see the direction of learning in the same row. The learning index (I) is the most suitable measure for a comparison between the participants. The Chi-square values lie in the range between 12.2 and 15 and are not very suitable to represent the worst case (12.7), which corresponds to repetitive patterned learning (0, 1, 0, ···) in our proposed model. The learning case of Participant 1 (id1) is presented in Figure 3.


Table 1: Index values (I) of progressive cognitive learning (PCL) and reverse PCL, together with the Correlation and Chi-square values, for each participant with respect to: i) the ideal case (Figure 2) and ii) the worst case (0, 1, 0, ···). In the correlation section minus values indicate reverse learning.

Figure 3: A sample case of the cognitive learning graph for Participant 1 (Ilearning = 2.5), which exhibits the graphical test output matched to the ideal PCL to quantise the degree of learning.

Data Analysis by Bayesian Networks

The Bayesian network approach to modelling cognitive processes was introduced in [37], in which four of the most distinguished potential hierarchical Bayesian contributions to cognitive modelling were comprehensively discussed. Some previous works exhibit the different application fields of the Bayesian inference method and of the classification process separately, which provides useful guidance for this work [38] [39]. In this work two different experiments are carried out using a Bayesian network tool (Power Predictor™) for the analysis of the data produced by the cognitive learning graphic tests. In general terms, Bayesian networks are called Causal Probabilistic Networks and are very useful instruments for knowledge representation and reasoning. They are also capable of generating very accurate classification results under uncertainty, where the data set includes many uncertain conditions [40]. Bayesian networks are the


probabilistic models which graphically encode and represent the conditional independence (CI) relationships among a set of data [38]. In the first experiment (Figure 7), a learning Bayesian network software tool (Power Predictor) is used for classification purposes. In this work another learning Bayesian network software tool (Power Constructor™), different from Power Predictor, is also used for the analysis of the cognitive data and for the inference to construct the network [33]. Both utilities use the Markov condition to obtain a collection of conditional independence statements from the networks [41]. The algorithm examines the variables from a data set and decides if these variables (e.g., participants’ responses, test graphics, etc.) are independent or linked, and it also investigates how close the relationship between those variables is (Figures 4-6).
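The dependence relations such tools probe can be illustrated with a plain chi-square independence statistic on a 2×2 contingency table of two Boolean variables (e.g. one test-page response vector against the class node). This is only a sketch of the kind of test the tools automate, not the actual algorithm of Power Predictor or Power Constructor; the function name is ours.

```python
from collections import Counter

def chi_square_2x2(xs, ys):
    """Chi-square independence statistic for two Boolean variables,
    e.g. one test-page response vector against the class node.
    0 means the observed counts match independence exactly; larger
    values suggest the variables are linked (an arc candidate).
    """
    n = len(xs)
    counts = Counter(zip(xs, ys))
    chi2 = 0.0
    for x in (0, 1):
        px = sum(v == x for v in xs) / n
        for y in (0, 1):
            py = sum(v == y for v in ys) / n
            expected = n * px * py
            if expected > 0:
                chi2 += (counts[(x, y)] - expected) ** 2 / expected
    return chi2
```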



Figure 4: The comparison between the index values of progressive cognitive learning (grey bars) and reverse progressive cognitive learning (black bars) is shown (W = worst case of PCL, and I = ideal case of PCL).


Figure 5: The comparison between the Correlation values of progressive cognitive learning (grey bars) and reverse progressive cognitive learning (black bars) for each participant is shown (W = worst case of PCL, and I = ideal case of PCL).

Figure 6: The comparison between the Chi-square test values of progressive cognitive learning (PCL) for each participant is shown (W = worst case of PCL and I = ideal case of PCL; the values are scaled as x-10 for a better display).

The aim of this experiment is to investigate what types of graphical attributes, with regard to the different levels of learning groups (high/low), may play a role in the cognitive learning process. This information will play an important role in the further analysis of different participant groups (e.g. regarding educational level, family background, IQ levels, etc.). With regard to the hypothesis used within this proposed system, a progressive cognitive learning is to be accomplished by each participant by following the graphical test from beginning to end. The ideal progressive learning action is represented by a response vector with “0s” in its first


half (representing low-level learning) and “1s” in its second half (representing

high-level learning). With this representation, the classification of the participants into the two levels of learning was carried out using the Bayesian network tool (Power Predictor™), and attributes 3, 6, 10, 12, 13 and 14 played a role in separating the two levels of learning. The data set shown in Table 2 was used for training/testing the Bayesian network, by which the classification of the levels (high/low) of ideal progress was carried out. In this experiment the data set contains 15 (graphics) × 14 (participants) attributes and is used with the Bayesian network tool (Power Predictor™) to classify the two categories of learning groups (high/low). Several parameter settings were chosen in the network tool: the threshold is selected (automatically) between 0.1 - 50, ROC (area under curve) and the equal discretization method are selected, etc. As is seen in Figure 7, the network combination of nodes 3, 6, 10, 12, 13 and 14 is used for separation between the two classes (high/low level learners). The other nodes, being disconnected, do not necessarily contribute to the classification accuracy. The class node (ID) includes the type of category. In the network the directions of the arcs are disregarded, because it is well known that in a Bayesian network the direction of the arcs is not uniquely determined; it can therefore not be expected that the arcs reflect the causal relations between the variables [42]. The aim of the second experiment is to find the graphical attributes which are more effective in separating different kinds of user groups (e.g. regarding their age, educational level, life style, etc.) and hence to classify users’ cognitive learning characteristics. The output may later be used to help take decisions about an effective education or treatment method for the learning disabilities of those different categories. For this experiment the 1st, 7th and 9th test pages (used as graphical attributes) were selected automatically by Bayesian inference methods (by the Power Constructor™ tool), being more effective in separating the two groups (adults/teenagers) than the other pages (graphical attributes), as is seen in Figure 8.
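Power Predictor itself is proprietary; as a rough stand-in, a Bernoulli naive-Bayes classifier over the Boolean page responses shows how such a high/low learner classification can be set up. The data, function names and smoothing choice below are ours for illustration only.

```python
import math

def train_naive_bayes(rows, labels):
    """Bernoulli naive-Bayes model over Boolean test-page attributes.
    rows: equal-length 0/1 response vectors; labels: 0/1 class per row.
    Laplace smoothing keeps unseen (attribute, class) pairs non-zero.
    """
    classes = sorted(set(labels))
    prior = {c: labels.count(c) / len(labels) for c in classes}
    p_one = {}
    for c in classes:
        members = [r for r, l in zip(rows, labels) if l == c]
        p_one[c] = [(sum(r[j] for r in members) + 1) / (len(members) + 2)
                    for j in range(len(rows[0]))]
    return prior, p_one

def classify(row, prior, p_one):
    """Return the most probable class for one response vector."""
    best, best_lp = None, float("-inf")
    for c in prior:
        lp = math.log(prior[c])
        for j, v in enumerate(row):
            pj = p_one[c][j]
            lp += math.log(pj if v else 1.0 - pj)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```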
In the experiment each participant interacts with all the graphical test pages, and his/her responses provide information about his/her cognitive characteristics. In the test a reverse form of data set (shown


in Table 2) is used, where the attributes are the graphical test pages, the cases are the participant individuals and the class node also refers to the type of individual (e.g. adult/teenager).

Table 2: Reduced form of the data set used for training/testing the Bayesian network, by which the classification of the levels of progressive learning (high = 1, low = 0) relative to the ideal was carried out.

Figure 7: The Bayesian network representation of the attributes used to classify the low and high learning categories. The ideal progressive learning action refers to the straight-line model of Figure 2. The classification (high/low) is done via attributes 3, 6, 10, 12, 13 and 14, which correspond to the participants’ responses to the test pages. The classification accuracy is 75% when the test/train data set ratio is chosen as 8/7.


Figure 8: The attribute selection result of the Bayesian inference method, where each attribute refers to a graphical test page. Test pages 1, 7 and 9 are selected as effective in separating the two groups of users.

These tasks demonstrate a basic level of quantisation by using Bayesian classification (e.g. low/high learning, low/high graphical impact on learning, etc.). It is also possible to increase the number of classes used by the Bayesian classifier to achieve a higher level of quantisation.

CONCLUSION AND FUTURE WORKS

The data collected from the test participants for the proposed cognitive learning model are used for a comparative study of a group of individuals, to quantify differences between the cases rather than to suggest absolute quantitative measures applicable to all other similar cases. Therefore the size of the data set used here is not very important; it is only used to demonstrate the functionality of the proposed quantitative model. The experiment (Figure 7) shows that the hypothesis of progressive cognitive learning of an individual (participant) by the graphical test is supported with 75% accuracy by the Bayesian network classifier. A further developed version of the suggested model could be used for modelling learning difficulties or decreasing their effects. On the basis of research results which suggest that around half of people with Autism (ASD) may also have a learning disability [43], in a further version of this work we would also pay more attention to Autism


as well as to learning disability, to diminish its adverse effects. One of the ASD characteristics is the lack of social communication with other individuals. In addition, the ASD individual also ignores what others think and feel about him/her. For the intervention or supportive skill development which helps avoid these abnormal behavioural effects, more specific graphic-based utilities and a rich interactive media GUI may be used. First of all, the differences in cognitive learning characteristics between the two groups of individuals (normal/abnormal) have to be detected. In the second stage these characteristics are embedded in the computer game or graphics agents, where each one plays a functional stimulant role in the game or immersive graphical environment by encouraging the individual to behave cognitively in the opposite direction. Briefly, at the intervention stage the behavioural data of the normal group, whose feedback is acquired via the game-graphical environment, might be used to adjust the system parameters, which are then used to influence the behaviours of the abnormal group. This could progressively lead to cognitive normalisation of those in the abnormal group by a convergence process. A more sophisticated design of the proposed system may also consist of a complete console/video game which includes game agents functioning in the same way as this graphical cognitive test (COGNITO), while the player interacts with a rich interactive game environment. Identifying more realistic and well-defined attributes directly driven from users’ interaction in the virtual environment will enhance the outcome of the cognitive analysis using the Bayesian classification process suggested in this paper’s framework.


REFERENCES

1. Huntinger, P.L. (1996) Computer Applications in Programs for Young Children with Disabilities: Recurring Themes. Focus on Autism and Other Developmental Disabilities, 11, 105-114. http://dx.doi.org/10.1177/108835769601100206
2. Wainer, A.L. and Ingersoll, B.R. (2011) The Use of Innovative Computer Technology for Teaching Social Communication to Individuals with Autism Spectrum Disorders. Research in Autism Spectrum Disorders, 5, 96-107. http://dx.doi.org/10.1016/j.rasd.2010.08.002
3. Mitchell, P., Parsons, S. and Leonard, A. (2007) Using Virtual Environments for Teaching Social Understanding to 6 Adolescents with Autistic Spectrum Disorders. Journal of Autism and Developmental Disorders, 37, 589-600. http://dx.doi.org/10.1007/s10803-006-0189-8
4. Heimann, M., Nelson, K.E., Tjus, T. and Gillberg, C. (1995) Increasing Reading and Communication Skills in Children with Autism through an Interactive Multimedia Computer Program. Journal of Autism and Developmental Disorders, 25, 459-479. http://dx.doi.org/10.1007/BF02178294
5. Golan, O. and Baron-Cohen, S. (2006) Systemizing Empathy: Teaching Adults with Asperger Syndrome or High-Functioning Autism to Recognize Complex Emotions Using Interactive Multimedia. Development and Psychopathology, 18, 591-617. http://dx.doi.org/10.1017/s0954579406060305
6. Charlop-Christy, M.H., Le, L. and Freeman, K.A. (2000) A Comparison of Video Modelling with in Vivo Modelling for Teaching Children with Autism. Journal of Autism and Developmental Disorders, 30, 537-552. http://dx.doi.org/10.1023/A:1005635326276
7. Sun, R., Merrill, E. and Peterson, T. (2001) From Implicit Skills to Explicit Knowledge: A Bottom-Up Model of Skill Learning. Cognitive Science, 25, 203-244. http://dx.doi.org/10.1207/s15516709cog2502_2
8. Sun, R. and Bookman, L. (1994) Computational Architectures Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Norwell. http://dx.doi.org/10.1007/b102608
9. Sun, R. (1995) Robust Reasoning: Integrating Rule-Based and Similarity-Based Reasoning. Artificial Intelligence, 75, 241-296. http://dx.doi.org/10.1016/0004-3702(94)00028-Y


10. Anderson, J. (1983) The Architecture of Cognition. Harvard University Press, Cambridge.
11. Sun, R., Peterson, T. and Merrill, E. (1996) Bottom-Up Skill Learning in Reactive Sequential Decision Tasks. Proceedings of the 18th Cognitive Science Society Conference, Hillsdale, 684-690.
12. Lindsey, M. (2002) Comprehensive Health Care Services for People with Learning Disabilities. Advances in Psychiatric Treatment, 8, 138-147. http://dx.doi.org/10.1192/apt.8.2.138
13. Goschke, T. (1996) Lernen und Gedächtnis: Mentale Prozesse und Gehirnstrukturen. In: Roth, G. and Prinz, W., Eds., Kopf-Arbeit: Gehirnfunktionen und kognitive Leistungen, Spektrum Akademischer Verlag, Heidelberg, 359-410.
14. Lamberts, K. and Shanks, D. (1997) Knowledge, Concepts and Categories. Psychology Press, Sussex.
15. Orun, A.B. and Seker, H. (2012) Development of a Computer Game-Based Framework for Cognitive Behaviour Identification by Using Bayesian Inference Methods. Computers in Human Behavior, 28, 1332-1341. http://dx.doi.org/10.1016/j.chb.2012.02.017
16. Hoeft, R.M., Jentsch, F.G., Harper, M.E., Evans III, A.W., Bowers, C.A. and Salas, E. (2003) TPL-KATS-Concept Map: A Computerized Knowledge Assessment Tool. Computers in Human Behavior, 19, 653-657. http://dx.doi.org/10.1016/S0747-5632(03)00043-8
17. Gordon, D., Schultz, A., Grefenstette, J., Ballas, J. and Perez, M. (1994) User’s Guide to the Navigation and Collision Avoidance Task. Naval Research Lab, Washington DC.
18. Durkin, K. and Barber, B. (2002) Not So Doomed: Computer Game Play and Positive Adolescent Development. Applied Developmental Psychology, 23, 373-392. http://dx.doi.org/10.1016/S0193-3973(02)00124-7
19. Fleming, M.J. and Rickwood, D.J. (2001) Effects of Violent versus Nonviolent Video Games on Children’s Arousal, Aggressive Mood, and Positive Mood. Journal of Applied Social Psychology, 31, 2047-2071. http://dx.doi.org/10.1111/j.1559-1816.2001.tb00163.x
20. Goh, D.H., Ang, R.P. and Tan, H.C. (2008) Strategies for Designing Effective Psychotherapeutic Gaming Interventions for Children and Adolescents. Computers in Human Behavior, 24, 2217-2235. http://dx.doi.org/10.1016/j.chb.2007.10.007


21. Griffiths, M.D., Davies, M.N.O. and Chappell, D. (2003) Breaking the Stereotype: The Case of Online Gaming. Cyberpsychology & Behavior, 6, 81-91. http://dx.doi.org/10.1089/109493103321167992
22. Wright, J.H., Wright, A.S., Albano, A.M., Basco, M.R., Goldsmith, L.J., Raffield, T. and Otto, M.W. (2005) Computer-Assisted Cognitive Therapy for Depression: Maintaining Efficacy While Reducing Therapist Time. American Journal of Psychiatry, 162, 1158-1164. http://dx.doi.org/10.1176/appi.ajp.162.6.1158
23. Prada, R. and Paiva, A. (2009) Teaming up Humans with Autonomous Synthetic Characters. Artificial Intelligence, 173, 80-103. http://dx.doi.org/10.1016/j.artint.2008.08.006
24. Bostan, B. (2010) A Motivational Framework for Analysing Player and Virtual Agent Behaviour. Entertainment Computing, 1, 139-146. http://dx.doi.org/10.1016/j.entcom.2010.09.002
25. Nagarajan, S.S., Wang, X., Merzenich, M.M., Schreiner, C.E., Johnston, P., Jenkins, W.M., Miller, S. and Tallal, P. (1998) Speech Modifications Algorithms Used for Training Language Learning-Impaired Children. IEEE Transactions on Rehabilitation Engineering, 6, 257-268. http://dx.doi.org/10.1109/86.712220
26. Parsons, S., Mitchell, P. and Leonard, A. (2004) The Use and Understanding of Virtual Environments by Adolescents with Autistic Spectrum Disorders. Journal of Autism and Development Disorders, 34, 449-466. http://dx.doi.org/10.1023/B:JADD.0000037421.98517.8d
27. Shane, H.C. and Albert, P.D. (2008) Electronic Screen Media for Persons with Autism Spectrum Disorders: Results of a Survey. Journal of Autism and Developmental Disorders, 38, 1499-1508. http://dx.doi.org/10.1007/s10803-007-0527-5
28. Silver, M. and Oakes, P. (2001) Evaluation of a New Computer Intervention to Teach People with Autism or Asperger Syndrome to Recognise and Predict Emotions in Others. Autism, 5, 299-316. http://dx.doi.org/10.1177/1362361301005003007
29. Hayes, N. (2000) Foundations of Psychology. Thomson Learning, London.
30. Honey, E., McConachie, H., Turner, M. and Rodgers, J. (2012) Validation of the Repetitive Behaviour Questionnaire for Use with Children with Autism Spectrum Disorder. Research in Autism Spectrum Disorders, 6, 355-364. http://dx.doi.org/10.1016/j.rasd.2011.06.009

318

Computer Games Technology

…implements further GMs to increase its playability. Thus, by providing further GMs in addition to the ones used in the knowledge encoding, a serious game's motivational aspects may be improved.

In contrast, a serious game designed according to the Gamified Knowledge Encoding utilizes GMs as an educational tool by mapping knowledge rules to them, thus directly encoding the learning content in the gameplay (see Figure 1). The Gamified Knowledge Encoding utilizes the interaction between at least one game-bound GM and one player-bound GM to require the application of the learning content on a rule-based or skill-based level of human performance. Subsequently, learners are provided with immediate feedback about their learning progress. This learning process results in the compilation of a mental model for the knowledge. This mental model ultimately is utilized to apply the knowledge on a knowledge level, i.e., transferring it from the serious game to a real world context. The GMs that encode the knowledge's rules and that interact with each other are metaphors for the learning content. They are responsible for requiring and validating the knowledge's application during the gameplay. The GMs hence act as the knowledge's representations which can be fully internalized in the form of mental models.


Figure 1: The Gamified Knowledge Encoding describes the process of knowledge encoding and learning using GMs. The knowledge gets segmented into coherent sets of rules which are mapped as game rules to interacting GMs. The interaction between these GMs creates a learning affordance for the encoded learning content. This initiates the theoretically grounded learning process.

GAMIFIED TRAINING ENVIRONMENT FOR AFFINE TRANSFORMATIONS

GEtiT's development followed the guidelines of the Gamified Knowledge Encoding. The main goals of this development process were (1) to transform
the AT knowledge into game rules and (2) to realize GMs that mediate them. Subsequently, after demonstrating its effectiveness in its prototype version [4], GEtiT's visual style was changed to a state-of-the-art style of modern computer games (see Figure 2). This major overhaul included the implementation of background music and sound effects to provide learners with additional acoustic feedback. Also, GEtiT received a more advanced point system, an achievement system, a debriefing system, and a small built-in wiki. This section presents GEtiT's design, describes the realization of the new features as well as the specific VR version, and demonstrates the Gamified Knowledge Encoding.

Figure 2: GEtiT used a very rudimentary visual presentation in its prototype version. This version also lacked color-coding for the different AT operation types and acoustic feedback when playing an AT card.

Design

Core Gameplay

Working with the Gamified Knowledge Encoding, the AT knowledge first was separated into the individual theoretically grounded mathematical operations and the resulting transformation effects. The mathematical operations were mapped as knowledge rules to a player-bound GM mediating each individual operation as a playable AT card. Activating
a card displays a direct value configuration screen resembling the structure of a 4×4 matrix that allows for the operation’s configuration (see Figure 3).

Figure 3: GEtiT's direct value configuration screen. The screen resembles the structure of a 4×4 matrix and requires players to apply their AT knowledge to correctly determine a desired transformation's values.

The AT cards, of which each can only be played once during a particular level, moderate the level of abstraction of the learning content. The degree of the moderation is controlled by providing different difficulty levels: easy, medium, hard, and expert. Depending on the difficulty, an AT card displays the operation's complete transformation matrix (easy), its transformation vector (medium), or an empty transformation matrix (hard, expert). Clicking on an AT card opens the direct value configuration screen that further moderates the level of abstraction by either resembling the structure of a vector or a 4×4 matrix. The 4×4 matrix only provides access to the matrix fields that are relevant for the particular AT operation. The AT cards, which are activated by clicking on them, are shown at the bottom of the user interface. Each card's transformation type is indicated with a symbol and a distinct color allowing for a fast and easy recognition (see Figure 4).
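The moderation of abstraction by difficulty could be sketched as follows; the function name, the template shapes, and the exact difficulty labels are illustrative assumptions, not GEtiT's actual code:

```python
# Hypothetical sketch: a difficulty setting moderates the level of
# abstraction by pre-filling the configuration screen differently.

def config_template(operation_matrix, difficulty):
    """Return what the player sees before entering values."""
    if difficulty == "easy":
        return operation_matrix                          # complete matrix shown
    if difficulty == "medium":
        return [row[3] for row in operation_matrix[:3]]  # translation vector only
    return [[None] * 4 for _ in range(4)]                # empty matrix (hard, expert)

t = [[1, 0, 0, 2], [0, 1, 0, 5], [0, 0, 1, 0], [0, 0, 0, 1]]
print(config_template(t, "medium"))  # [2, 5, 0]
```

Harder settings reveal less structure, so players must recall the full 4×4 layout themselves.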

Knowledge Encoding in Game Mechanics: Transfer-Oriented Knowledge...


Figure 4: GEtiT displays the playable AT cards at the bottom of the user interface. Each card's AT operation type is indicated with a symbol and a distinct color.

The transformation effects were mapped as knowledge rules to a manipulable game object, i.e., a game-bound GM, presented in the form of a 3D object. Executing an AT card applies the entered values to the object GM that immediately changes its status. The object additionally casts an orange trail indicating the path on which it has translated. Thus, the object mediates the effects of an AT operation by providing an immediate feedback and visually demonstrating the underlying principles. The object's position is displayed in GEtiT's user interface to provide learners with concrete values they need to use to correctly compute further AT operations. In this way, GEtiT directly encodes the mathematical rules of matrix algebra and their utilization to express and to perform ATs (see Figure 5).
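The matrix algebra the object GM encodes can be illustrated with a short sketch: applying 4×4 homogeneous AT matrices to a 3D position. This is a hypothetical Python illustration of the underlying mathematics, not GEtiT's implementation (GEtiT itself is a Unity application):

```python
# Illustrative sketch of homogeneous 4x4 AT matrices and their effect on a point.

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def scaling(sx, sy, sz):
    """4x4 homogeneous scaling matrix."""
    return [[sx, 0, 0, 0],
            [0, sy, 0, 0],
            [0, 0, sz, 0],
            [0, 0, 0, 1]]

def apply(matrix, point):
    """Transform a 3D point by treating it as the homogeneous vector (x, y, z, 1)."""
    x, y, z = point
    p = (x, y, z, 1)
    return tuple(sum(matrix[r][c] * p[c] for c in range(4)) for r in range(3))

# Moving an object from the origin two units along x:
print(apply(translation(2, 0, 0), (0, 0, 0)))  # (2, 0, 0)
```

Entering these values on the configuration screen corresponds to filling in exactly the non-identity entries of such a matrix.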

Figure 5: GEtiT challenges learners with spatial AT puzzles. The goal is to match a level's victory conditions symbolized by a half-transparent object (upper center right) with the object (upper center left) by transforming it using the AT cards (bottom). After confirming the inputs, the object gets immediately transformed and casts an orange trail.

The application of ATs is required by GEtiT's level design following the concept of an escape scenario [64]. Each individual level challenges a player to activate an exit portal by solving a spatial AT puzzle. The spatial puzzle is solved by transforming the object in such a way that it matches a level's victory conditions. The victory conditions are presented in the form of a semi-transparent copy of the transformable object, i.e., a game-bound switch GM, that indicates the required position, rotation, and overall status of the object. GEtiT additionally displays the coordinates of the switch to allow learners to mainly focus on determining the correct mathematical solution instead of being challenged to locate the target position manually. As soon as the victory conditions are met, the exit portal is opened and the player can proceed to the next spatial puzzle (see Figure 6). The interaction between these GMs creates the required learning affordance for ATs.
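A minimal sketch of the victory-condition check described above, assuming a simple position/rotation state representation; all names and the tolerance are illustrative assumptions, not GEtiT's actual code:

```python
# Hypothetical sketch: comparing the object's state against the switch GM
# within a small tolerance to decide whether the exit portal opens.

def victory_met(obj, target, eps=1e-6):
    """True when the object's position and rotation match the target state."""
    pos_ok = all(abs(a - b) <= eps for a, b in zip(obj["pos"], target["pos"]))
    rot_ok = all(abs(a - b) <= eps for a, b in zip(obj["rot"], target["rot"]))
    return pos_ok and rot_ok

obj = {"pos": (2.0, 0.0, 0.0), "rot": (0.0, 90.0, 0.0)}
target = {"pos": (2.0, 0.0, 0.0), "rot": (0.0, 90.0, 0.0)}
print(victory_met(obj, target))  # True
```

Displaying the target's coordinates, as GEtiT does, lets learners compute the required matrices instead of searching for the target by trial and error.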

Figure 6: Players transform the object by playing AT cards. The system then checks if the victory conditions are met.

The interaction between the AT cards GM and the object GM creates a learning affordance for the AT learning content. Users are required to execute the AT cards GM during the gameplay, thus repetitively applying their AT knowledge on a rule-based level of human performance. Subsequently, they get visually informed about the underlying principles as the object immediately changes its state. This repetitive practice leads to a compilation of mental models for ATs. These mental models ultimately achieve a training transfer from the serious game to a real world application like utilizing ATs to create VR systems or simply solving the assignments of an exam.

Gameplay Enhancements

Aside from the three core GMs, GEtiT includes additional GMs to enhance the usability as well as the playability and to increase the learners' motivation. For enhancing the usability, GEtiT displays the position of a level's origin
and the direction of a level's axes. The former information is mostly needed when a rotation or reflection operation is desired. The latter information is relevant for every transformation operation. Also, GEtiT provides an undo function to allow learners to revert their last action in case of a wrong input. The serious game provides a small built-in AT wiki that informs about the underlying theoretically grounded mathematical aspects. The AT wiki keeps learners immersed when they need to look up further information to determine a spatial puzzle's correct solution. For the purpose of enhancing GEtiT's motivational aspects and playability, an achievement system and a point system were implemented. The point system is based on a performance rating system that challenges players to solve a level with a minimum number of cards. Using the undo button keeps the draw counter unchanged to keep players from exploiting it. Beating a level with the minimum or a small deviation from it rewards players with a performance-dependent amount of points symbolized by stars. The points simultaneously provide users with feedback about their progress towards the completion of the game, i.e., stars earned for a particular level are displayed in the level selection menu. Also, the point system is used to create a ranking among all players when GEtiT is played in classroom mode. Here, GEtiT communicates with a database server to synchronize the points of all registered players. Achievements are unlocked by solving levels in a perfect way, by reaching certain gameplay milestones, or by finding a hidden Easter-egg.
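The performance rating idea can be sketched as follows; the concrete star thresholds are assumptions for illustration, as the text does not state GEtiT's exact values:

```python
# Hypothetical sketch of the star-based performance rating: the closer the
# number of played cards is to a level's minimum, the more stars are awarded.

def stars(cards_used, level_minimum):
    """Map the deviation from the optimal card count to 0-3 stars."""
    deviation = cards_used - level_minimum
    if deviation <= 0:
        return 3  # solved with the minimum number of cards
    if deviation == 1:
        return 2
    if deviation == 2:
        return 1
    return 0

print(stars(4, 4))  # 3
print(stars(6, 4))  # 1
```

Because undoing a move leaves the draw counter unchanged, the rating cannot be gamed by reverting wrong inputs.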

"  GEtiT displays a debriefing screen after a level was solved (see Figure 7). The debriefing system provides additional immediate feedback that allows learners to reflect on their computational results [65, 66]. The debriefing screen informs about the number of cards used, the level’s minimum, the stars achieved, the time needed, and a composite mathematical equation of the used ATs. The composite mathematical equation aims at the development of an understanding of different forms of expressing AT operations. This is critical as it directly integrates the theoretically grounded mathematical aspects into the gameplay. By displaying concrete matrixmatrix multiplications, learners can integrate this knowledge in their mental models. The debriefing screen also provides options to continue to the next puzzle, to retry the current puzzle, or to return to the level selection menu.


Figure 7: The debriefing screen provides information about a player's gameplay performance and displays the mathematical equation of the used ATs.

Audiovisual Encoding

Various sound effects were implemented in GEtiT to provide learners with acoustic feedback [64]. Each AT type received an individual sound effect that is played when an AT card is activated. This provides players with acoustic feedback when a specific AT operation type was successfully applied. Furthermore, GEtiT provides sound effects for walking (footsteps), jumping, touching a card, using the undo button, and a general event indication. The game includes a dubstep-like background music to support its futuristic visual style.

GEtiT VR

GEtiT VR utilizes the same GMs as GEtiT but realizes them in a diegetic way [67] to increase the system's naturalness, presence, and usability [68, 69]. Naturalness refers to the degree with which actions and effects in a VE correspond to the actions and effects in the real world [70]. The naturalness of an interaction depends on the degree with which it matches the task context [70]. Thus, naturalness is affected by the intuitiveness of the interaction [71]. This main design decision was made to allow for a comparison of the learning outcomes between the two different visualization
technologies without confounding the results by implementing different GMs. GEtiT VR presents the AT cards as physical objects inside of the VE (see Figure 8). A moveable card holder gives players access to the cards. Selecting and configuring an AT card is realized with a selection and manipulation interaction technique. Selection and manipulation techniques are one of the three fundamental 3D interaction tasks [72]. Their realization is defined in terms of a user's distance to the target element. The distance can either be remote, requiring an artificial pointing metaphor, e.g., a virtual ray, or within arm's reach, allowing for a direct interaction [73]. The latter approach is a very natural interaction technique and can be realized with grasping metaphors simulating a user's hand or controller inside of a VE [72].

Figure 8: GEtiT VR utilizes the same GMs as the desktop version but realizes the interface elements in a diegetic way.

Implementing a within arm's reach grasping metaphor, players select a card by merely touching it with one of the game controllers (see Figure 9). A controller's position is indicated with its 3D asset inside of the VE. Pulling the controller's trigger button activates the selected AT card. Touching the fields of the card's configuration screen with a controller allows for entering the desired values. After confirming the inputs, the object immediately performs the configured transformation.
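The touch-and-trigger selection described above can be sketched with a simple per-frame state function; the proximity radius and all names are illustrative assumptions standing in for a proper collider test:

```python
# Hypothetical sketch of the within-arm's-reach grasping metaphor: a card is
# selected when a controller is close enough to touch it, and activated when
# the trigger is pulled while it is selected.

import math

def touching(controller_pos, card_pos, radius=0.05):
    """Crude proximity test standing in for a collider overlap (5 cm assumed)."""
    return math.dist(controller_pos, card_pos) <= radius

def update(controller_pos, trigger_pulled, card_pos):
    """Return the card's interaction state for this frame."""
    if not touching(controller_pos, card_pos):
        return "idle"
    return "activated" if trigger_pulled else "selected"

print(update((0.0, 1.0, 0.5), False, (0.02, 1.0, 0.5)))  # selected
print(update((0.0, 1.0, 0.5), True, (0.02, 1.0, 0.5)))   # activated
```

The same touch test, applied to individual matrix fields instead of whole cards, covers the value entry step as well.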


Figure 9: Players select a card by touching it with one of the game controllers and activate it by pulling the controller's trigger button in GEtiT VR.

The positions of the object and of the target are communicated via diegetic labels being directly attached to the objects inside of the VE. Other pieces of information, such as the level selection screen, the main menu, and the AT wiki, are presented in a diegetic way by providing a playing room (see Figure 10). Players can transition between the playing room and the spatial puzzle levels using a Virtual HMD metaphor [74]. This diegetic transition technique is very natural and provides a high degree of self-control. By slowly putting on or taking off the Virtual HMD, users are in full control over the actual transition. As GEtiT's levels are normally larger than the tracking area, GEtiT VR implements the intuitive and easy Point & Teleport technique [75] to perform a locomotion inside of the VE aside from real walking [76, 77].
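The core of a Point & Teleport technique is finding the floor point the controller's pointing ray hits. A minimal sketch under the assumption of a flat floor at y = 0 (an illustration, not the actual implementation from [75]):

```python
# Hypothetical sketch: intersect the pointing ray with the floor plane (y = 0)
# and teleport the player there if the ray actually points downwards.

def teleport_target(origin, direction):
    """Return the floor point the ray hits, or None when pointing upwards."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:          # ray parallel to or away from the floor: no target
        return None
    t = -oy / dy         # solve oy + t * dy = 0 for the ray parameter t
    return (ox + t * dx, 0.0, oz + t * dz)

# Pointing down and forward from a controller held at 1.2 m height:
print(teleport_target((0.0, 1.2, 0.0), (0.0, -1.0, 2.0)))  # (0.0, 0.0, 2.4)
```

Combined with real walking inside the tracked area, this lets players traverse levels larger than the physical tracking space.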

Figure 10: GEtiT VR realizes the game’s menu as a playing room. The Virtual HMD allows for a transition between the menu and a level.


The decision to directly port GEtiT to VR was mainly motivated by the research goal to analyze if providing a full visual immersion while keeping the knowledge encoding the same leads to an increased learning outcome. GEtiT's GMs were directly ported to VR and realized as diegetic and natural interfaces. This approach, however, neglected further adaptions to ensure a similar usability to GEtiT. Both GEtiT versions were compared in respect to their usability in a user study [78]. In particular, the study assessed the effectiveness, the efficiency, and the satisfaction of both versions, e.g., by analyzing the time and the inputs needed to solve a particular level. The satisfaction was determined by assessing the games' intuitive use and by analyzing the users' preference. The results revealed a higher efficiency of the desktop version in contrast to GEtiT VR. The satisfaction did not differ between both versions and the majority of the participants favored GEtiT VR. Taken together, the results validated the overall design and the overall playability but indicated a lower efficiency of the knowledge application in VR. As a result, GEtiT VR is a mere prototype and potentially not directly comparable to GEtiT in respect to its learning effectiveness. A comparison of both systems still is very critical to gain insights into the overall feasibility of this approach and to draw technical design guidelines from the results.

Learning Approach

GEtiT fulfills some aspects of situated learning [79–81]. The serious game guides the learning process with a complex problem and embeds it in an authentic context. GEtiT provokes an intrinsic motivation in the learner to solve the learning assignments, i.e., to find a solution to the spatial puzzles, by providing an escape scenario. Targeting a training transfer to a computer graphics context [2], GEtiT creates an authentic context by requiring the application of ATs to transform a virtual game object inside the VE. This is achieved by providing the direct value configuration screen requiring the completion of 4×4 matrices. However, the serious game lacks the aspects of collaborative construction and reflection of the learning content which is typically associated with the situated learning theory [82]. Also, GEtiT is designed to achieve a transfer-oriented learning of ATs instead of mainly linking the learning content's application to the situations created during the gameplay.


GEtiT's learning approach is also closely related to problem-based learning [83, 84]. Problem-based learning is self-directed learning being motivated with a complex problem [85] and being assisted with scaffolding that guides the learning process [86]. Solving the presented task provides learners with the opportunity to develop an understanding of the underlying principles and to acquire new knowledge. GEtiT acts as a tutorial system, provides learners with complex tasks, and scaffolds them. In this way, GEtiT provides opportunities for a transfer-oriented learning.

Technology

GEtiT and GEtiT VR are developed with Unity in version 5.5.2p1 [87] for PC and Mac. The gameplay is rendered to the connected main monitor and, in case of GEtiT VR, to the HTC Vive HMD. The VR implementation of GEtiT VR is achieved using the SteamVR Plugin [88] in version 1.2.0 which already provided functions for the Point & Teleport locomotion, controller-based system interaction, controller tooltips, and an overall player controller. The playing room's furniture was freely available on the Unity Asset Store [89] or part of the Unity Standard Assets.

EXPERIMENTAL DESIGN

Due to the overall indications discussed in Section 2, the underlying design principles derived from Section 3, and the concrete implementation described in Section 4, we assume the following hypotheses:

H1. The learning outcome is improved when the mediation of the knowledge is audiovisually enhanced.
H2. The learning outcome is improved when a debriefing system is provided.
H3. The learning outcome is improved when the learning process takes place in immersive VR.

For testing the hypotheses, validating GEtiT's overall design, and further validating the Gamified Knowledge Encoding, the user study consisted of two phases. The first phase was designed to analyze the effects of an audiovisual enrichment by comparing two different GEtiT versions. GEtiT in the enriched version utilized the aforementioned audiovisual encoding of the AT cards by providing a distinct symbol color and sound effect for each individual transformation type. The reduced version utilized the same color and provided the same sound effect for every transformation type. The first phase additionally included a group solving traditional paper-based assignments as a third condition. The second phase was designed to compare GEtiT and GEtiT VR in regard to their learning outcome. Both GEtiT versions in phase 2 additionally implemented the debriefing system
and the achievement system which were not implemented in phase 1. Internally, both phases implemented the same experimental design to achieve comparability. The overall procedure was designed to follow the structure of a traditional class-based learning. The GEtiT-based learning began after the learning content was presented in an interactive computer graphics lecture and before it was fully discussed in a subsequent session. In this way, the experiment simulated the implementation of GEtiT in the context of a regular curriculum at a university. The experiment consisted of four 90-minute learning sessions taking place on a weekly basis. In the week preceding the last learning session, an AT knowledge assessment test was written. The participants who were assigned to one of the desktop-3D GEtiT groups or the paper-Group completed the sessions in the form of a traditional class. The vr-Group was split into smaller two-participant teams due to the number of available HTC Vive systems in the lab. The vr-Group was required to take a break in the middle of their sessions to reduce the chances for an effect of cybersickness [90] and to avoid a strong effect of exhaustion.

MEASURES

All questionnaires were translated to the common language at the study's location. For ensuring that all questions were understood properly, the participants' language proficiency was assessed.

Simulator Sickness

During phase 2, the simulator sickness was measured for all participants assigned to GEtiT VR before, during the mandatory break, and after a playing session using the simulator sickness questionnaire (SSQ) [91]. The results were used to measure the overall quality of the VR simulation and to identify potential negative effects that could have affected the study's results.

Learning Effectiveness

The learning outcome was measured using a 16-assignment pen-and-paper exam assessing the participants' overall AT knowledge. The assignments were designed to be of similar difficulty to the assignments given in a
regular final exam of the interactive computer graphics lecture. Also, GEtiT recorded a participant’s solved levels to analyze the efficiency.

Learning Quality

The learning quality of the tested learning methods was measured using a self-designed questionnaire (1 = disagree; 5 = agree) following the idea of the assessment method used for the prototype version [4]. The questionnaire consists of two subcategories and specific questions relevant for each of the two phases. The Learning Quality subcategory consists of nine questions (Q1-Q9) and the system-specific Motivational Aspects subcategory consists of six questions (Q10-Q16). Q17 and Q18 were added to analyze the audiovisual encoding in phase 1. Q19 and Q20 were designed to assess the achievement system and the debriefing system added to the system in phase 2. For evaluating the results, the overall mean for the sum of a subcategory's questions is computed.

Learning Quality
Q1: Did you enjoy playing GEtiT / solving the paper-based assignments?
Q2: Did GEtiT's puzzles / the assignments help you to develop a better understanding of ATs?
Q3: Did you notice a knowledge gain while you were solving the GEtiT puzzles / the assignments?
Q4: […]
Q5: […]
Q6: Was [the difficulty of] the GEtiT puzzles / the assignments well adjusted?
Q7: Were you motivated […]?
Q8: Would you […] that was based on GEtiT / the paper-based assignments?
Q9: Was it interesting to solve the GEtiT puzzles / the assignments by using AT operations?

Motivational Aspects
Q10: Was the serious game-based learning method more enjoyable than traditional learning methods, e.g., paper-based assignments?
Q11: Would you prefer to utilize a serious game instead of visiting a regular class?
Q12: Did you notice a higher motivation to play GEtiT to practice your knowledge in contrast to other learning methods?
Q13: Were you motivated by the additional feedback mechanisms, such as highscores and the number of used operations?
Q14: Did the feedback mechanisms motivate you to try a particular level again to improve your performance?
Q15: Were you motivated by the indication of the needed time?
Q16: Were you motivated by the ranking system?
Phase 1
Q17: Did the color(s) of the AT cards help you to internalize the different AT operation types?
Q18: Did the sound effects of the AT cards help you to internalize the different AT operation types?


Phase 2
Q19: Did you perceive the achievement system as motivating?
Q20: Did the mathematical representation of your solution at the end of each level help you to develop a better understanding of ATs?

Participants

The participants were recruited from the students participating in the lecture on interactive computer graphics. They were offered credits mandatory for obtaining their Bachelor's degree and bonus points for the lecture's final exam. After being introduced to the experiment, the participants signed an informed consent form.

Phase 1. In total, 34 students volunteered to take part in the study. Unfortunately, 13 of them missed at least one session and had to be excluded from the sample. The remaining 21 participants (8 females; 13 males) had a mean age of 23.52 years (SD = 3.30). Based on self-report, 13 participants were frequent computer game players. They were randomly assigned to the enriched-Group (n = 8), the reduced-Group (n = 5), and the paper-Group (n = 8).

Phase 2. In total, 27 students volunteered to take part in the study. Unfortunately, 6 of them who were assigned to the GEtiT group missed at least one session and had to be excluded from the sample. The remaining 21 participants (6 females, 15 males) had a mean age of 21.90 years (SD = 1.89). Based on self-report, 13 participants were frequent computer game players. They were randomly assigned to the vr-Group (n = 13) and the GEtiT phase 2-Group (n = 8).

RESULTS

In this section, the results of the user study are presented and evaluated according to the given hypotheses and the additional goals of this experiment. The results were compared by calculating either a one-way ANOVA or a two-sample t-test [92]. The effect size was determined using Cohen's d. For determining a correlation, the Pearson's product-moment correlation was computed.
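For illustration, the reported analyses can be sketched with the standard library alone; the helper functions follow the textbook definitions of Cohen's d (pooled standard deviation) and Pearson's r, and the data below is made up, not the study's data:

```python
# Illustrative sketch of the reported statistics (not the study's data).

from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    """Cohen's d for two independent samples using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

# Made-up example: solved levels vs. test scores of five fictional players.
levels_solved = [10, 12, 15, 18, 20]
test_scores = [55, 60, 70, 78, 85]
print(round(pearson_r(levels_solved, test_scores), 3))
```

A positive r here would mirror the correlation between gameplay progress and test results discussed below.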

Simulator Sickness

The participants of the vr-Group were asked to complete the SSQ before the start of the learning session (pre), right after they started their break
(mid), and at the end of the session (post). As Table 1 displays, no significant change in the SSQ ratings was found for each of the practice sessions.

Table 1: SSQ total scores

Learning Effectiveness

Initially, the three different GEtiT conditions were compared in regard to the yielded test result (F(19) = 0.22, p = 0.65; see Table 2) and the number of successfully solved levels (F(19) = 0.75, p = 0.40; see Table 3) but no significant difference was found. Thus, to increase the accuracy of further analyses, the GEtiT groups were combined and called desktop-Group (n = 21) in the remainder of this paper. The test results of the remaining three different conditions did not differ significantly (F(40) = 0.56, p = 0.46; see Figure 11). Further analyses revealed a significant correlation between the number of solved levels and the test result.

Table 2: Test results in the AT knowledge assessment test

Table 3: Gameplay progress at the end of the experiment


Figure 11: Graphical comparison of the test results between the desktop-Group, the vr-Group, and the paper-Group. Error bars indicate standard deviations.

Figure 12: Graphical comparison of the mean gameplay progress between the desktop-Group and the vr-Group. Error bars indicate standard deviations.

Learning Quality

At the end of the experiment, the participants were asked to rate the learning quality. In phase 1, 18 of the 21 participants filled in the questionnaire. In phase 2, all participants completed the learning quality questionnaire. A one-way ANOVA revealed no significant difference between the mean ratings of the learning quality subcategory (F(37) = 3.88, p = 0.06; see Table 4). Also, no significant difference was found between the individual four tested versions in regard to the motivational aspects subcategory (F(30) = 0.80, p = 0.38).


Table 4: Mean learning quality ratings (Reduced: n=5, Enriched: n=6, Phase 2: n=8, VR: n = 13, and Paper: n=7)

No difference was found between the reduced and the enriched version for Q17 (t(9) = 0.14, p = 0.89) and Q18 (t(9) = 0.23, p = 0.82) measuring the perceived educational effect of the audiovisual encoding in phase 1. Both visual approaches received a mean rating at the scale's neutral midpoint. The mean rating for the acoustic encoding was below the scale's neutral midpoint. The achievement system added in phase 2 received a mean motivational rating above the scale's neutral midpoint for GEtiT VR and a mean motivational rating slightly below the scale's neutral midpoint for GEtiT. This difference was not significant (t(19) = 2.02, p = 0.06). The perceived learning effect of the debriefing system had a mean rating above the scale's neutral midpoint for both GEtiT versions. The ratings were comparable for both versions.

DISCUSSION

Although a lack of statistical significance does not imply an equivalence, the results indicate that GEtiT achieves a similar AT knowledge learning outcome to traditional learning methods, i.e., by using paper-based assignments. Thus, the effectiveness measurements validate the findings of the initial prototype evaluation by confirming GEtiT's transfer-oriented learning effects [4]. Also, the significant correlation between the number of solved levels and the test result contributes to the ongoing validation of the Gamified Knowledge Encoding. By encoding the AT knowledge as game rules in GMs, a repetitive application of the learning content is achieved during the gameplay. This repetitive practice leads to an internalization of the AT knowledge in the form of mental models. It also achieves a shift to a more pattern-driven application. The compiled mental models allow for a training transfer from GEtiT to a real world context. This was tested by implementing a pen-and-paper exam that only uses 2D pre- and post-images to visualize a desired AT operation. The participants of the GEtiT groups were not only required to solve the assignments, but also to transfer their knowledge from the 3D serious games to a 2D paper-based exam. As a
result, the learning outcome of playing GEtiT could be even higher than using traditional learning methods.

Learning Effectiveness

Interestingly, the learning outcome was not affected by the difference in the audiovisual encoding tested in phase 1 and the debriefing system provided in phase 2. The lack of an effect due to the audiovisual encoding is explainable by the fact that the two tested versions were only different in respect to the used AT card colors and sound effects. The overall gameplay and application of the AT knowledge remained the same. Participants potentially were only focused on finding the correct solution to the spatial puzzles without paying attention to the audiovisual realization of the knowledge application. Hence, the learning effect is mainly caused by the frequent application of the AT knowledge independent of the application's enhanced audiovisual mediation. However, the lack of an increased learning outcome caused by the implementation of the debriefing system is surprising. The reason for this could be an issue with the realization of the debriefing system. Instead of only focusing on the mathematical equation, the screen also provides information about the overall gameplay-related performance. This additional information might have distracted learners from the actual learning content. The participants could also have been in a strong state of flow and hence immediately continued to the next spatial puzzle without analyzing the debriefing screen. A solution would be to directly display and to update the composite mathematical equation during the gameplay. As a result, learners would then be able to directly connect their gameplay actions with the changes in the mathematical equation. Also, separating the mathematical equations from the gameplay information in the debriefing screen could improve its effectiveness. Therefore, H1 and H2 are rejected as no significant effect on the learning outcome was found.

In respect to H3, the results indicate that GEtiT VR leads to a lower learning outcome in contrast to GEtiT's desktop version. This is mainly indicated by the lower number of solved levels in the vr-Group. Despite having invested the same amount of time, the vr-Group was not able to complete as many spatial puzzles as the desktop-Group. A reason for this could be the complex interaction technique required in VR. Instead of simply using mouse and keyboard, GEtiT VR requires the usage of both HTC Vive controllers to select, configure, and execute the AT cards. Also, GEtiT VR's diegetic interfaces require additional interaction steps for each knowledge application. Thus, applying the AT knowledge takes more time in GEtiT VR than in the desktop version. This is a critical insight for developers and educators. It demonstrates that even when the GMs, and hence the encoded knowledge, remain the same, the realization of the interface can strongly influence a serious game's efficiency. It is hence of high importance to check for all usability factors during the development of a serious game. Overall, this leads to the outcome that both GEtiT versions cannot directly be compared in respect to their learning effectiveness. Also, it is not possible to draw generalizable insights about the effectiveness of VR technology for an AT knowledge learning based on this study's results. Despite these limitations, the study indicated that using GEtiT VR leads to a successful training transfer and successfully demonstrated that the Gamified Knowledge Encoding is also valid for VR serious games. This is a valuable insight for scientists, game designers, and educators aiming at the development of serious games targeting HMD-VR. Thus, H3 is rejected as the two GEtiT versions ultimately were too different to be directly compared in regard to their learning outcome.

Learning Quality

The learning quality analysis validates the concept of developing GEtiT to achieve a higher learning quality when practicing the complex and abstract ATs. In this way, the present study also validates GEtiT’s design as well as playability. Although no significant difference was found in the learning quality subcategory between the tested learning methods, the results indicate a clear trend that GEtiT and GEtiT VR achieve a higher learning quality. This outcome is critical as all participants had to invest the same amount of time but felt more engaged when using the serious game. As a result, GEtiT not only yields effective knowledge learning, but also achieves a higher learning quality, thus indicating its overall effectiveness. The results also align with previous research [45, 46] by showing the highest learning quality rating in the vr-Group. In this way, the user study confirms that using VR technology can be beneficial for the overall learning quality of a serious game. This result is supported by the behavior of the participants. Except for the vr-Group, all other conditions showed some dropouts. The vr-Group even reported a strong intrinsic motivation to attend every session, thus confirming the measured high learning quality.

Knowledge Encoding in Game Mechanics: Transfer-Oriented Knowledge...


The user experience evaluation further revealed that all tested GEtiT versions were perceived as an engaging and motivating learning method. Interestingly, GEtiT VR showed no trend to yield a higher motivation than the desktop version. This outcome is explainable by a habituation effect. Instead of playing GEtiT VR for a single learning session only, the participants used the system over the course of 4 weeks. As a result, the initial motivating novelty of the VR technology might have ceased over time. Interestingly, the implementation of an achievement system had no impact on the motivational aspects subcategory despite being rated as somewhat motivating. This could be a result of the general functionality of an achievement system: it rewards progress milestones but provides no constant feedback like the point system. The comparison of the questionnaire results obtained in phase 1 revealed that the visual presentation of the core player-bound GMs requiring the knowledge application has only a limited effect on the perceived learning effect. This aligns with the assumptions drawn from the effectiveness measurement results. The results also show that acoustic effects are of lower priority when designing a serious game. This insight is important for designers who need to prioritize their development goals. Finally, the debriefing system was seen as helpful but not as a critical element relevant for knowledge learning. This aligns with the finding that this GM had no measurable effect on the learning outcome.

CONCLUSION AND FUTURE WORK

This paper presents two versions of GEtiT targeting a transfer-oriented learning of ATs. Both versions of the game implement the same core GMs to encode the learning content but use either desktop-3D or immersive VR to visualize the gameplay. In addition, a comprehensive presentation of the Gamified Knowledge Encoding is given for the first time. The two GEtiT versions were compared to a traditional paper-based learning method in regard to the learning outcome and learning quality. Also, the two versions were compared in respect to their efficiency. Lastly, this paper evaluates the effects of a debriefing system and of two different audiovisual encodings, i.e., reduced and enriched, of the learning content on the overall learning outcome and the perceived learning effects. The results of the present study show that encoding and presenting complex knowledge using GMs leads to an effective transfer-oriented


knowledge learning. Thus, the results validate the design of GEtiT and confirm the effectiveness of the Gamified Knowledge Encoding. The effectiveness of the learning, however, was not affected by the audiovisual encoding of the learning content. Also, while GEtiT VR showed VR-specific usability issues, the comparison of GEtiT VR with the desktop version was limited. Hence, no generalizable conclusions can be drawn from the comparison of the learning effectiveness of both versions. However, the study indicated a higher learning quality for the VR version. This is a critical insight for the ongoing research of VR-based learning and for designers aiming to create effective serious games. Future work needs to be aimed at further evaluations of the knowledge encoding in GMs as proposed by the Gamified Knowledge Encoding. Also, new methods to realize the AT card GM in GEtiT VR need to be implemented and tested. This would allow for a comparison of the different visualization techniques and potentially reveal new insights about knowledge learning in immersive VR. Finally, instead of assessing the learning outcome with a paper-based exam only, the measurement could additionally be performed inside of GEtiT. This would allow for a more in-depth analysis of its training transfer.

ACKNOWLEDGMENTS

We would like to thank Mario Reutter and Jonathan Stahl for developing the first prototype of GEtiT. We would also like to thank David Heidrich for his work in overhauling GEtiT, changing its visual style, and developing GEtiT VR. This publication was funded by the German Research Foundation (DFG) and the University of Würzburg in the funding programme Open Access Publishing.


REFERENCES

1. K. S. Fu, R. Gonzalez, and C. S. G. Lee, Robotics: Control, Sensing, Vision and Intelligence, McGraw-Hill Education, New York, NY, USA, 1987.
2. J. Foley, A. van Dam, S. Feiner, and J. Hughes, “Computer graphics: principles and practice,” Addison-Wesley Systems Programming Series, Addison-Wesley, Reading, MA, USA, 1990.
3. S. Oberdörfer and M. E. Latoschik, “Interactive gamified 3D-training of affine transformations,” in Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST ’16), pp. 343–344, ACM Press, Garching, Germany, November 2016.
4. S. Oberdörfer and M. E. Latoschik, “… knowledge training using game mechanics,” in Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2018), Würzburg, Germany, September 2018.
5. S. Oberdörfer and M. E. Latoschik, “… knowledge training using game mechanics,” in Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2018), Würzburg, Germany, September 2018.
6. J. Rasmussen, “Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 13, no. 3, pp. 257–266, 1983.
7. P. N. Johnson-Laird, Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness, Harvard University Press, 1983.
8. C. J. Dede, J. Jacobson, and J. Richards, “Introduction: virtual, augmented, and mixed realities in education,” in Virtual, Augmented, and Mixed Realities in Education, D. Liu, C. J. Dede, R. Huang, and J. Richards, Eds., Smart Computing and Intelligence, pp. 1–16, Springer, Singapore, 2017.
9. S. Oberdörfer, D. Heidrich, and M. E. Latoschik, “…,” in Proceedings of the DeLFI and GMW Workshops 2017, C. Ullrich and M. Wessner, Eds., Chemnitz, Germany, 2017.
10. K. A. Ericsson, R. T. Krampe, and C. Tesch-Römer, “The role of deliberate practice in the acquisition of expert performance,” Psychological Review, vol. 100, no. 3, pp. 363–406, 1993.
11. J. P. Gee, What Video Games Have to Teach Us about Learning and Literacy, Palgrave Macmillan, New York, NY, USA, 1st edition, 2007.
12. J. McGonigal, Reality Is Broken: Why Games Make Us Better and How They Can Change the World, Penguin Press, New York, NY, USA, 1st edition, 2011.
13. M. Csikszentmihalyi, Flow: Das Geheimnis des Glücks, Klett-Cotta, Stuttgart, Germany, 15th edition, 2010.
14. D. Weibel and B. Wissmath, “Immersion in computer games: the role of spatial presence and flow,” International Journal of Computer Games Technology, vol. 2011, Article ID 282345, 14 pages, 2011.
15. B. Reeves, T. W. Malone, and T. O’Driscoll, “Leadership’s online labs,” Harvard Business Review, vol. 86, no. 5, pp. 58–66, 2008.
16. …, “… of leadership styles on team performance during a computer game training,” in Proceedings of the 9th International Conference of the Learning Sciences (ICLS ’10), vol. 1, pp. 524–531, ACM, Chicago, IL, USA, 2010.
17. P. Prax, “Leadership style in World of Warcraft raid guilds,” in Proceedings of DiGRA Nordic 2010, DiGRA, Stockholm, Sweden, 2010.
18. R. Schroeder, Being There Together: Social Interaction in Virtual Environments, Oxford University Press, New York, NY, USA, 1st edition, 2011.
19. L. Qiu, W. W. Tay, and J. Wu, “The impact of virtual teamwork on real-world collaboration,” in Proceedings of the International Conference on Advances in Computer Entertainment Technology (ACE ’09), pp. 44–51, ACM, Athens, Greece, October 2009.
20. S. Richter and U. Lechner, “Communication content relations to coordination and trust over time: a computer game perspective,” in Proceedings of the 5th International Conference on Communities and Technologies (C&T ’11), pp. 60–68, ACM, Brisbane, Australia, 2011.
21. B. D. Glass, W. T. Maddox, and B. C. Love, “Real-time strategy game training: emergence of a cognitive flexibility trait,” PLoS ONE, vol. 8, no. 8, Article ID e70350, 2013.
22. C. S. Green and D. Bavelier, “Action video game modifies visual selective attention,” Nature, vol. 423, no. 6939, pp. 534–537, 2003.
23. C. S. Green and D. Bavelier, “Action-video-game experience alters the spatial resolution of vision,” Psychological Science, vol. 18, no. 1, pp. 88–94, 2007.
24. E. F. Anderson, L. McLoughlin, F. Liarokapis, C. Peters, P. Petridis, and S. de Freitas, “Serious games in cultural heritage,” in Proceedings of the 10th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST), pp. 29–48, Malta, 2009.
25. S. de Freitas and F. Liarokapis, “Serious games: a new paradigm for education?” in Serious Games and Edutainment Applications, M. Ma, A. Oikonomou, and L. C. Jain, Eds., pp. 9–23, Springer-Verlag, London, UK, 2011.
26. L. A. Annetta, J. Minogue, S. Y. Holmes, and M.-T. Cheng, “Investigating the impact of video games on high school students’ engagement and learning about genetics,” Computers & Education, vol. 53, no. 1, pp. 74–85, 2009.
27. Y. Klisch, L. M. Miller, M. E. Beier, and S. Wang, “Teaching the biological consequences of alcohol abuse through an online game: impacts among secondary students,” CBE-Life Sciences Education, vol. 11, no. 1, pp. 94–102, 2012.
28. L. M. Miller, C.-I. Chang, S. Wang, M. E. Beier, and Y. Klisch, “Learning and motivational impacts of a multimedia science game,” Computers & Education, vol. 57, no. 1, pp. 1425–1433, 2011.
29. S. de Castell, J. Jenson, and N. Taylor, “Educational games: moving from theory to practice,” in Educational Gameplay and Simulation Environments: Case Studies and Lessons Learned, D. Kaufman and L. Sauvé, Eds., pp. 133–145, Information Science Reference, Hershey, PA, USA, 2010.
30. A. Gurr, “Video games and the challenge of engaging the ‘net’ generation,” in Educational Gameplay and Simulation Environments: Case Studies and Lessons Learned, D. Kaufman and L. Sauvé, Eds., pp. 119–131, Information Science Reference, Hershey, PA, USA, 2010.
31. H. C. Arnseth, “Learning to play or playing to learn - a critical account of the models of communication informing educational research on computer gameplay,” Game Studies, vol. 6, no. 1, 2006.
32. E. Brown and P. Cairns, “A grounded investigation of game immersion,” in Proceedings of the Extended Abstracts on Human Factors in Computing Systems (CHI ’04), pp. 1297–1300, ACM Press, Vienna, Austria, April 2004.
33. C. Jennett, A. L. Cox, P. Cairns et al., “Measuring and defining the experience of immersion in games,” International Journal of Human-Computer Studies, vol. 66, no. 9, pp. 641–661, 2008.
34. M. Power and L. Langlois, “Designing a simulator for teaching ethical decision-making,” in Educational Gameplay and Simulation Environments: Case Studies and Lessons Learned, D. Kaufman and L. Sauvé, Eds., pp. 146–158, Information Science Reference, Hershey, PA, USA, 2010.
35. M. Schulzke, “Moral decision making in Fallout,” Game Studies, vol. 9, no. 2, 2009.
36. E. Adams and J. Dormans, Game Mechanics: Advanced Game Design, New Riders, Berkeley, CA, USA, 2012.
37. M. Sicart, “Defining game mechanics,” Game Studies, vol. 8, no. 2, 2008.
38. A. Amory, K. Naicker, J. Vincent, and C. Adams, “The use of computer games as an educational tool: identification of appropriate game types and game elements,” British Journal of Educational Technology, vol. 30, no. 4, pp. 311–321, 1999.
39. A. C. Oei and M. D. Patterson, “Enhancing cognition with video games: a multiple game training study,” PLoS ONE, vol. 8, no. 3, Article ID e58546, 2013.
40. L. S. Colzato, W. P. M. van den Wildenberg, S. Zmigrod, and B. Hommel, “Action video gaming and cognitive control: playing first person shooter games is associated with improvement in working memory but not action inhibition,” Psychological Research, vol. 77, no. 2, pp. 234–239, 2013.
41. I. D. Cherney, “Mom, let me play more computer games: they improve my mental rotation skills,” Sex Roles, vol. 59, no. 11-12, pp. 776–786, 2008.
42. M. E. Joorabchi and M. S. El-Nasr, “Measuring the impact of knowledge gained from playing FPS and RPG games on gameplay performance,” in Proceedings of Entertainment Computing - ICEC 2011, pp. 300–306, Springer, Vancouver, BC, Canada, October 2011.
43. M. Pittalis and C. Christou, “Types of reasoning in 3D geometry thinking and their relation with spatial ability,” Educational Studies in Mathematics, vol. 75, no. 2, pp. 191–212, 2010.
44. G. Gittler and J. Glück, “Differential transfer of learning: effects of instruction in descriptive geometry on spatial test performance,” Journal for Geometry and Graphics, vol. 2, no. 1, pp. 71–84, 1998.
45. L. Freina and M. Ott, “A literature review on immersive virtual reality in education: state of the art and perspectives,” in Proceedings of the 11th eLearning and Software for Education (eLSE), no. 1, Bucharest, Romania, April 2015.
46. J. Martín-Gutiérrez, C. E. Mora, B. Añorbe-Díaz, and A. González-Marrero, “Virtual technologies trends in education,” EURASIA Journal of Mathematics, Science and Technology Education, vol. 13, no. 2, pp. 469–486, 2017.
47. J. A. Stevens and J. P. Kincaid, “The relationship between presence and performance in virtual simulation training,” Open Journal of Modelling and Simulation, vol. 3, no. 2, pp. 41–48, 2015.
48. M. Slater, “Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments,” Philosophical Transactions of the Royal Society of London B: Biological Sciences, vol. 364, no. 1535, pp. 3549–3557, 2009.
49. G. Makransky and L. Lilleholt, “A structural equation modeling investigation of the emotional value of immersive virtual reality in education,” Educational Technology Research and Development, vol. 66, no. 5, pp. 1141–1164, 2018.
50. M. Slater and S. Wilbur, “A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments,” Presence: Teleoperators and Virtual Environments, vol. 6, no. 6, pp. 603–616, 1997.
51. C. Dede, “Immersive interfaces for engagement and learning,” Science, vol. 323, no. 5910, pp. 66–69, 2009.
52. B. Weidenmann, “Multimedia, Multicodierung und Multimodalität beim Online-Lernen,” in Online-Lernen: Handbuch für Wissenschaft und Praxis, L. J. Issing and P. Klimsa, Eds., ch. 5, pp. 73–86, Oldenbourg, Germany, 2009.
53. H. Kaufmann, D. Schmalstieg, and M. Wagner, “Construct3D: a virtual reality application for mathematics and geometry education,” Education and Information Technologies, vol. 5, no. 4, pp. 263–276, 2000.
54. H. Kaufmann, “The potential of augmented reality in dynamic geometry education,” in Proceedings of the 12th International Conference on Geometry and Graphics, 2006.
55. M. Khan, F. Trujano, A. Choudhury, and P. Maes, “Mathland: playful mathematical learning in mixed reality,” in Proceedings of the Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montreal, QC, Canada, April 2018.
56. J. R. Anderson, Cognitive Psychology and Its Implications, Worth Publishers, 8th edition, 2015.
57. F. Nickols, “The knowledge in knowledge management,” The Knowledge Management Yearbook 2000-2001, pp. 12–21, 2000.
58. S. Oberdörfer and M. E. Latoschik, “Develop your strengths by gaming,” in Proceedings of the 43rd Annual German Conference on Informatics (Informatik 2013), M. Horbach, Ed., vol. P-220, pp. 2346–2357, Gesellschaft für Informatik, Koblenz, Germany, September 2013.
59. B. Dalgarno and M. J. W. Lee, “What are the learning affordances of 3-D virtual environments?” British Journal of Educational Technology, vol. 41, no. 1, pp. 10–32, 2010.
60. V. Kaptelinin and B. A. Nardi, “Affordances in HCI: toward a mediated action perspective,” in Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI 2012), pp. 967–976, ACM, Austin, TX, USA, May 2012.
61. T. Lynam, R. Mathevet, M. Etienne et al., “Waypoints on a journey of discovery: mental models in human-environment interactions,” Ecology and Society, vol. 17, no. 3, 2012.
62. Take-Two Interactive, “Kerbal Space Program,” 2011-2019, https://kerbalspaceprogram.com.
63. S. Oberdörfer and M. E. Latoschik, “Effective orbital mechanics knowledge training using game mechanics,” in Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2018), IEEE, Würzburg, Germany, September 2018.
64. D. Perry and R. DeMaria, David Perry on Game Design: A Brainstorming Toolbox, Course Technology, Boston, MA, USA, 2009.
65. D. Crookall, “Serious games, debriefing, and simulation/gaming as a discipline,” Simulation & Gaming, vol. 41, no. 6, pp. 898–920, 2010.
66. K. T. Dreifuerst, “The essentials of debriefing in simulation learning: a concept analysis,” Nursing Education Perspectives, vol. 30, no. 2, pp. 109–114, 2009.
67. A. R. Galloway, Gaming: Essays on Algorithmic Culture, vol. 18, University of Minnesota Press, 2006.
68. P. Salomoni, C. Prandi, M. Roccetti, L. Casanova, and L. Marchetti, “Assessing the efficacy of a diegetic game interface with Oculus Rift,” in Proceedings of the 13th IEEE Annual Consumer Communications & Networking Conference (CCNC 2016), pp. 387–392, 2016.
69. D. Medeiros, E. Cordeiro, D. Mendes et al., “Effects of speed and transitions on target-based travel techniques,” in Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST ’16), pp. 327-328, ACM, New York, NY, USA, November 2016, http://doi.acm.org/10.1145/2993369.2996348.
70. D. A. Bowman, R. P. McMahan, and E. D. Ragan, “Questioning naturalism in 3D user interfaces,” Communications of the ACM, vol. 55, no. 9, pp. 78–88, 2012.
71. S. Ghosh, C. S. Shruthi, H. Bansal, and A. Sethia, “What is user’s perception of naturalness? An exploration of natural user experience,” in Proceedings of Human-Computer Interaction - INTERACT 2017, R. Bernhaupt, G. Dalvi, A. Joshi, D. K. Balkrishan, J. O’Neill, and M. Winckler, Eds., vol. 10514 of Lecture Notes in Computer Science, pp. 224–242, Springer International Publishing, Cham, Switzerland, 2017.
72. J. J. LaViola Jr., E. Kruijff, R. P. McMahan, D. Bowman, and I. P. Poupyrev, 3D User Interfaces: Theory and Practice, Addison-Wesley Professional, 2017.
73. D. A. Bowman, E. Kruijff, J. J. LaViola Jr., and I. Poupyrev, “An introduction to 3-D user interface design,” Presence: Teleoperators and Virtual Environments, vol. 10, no. 1, pp. 96–108, 2001.
74. S. Oberdörfer, M. Fischbach, and M. E. Latoschik, “Effects of VE transition techniques on presence, illness, attention, …,” in Proceedings of the 6th Symposium on Spatial User Interaction, pp. 89–99, ACM, Berlin, Germany, October 2018.
75. E. Bozgeyikli, A. Raij, S. Katkoori, and R. Dubey, “Point & teleport locomotion technique for virtual reality,” in Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, pp. 205–216, Austin, TX, USA, October 2016.
76. G. Bruder and F. Steinicke, “Threefolded motion perception during immersive walkthroughs,” in Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology (VRST ’14), pp. 177–185, ACM, New York, NY, USA, 2014, http://doi.acm.org/10.1145/2671015.2671026.
77. T. Y. Grechkin, J. M. Plumert, and J. K. Kearney, “Dynamic affordances in embodied interactive systems: the role of display and mode of locomotion,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 4, pp. 596–605, 2014.
78. S. Oberdörfer, D. Heidrich, and M. E. Latoschik, “Usability of gamified knowledge learning in VR and desktop-3D,” in Proceedings of CHI 2019, ACM Press, Glasgow, UK, 2019.
79. J. Herrington and R. Oliver, “Critical characteristics of situated learning: implications for the instructional design of multimedia,” in Proceedings of the ASCILITE 1995 Conference, pp. 253–262, Melbourne, Australia, December 1995.
80. J. R. Anderson, L. M. Reder, and H. A. Simon, “Situated learning and education,” Educational Researcher, vol. 25, no. 4, pp. 5–11, 1996.
81. D. W. Shaffer, K. R. Squire, R. Halverson, and J. P. Gee, “Video games and the future of learning,” Phi Delta Kappan, vol. 87, no. 2, pp. 104–111, 2005.
82. A. Woolfolk, Educational Psychology, Allyn and Bacon, New Jersey, USA, 10th edition, 2008.
83. H. S. Barrows, “Problem-based learning in medicine and beyond: a brief overview,” New Directions for Teaching and Learning, vol. 1996, no. 68, pp. 3–12, 1996.
84. F. Dochy, M. Segers, P. Van den Bossche, and D. Gijbels, “Effects of problem-based learning: a meta-analysis,” Learning and Instruction, vol. 13, no. 5, pp. 533–568, 2003.
85. M. Wilhelm and D. Brovelli, “Problembasiertes Lernen (PBL) in der Lehrpersonenbildung: der Drei-Phasen-Ansatz der Naturwissenschaften,” Beiträge zur Lehrerinnen- und Lehrerbildung, vol. 27, no. 2, pp. 195–203, 2009.
86. C. E. Hmelo-Silver, R. G. Duncan, and C. A. Chinn, “Scaffolding and achievement in problem-based and inquiry learning: a response to Kirschner, Sweller, and Clark,” Educational Psychologist, vol. 42, no. 2, pp. 99–107, 2007.
87. Unity, “Unity 5.5.2p1,” March 2017, https://unity3d.com/unity/qa/patch-releases/5.5.2p1.
88. Valve Corporation, “SteamVR plugin,” 2015-2018, https://www.assetstore.unity3d.com/en/#!/content/32647.
89. Unity, “Unity Asset Store,” 2017, https://www.assetstore.unity3d.com.
90. K. M. Stanney, K. S. Kingdon, D. Graeber, and R. S. Kennedy, “Human performance in immersive virtual environments: effects of exposure duration, user control, and scene complexity,” Human Performance, vol. 15, no. 4, pp. 339–366, 2002.
91. R. S. Kennedy, N. E. Lane, K. S. Berbaum, and M. G. Lilienthal, “Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness,” The International Journal of Aviation Psychology, vol. 3, no. 3, pp. 203–220, 1993.
92. G. Norman, “Likert scales, levels of measurement and the ‘laws’ of statistics,” Advances in Health Sciences Education, vol. 15, no. 5, pp. 625–632, 2010.

SECTION 4: GAMES FOR SOCIAL AND HEALTH PURPOSES

CHAPTER 14

Hall of Heroes: A Digital Game for Social Skills Training with Young Adolescents

Melissa E. DeRosier and James M. Thomas

3C Institute, 4364 South Alston Avenue, Suite 300, Durham, NC 27713, USA

Citation: Melissa E. DeRosier, James M. Thomas, “Hall of Heroes: A Digital Game for Social Skills Training with Young Adolescents”, International Journal of Computer Games Technology, vol. 2019, Article ID 6981698, 12 pages, 2019. https://doi.org/10.1155/2019/6981698. Copyright: © 2019 by the authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Link: https://www.hindawi.com/journals/ijcgt/2019/6981698/

ABSTRACT

Traditional social skills training (SST) programs are delivered in person and suffer from significant time, financial, and opportunity barriers that limit their reach and potential benefits for youth. This paper describes the design and preliminary evaluation of Hall of Heroes, a digital game that presents SST through an engaging superhero-themed virtual story world. Participants were randomly assigned to complete the digital game (n = 15)


or to a waitlist control condition (n = 14) and were compared on parent-report measures of social emotional functioning. Youth who completed Hall of Heroes significantly improved in their abilities to relate to others (both peers and family members) as well as to accept affection and express emotions with others, compared to youth who did not complete the SST intervention. Further, youth in the treatment condition showed a significantly greater decline in feelings of anxiety, depression, and hopelessness than did youth in the control condition. Both parents and youth reported high levels of engagement in and acceptability of Hall of Heroes. This study adds to the research literature, supporting the potential of a game-based SST platform for effectively helping youth develop prosocial social problem-solving skills.
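The evaluation design summarized above, a randomized comparison of a treatment group (n = 15) against a waitlist control (n = 14) on pre- and post-intervention parent-report measures, boils down to comparing change scores between two independent groups. A minimal sketch with hypothetical scores (not the study's data), using Welch's t statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parent-report anxiety scores (higher = worse), pre and post.
# Treatment youth (n = 15) decline more than waitlist controls (n = 14).
pre_t = rng.normal(60, 5, 15); post_t = pre_t - rng.normal(8, 2, 15)
pre_c = rng.normal(60, 5, 14); post_c = pre_c - rng.normal(2, 2, 14)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

change_t = post_t - pre_t   # negative values indicate symptom decline
change_c = post_c - pre_c
t = welch_t(change_t, change_c)
print(t < 0)  # treatment declines more than control, so t is negative
```

In practice one would also report the p-value and an effect size; `scipy.stats.ttest_ind(change_t, change_c, equal_var=False)` would give the same statistic with its p-value.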

INTRODUCTION

Social Relationships in Early Adolescence

The early adolescent period is a turbulent and stressful one for youth, who must adapt and adjust to a myriad of concurrent life changes. At the same time as significant physical and cognitive growth is occurring, youth move from the smaller, more protective elementary school setting, with smaller classrooms and strong teacher support, to the larger, more impersonal middle school environment. In late elementary and middle school, academic demands increase considerably, with more difficult assignments, higher expectations for independence, and more stringent grading [1, 2]. Concomitant with escalating academic demands, the social environment shifts from smaller, more protective settings to much larger, varied, and competitive social groups [3–6]. The turbulence of these academic, social, and developmental shifts, combined with a decline in adult supervision, leads to a period of heightened vulnerability. For many youth, the early adolescent period is associated with loss of self-esteem, increased anxiety and depression, and drops in academic performance [3, 7]. Adolescents with lower social skills (e.g., poorer social problem solving) and poorer peer relations are at greater risk of academic failure, behavioral problems, and emotional difficulties [8–11]. In contrast, positive peer relationships and prosocial behaviors serve as protective factors, helping youth adjust more quickly to changing demands and fostering greater school engagement and academic success [2, 3, 8].


Social Skills Training

Social skills training (SST) interventions have repeatedly been shown to significantly improve social skills, behavior, and interpersonal relationships [12–15]. Effective SST can serve a protective function, as youth learn and practice those social skills that increase prosocial and/or inhibit maladaptive behaviors [16, 17]. For example, cooperation fosters companionship, a key function of peer relations, while impulsive and domineering behaviors undermine the development and maintenance of positive social relationships. As Peterson et al. [7] explain, “Improvements in social skills and problem-solving abilities allow the preadolescent to build a supportive network and engage in appropriate social activities that may ultimately contribute to reduced school drop-out rates as well as other indices of improved adaptive functioning” (p. 289). Effective SST programs that improve social problem solving and social skills are essential for preparing youth for the transition to middle school [8, 10]. Unfortunately, SST is traditionally delivered in person by trained providers, which imposes significant financial, travel, and time commitments that too often prevent youth from participating in and benefitting from SST. Poor implementation fidelity (e.g., nonadherence to the in-person treatment protocol) has also been shown to undermine outcomes for youth [7, 18]. And, lastly, SST is typically delivered in a group format with one provider to many youths. Research suggests tailoring interventions to the individual may be necessary to achieve intended gains in social and academic outcomes [19, 20].

Games for Impact

To help overcome the limitations of in-person SST, a growing number of intervention developers are employing emerging technologies [21]. Game-based platforms offer a cost-effective means to deliver program content in an accessible and engaging way, eliminate travel and provider staffing and training requirements, and standardize program delivery on a broad scale [19] (Merry et al., 2012). When digital games are dynamic and individualized, the user benefits from increased practice opportunities as well as a more tailored and personalized learning experience [19, 22–24]. Further, the engaging and interactive nature of games, and the relative anonymity of digital learning environments, means youth are often motivated to more fully and openly


participate in game-based SST compared to traditional in-person SST [25, 26]. There is growing support for the effectiveness of digital games for social, emotional, and behavioral development [21, 27–30]. Digital platforms have been shown to increase comprehension and recall of presented material over conventional instructional methods and produce greater generalization of acquired skills to real-life behavior (Clark et al., 2016) [31, 32]. Further, there is increasing evidence that digital environments are especially effective at targeting higher-order thinking skills of analysis, synthesis, and evaluation, which form the basis of social problem solving [28, 33].

The Development of Hall of Heroes

Drawing on the foundational principles of SST and digital game-based learning, 3C Institute developed Hall of Heroes, an online single-player point-and-click adventure game wherein the youth navigate a virtual school-like world, engaging with other characters to solve tailored social problem-solving tasks. As illustrated in Figures 1 and 2, the full Hall of Heroes software includes a personalized avatar builder, a gameplay training scene, six baseline assessment scenes, seven tutorial scenes, a final skill mastery application scene, six cinematic cut scenes, and seven knowledge monitoring quizzes. Each of the first six scenes serves as an introduction to and baseline measure of one of Hall of Heroes’ six core social skills: impulse control, communication, cooperation, social initiation, empathy, and emotion regulation. Each of the seven tutorial scenes provides dynamic instructional content focusing on two or three particular social problem-solving strategies for these core skills (see Table 1 for scene overviews and learning objectives). The skill concepts covered in these scenes are reinforced in the knowledge assessments, where players are asked a series of application questions relevant to the skill nuances explored in the preceding scenes and provided with dynamic feedback based on their answers.
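The knowledge monitoring quizzes described above, application questions with feedback keyed to the chosen answer, can be sketched as a simple data structure. The names and the sample item below are hypothetical illustrations, not taken from Hall of Heroes:

```python
from dataclasses import dataclass, field

@dataclass
class QuizItem:
    """One application question in a knowledge monitoring quiz."""
    prompt: str
    options: list
    correct: int       # index of the correct option
    feedback: dict     # option index -> tailored feedback text

@dataclass
class KnowledgeQuiz:
    skill: str
    items: list = field(default_factory=list)

    def answer(self, item_index, choice):
        item = self.items[item_index]
        correct = choice == item.correct
        # Dynamic feedback: the response shown depends on the chosen option.
        return correct, item.feedback.get(choice, "Let's review that strategy again.")

quiz = KnowledgeQuiz(
    skill="Impulse Control",
    items=[QuizItem(
        prompt="A classmate grabs the part you were using. What should you do first?",
        options=["Grab it back", "Stop and think", "Yell"],
        correct=1,
        feedback={1: "Right: stopping and thinking prevents impulsive reactions.",
                  0: "Grabbing it back escalates the conflict.",
                  2: "Yelling is an impulsive response."},
    )],
)
ok, msg = quiz.answer(0, 1)
print(ok)  # True
```

Keying feedback to each distractor, rather than issuing a generic "incorrect", is what makes the feedback "dynamic": every wrong answer teaches why that particular strategy fails.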


Table 1: Overview of Hall of Heroes’ seven social emotional tutoring scenes with skill areas and learning objectives.

The Replica
Overview: Players work with two other students to put together a time machine replica during science class.
Skill areas: Social Initiation, Cooperation, Impulse Control.
Learning objectives: Identify the correct group to initiate with and do so appropriately; cooperate effectively, including negotiating about how to divide work; utilize available resources and organizational skills to effectively manage the assigned task.

PowerFlip
Overview: Players spend their free period attempting to join in with two different pairs of students playing a card game.
Skill areas: Social Initiation, Emotion Regulation.
Learning objectives: Determine where to stand when joining in; identify the appropriate person to talk to without interrupting; identify their own emotions after being rejected; choose which group to re-join after rejection.

Hero Simulation
Overview: Players work with a group of fellow students to produce a hero training video.
Skill areas: Cooperation, Communication.
Learning objectives: Negotiate and compromise with another student to determine project roles; coach other students in expressive nonverbal and verbal communication, focusing on tone of voice, facial expressions, and body language.

Fire and Ice
Overview: Players hone key interpersonal skills during their Hero-Civilian Relations class by playing three mini-games in a virtual reality simulator.
Skill areas: Impulse Control, Empathy, Emotion Regulation.
Learning objectives: Demonstrate impulse control in a response inhibition simulation; appropriately label dialog options based on their anticipated impact on another student; navigate three “choose your own adventure” style activities that require stopping and thinking in highly emotional situations.


Computer Games Technology

The Escape
Overview: Students have just witnessed the evil villain Dr. Klepto steal the teachers’ superpowers. Players must manage the fallout and rally other students to save the school.
Skill areas: Emotion Regulation, Communication, Empathy.
Learning objectives: Regulate their own emotions with an activity (e.g., deep breathing and positive self-talk); use receptive communication skills to determine the emotional states of other students; help those students regulate their emotions using appropriate activities.

Surveillance
Overview: Players watch security footage to find out what is going on at Hall of Heroes, and then work with other students to clarify details and form a theory and timeline of what happened.
Skill areas: Empathy, Impulse Control.
Learning objectives: Observe and interpret other characters’ emotions in context; balance attention to detail and time constraints in a high-stakes situation; distinguish facts from assumptions.

The Mission
Overview: Players and other students work together to formulate a plan to travel back in time to stop Dr. Klepto and save the school.
Skill areas: Impulse Control, Cooperation, Communication.
Learning objectives: Work with other students to make and implement an action plan, which includes identifying the best team members for each step.

Figure 1: Overview of the Hall of Heroes program components.


Figure 2: Example screenshots of Hall of Heroes game components.

Hall of Heroes was developed by a multidisciplinary team comprising clinical and developmental psychologists, SST intervention developers, computer programmers, artists and animators, and experts in computer games for learning. The development process followed best practices in educational game design, with thoughtful consideration for the unique social and developmental challenges of early adolescence [23, 24, 28, 32, 34]. These considerations led us to employ multiple gameplay styles, varying the settings and backgrounds for each scene, and to incorporate a compelling storyline through both playable scenes and cinematic cut scenes, resulting in a motivating and engaging game environment. Figure 3 presents a sample of the range of gameplay experiences in Hall of Heroes, which include simulations, drill-and-practice mini-games, opportunities for open exploration, dialog-driven content, and action-based challenges. By incorporating direct instruction, varied practice exercises, dynamic feedback, and self-directed elements, the game is able to effectively target multiple modes of learning [23, 26, 34, 37].

Figure 3: Example screenshots of Hall of Heroes gameplay.


Situating the game in a school-like setting allows for the inclusion of tasks that directly address the routine aspects of middle school, such as changing classes and using a locker, which repeatedly come up as areas of pretransition concern and anxiety for youth [7, 38]. In addition, the school setting allowed us to simulate challenging social situations unique to a school environment, such as choosing a table for lunch and understanding when and how to engage a teacher for help in a bullying situation. Incorporating these common transition experiences into a superhero world serves to balance fantasy and realism, maintaining engagement and allowing for deeper learning and skills transfer [32]. To further develop this rich narrative world, Hall of Heroes is populated with a large and diverse cast of nonplayable characters (NPCs) which serve narrative and pedagogical roles that mirror traditional instructional roles. Experts in educational game design agree that the inclusion of interactive embodied social pedagogical agents enhances student motivation and engagement and improves learning outcomes, including retention and skills transfer (e.g., [39–41]). NPCs as embodied pedagogical agents support the social cognitive aspects of learning, serving as an interactive bridge to help players understand complex concepts [39, 42]. Rather than simply presenting information, the NPCs in Hall of Heroes engage and motivate players through direct experiences with the game world to actively participate in the construction of social problem-solving knowledge, for example, by guiding players to look for clues in school security footage in order to check their assumptions about a situation. NPCs also serve to model and guide players in strategies such as deep breathing and positive self-talk through conversation and branching dialog menus.

In the current study, we evaluated the usability and acceptability of the Hall of Heroes game for SST using a randomized control design. We expected youth who interacted with Hall of Heroes to show improved social, emotional, and behavioral outcomes compared to youth in the waitlist control condition who did not complete the game.


METHODS

Participants

Participants were recruited via postings on national parenting, game research, and school listservs and social media sites. Interested parents completed an online survey to determine eligibility. To be eligible to participate, youth must have (1) been attending the 5th or 6th grade; (2) received instruction in a regular education classroom for at least 40% of the school day; (3) been English language proficient; and (4) had access to a computer with Internet use at home. Eligible participants (N = 32) were randomly assigned to either the treatment (TX) or waitlist control (WLC) condition. Nine percent of participants attrited over the course of the study (i.e., failed to complete post-data collection). There were no significant differences between participants who completed the study and those who did not based on age, race, gender, or study condition. The final sample had a mean age of 11.35 years (SD = 0.88), with 76% male and 31% representing a racial or ethnic minority group. These characteristics did not differ significantly between the two conditions.

Measures

Positive Behavioral and Emotional Skills. Youth behavioral and emotional strengths were measured using the Behavioral and Emotional Rating Scale, Second Edition (BERS-2 [43]), a 58-item strength-based assessment that measures youth positive emotions, behaviors, and interpersonal skills. Parents reported how much statements such as “acknowledges painful feelings,” “considers consequences of own behavior,” and “completes homework regularly” were like their child over the past 30 days on a 4-point scale ranging from 0 (not at all) to 3 (very much). The BERS-2 composite Strength Index was calculated by summing across all items and converting this raw score to a standard score (M = 100, SD = 15). Further, the composite score was broken down into its five subscales: interpersonal strength (strengths in how the child relates to others), affective strength (strengths in how the child accepts affection and expresses emotions), family involvement (strengths in the child’s relationships with family members), intrapersonal strength (the child’s internal emotional strengths and outlook on his or her own competencies and accomplishments), and school functioning


(strengths in completing school-related tasks). Raw scores for each subscale were computed by summing scores for all relevant items and converting them to standard scores (M = 10, SD = 3). Higher scores reflect greater behavioral and emotional strength in that area. The BERS-2 Strength Index demonstrated good reliability (Cronbach’s α = .97), and prior factor analyses have established the measure’s content, criterion, and construct validity [43].

Psychosocial Distress. Youth psychosocial distress was measured using the 64-item parent-report Youth Outcomes Questionnaire, Second Edition (Y-OQ [44]). Parents were asked to think about their child over the past 7 days and report how true statements were for their child on a 5-point scale from 0 (never or almost never) to 4 (almost always or always). The Y-OQ Total Distress score was computed by summing across all items, and respective items were combined for six subscales: intrapersonal distress (18 items covering anxiety, depression, and hopelessness), somatic (8 items including headaches, dizziness, and stomachaches), interpersonal relations (10 items, including communication problems), social problems (8 delinquent or aggressive behavior items), behavioral dysfunction (11 items covering organization, concentration, handling frustration, and ADHD-related symptoms), and critical items (9 items consisting of symptoms often found in youth receiving inpatient services, such as paranoid ideation, hallucinations, mania, and suicidal feelings). Higher scores on each scale indicate higher levels of psychosocial distress. The Y-OQ showed high reliability (Cronbach’s α = .97), and its publishers detail several studies establishing content, criterion, and construct validity with both clinical and nonclinical (community) samples [44].

Game Evaluation. Following completion, youth and parents were asked to provide feedback on the Hall of Heroes program via evaluation surveys. Youth were asked to rate how much they agreed with statements regarding their engagement with and liking of Hall of Heroes. Parents rated the extent to which they agreed with statements such as “I would recommend Hall of Heroes to other parents of 5th or 6th grade youth” and “Hall of Heroes was easy for my child to understand.” Youth and parent evaluations were made using a 5-point scale with 1 = strongly disagree and 5 = strongly agree. Open-ended responses were also collected to gather more detailed feedback on youth and parent impressions of the SEL intervention.
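The raw-to-standard-score conversion described above (M = 100, SD = 15 for the composite; M = 10, SD = 3 for subscales) amounts to a z-transform against norm-sample statistics. The following is a minimal sketch of that arithmetic; the norm mean and SD values are illustrative placeholders, not the published BERS-2 norms, which use age-based lookup tables.

```python
def to_standard_score(raw, norm_mean, norm_sd, target_mean=100.0, target_sd=15.0):
    """Rescale a raw score to a standard-score metric via a z-transform."""
    z = (raw - norm_mean) / norm_sd
    return target_mean + target_sd * z

# Composite-style scaling (M = 100, SD = 15); norm values are placeholders
print(to_standard_score(130, norm_mean=116, norm_sd=14))  # 115.0

# Subscale-style scaling (M = 10, SD = 3)
print(to_standard_score(24, norm_mean=21, norm_sd=6, target_mean=10.0, target_sd=3.0))  # 11.5
```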


Procedures

Precautions were taken to ensure study ethics and protection of human subjects. The study protocol was approved by 3C Institute’s institutional review board. Parent consent and youth assent were obtained from all participants prior to participation. All parts of the study were completed online through a secure project website. All parents in the TX and WLC groups completed the BERS-2 and Y-OQ at pretest. The week following pretest data collection, youth in the TX group began interacting with Hall of Heroes. The game was released in four segments, with one segment released per week for four weeks; youth had one week to complete each segment. Within one week following the TX group’s four-week interaction with the program, parents in both the TX and WLC groups completed the same set of outcome measures again, as posttest measures, and youth and parents in the TX group completed product evaluation surveys.

RESULTS AND DISCUSSION

Preliminary analyses were conducted using a Multivariate Analysis of Variance (MANOVA) with condition as the between-subjects factor to determine whether scores on the BERS-2 and the Y-OQ differed at baseline (preintervention) across the two conditions. These analyses revealed no significant difference between groups at baseline on any subscale of the BERS-2 or the Y-OQ. However, given the small sample size, we elected to use an Analysis of Covariance (ANCOVA) approach in order to control for baseline scores when analyzing intervention effects [45, 46].
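The baseline-adjusted comparison described above can be sketched in a few lines: regress the posttest score on a condition dummy plus the pretest covariate, and test the condition coefficient. This is a minimal illustration with simulated data, not the study’s dataset; the sample sizes, score distributions, and assumed 8-point treatment effect are invented for demonstration.

```python
import numpy as np

# Illustrative ANCOVA via OLS: posttest ~ condition + pretest. Data simulated.
rng = np.random.default_rng(0)
n = 16                                  # per condition (illustrative)
pre = rng.normal(100.0, 15.0, 2 * n)    # pretest standard scores
tx = np.repeat([1.0, 0.0], n)           # 1 = treatment, 0 = wait-list control
post = pre + 8.0 * tx + rng.normal(0.0, 5.0, 2 * n)  # assumed TX effect

# Design matrix: intercept, condition dummy, pretest covariate
X = np.column_stack([np.ones(2 * n), tx, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

# t statistic for the baseline-adjusted treatment effect (beta[1])
resid = post - X @ beta
dof = 2 * n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_condition = beta[1] / se[1]
print(f"adjusted treatment effect = {beta[1]:.2f} (t({dof}) = {t_condition:.2f})")
```

Controlling for the pretest covariate absorbs baseline variability, which is what makes ANCOVA more powerful than a raw posttest comparison in small samples.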

Positive Behavioral and Emotional Skills

As can be seen in Table 2, results of the ANCOVA revealed significant improvements in positive behavioral and emotional skills for youth in the treatment condition, as measured by the BERS-2. As expected, parents reported that youth who completed Hall of Heroes showed significantly greater growth in their total Strength Index (a measure of youth overall behavioral and emotional strengths) compared to WLC youth who did not complete the intervention. Further, when this index was broken down into its composite subscales, youth in the treatment condition showed particular gains in three areas: interpersonal strength, affective strength, and family involvement. Thus, youth who completed Hall of Heroes significantly improved in their abilities to relate to others (both peers and family members) as well as to accept affection and express emotions with others, whereas youth who did not complete Hall of Heroes showed essentially no change in these areas. These results were expected given that Hall of Heroes directly targets social skills that are critical for positive interpersonal relationships, including communication, cooperation, and social initiation, as well as emotional awareness and emotion regulation skills, which would directly impact the youth’s affective abilities.

Table 2: Summary of relevant descriptive information, ANCOVA statistics, and effect sizes for parent-reported outcomes on the BERS-2.

No significant treatment effect was found for the BERS-2 intrapersonal strength subscale. It may be that the duration of the study was too brief (4 weeks) to impact youth’s internal emotional strengths and opinions of their own competencies and accomplishments, which may take more time and real-world experience to alter than was possible within this study. No significant treatment effect was found for the BERS-2 school functioning subscale either. However, Hall of Heroes does not directly address academics or school-related tasks, so a lack of treatment effect for this subscale may be expected. While intrapersonal strength and school functioning were not directly targeted by the Hall of Heroes intervention, improvements for these subscales may emerge with greater exposure to the intervention and/or more time for these more distal outcomes to be impacted by the youth’s social skill gains. As the youth practice and implement the social skills taught in Hall of Heroes, benefits in these more distal areas may follow. Future research with a longer intervention period could directly test this hypothesis.

Psychosocial Distress

As can be seen in Table 3, ANCOVA results demonstrated TX youth experienced significantly lower overall psychosocial distress on the Y-OQ at postintervention compared to WLC youth, after controlling for preintervention scores. This result suggests that Hall of Heroes’ bolstering of social emotional skills may be key to helping youth cope with stressful situations. Results further indicated that this overall significant effect was largely due to significant improvement in intrapersonal distress for youth in the TX condition. Thus, youth who completed Hall of Heroes showed a significantly greater decline in feelings of anxiety, depression, and hopelessness than did youth who did not participate in the intervention.

Table 3: Summary of relevant descriptive information, ANCOVA statistics, and effect sizes for parent-reported outcomes on the Y-OQ.

Subscale / Condition           Pre M (SE)     Post M (SE)    F      p      Effect Size (partial η²)
Overall Distress                                             4.57   .042   .15
  Treatment                    82.07 (6.46)   54.20 (7.06)
  Wait-list Control            73.79 (9.76)   65.57 (8.89)
Intrapersonal Distress                                       8.23   .008   .24
  Treatment                    29.47 (2.27)   18.73 (2.72)
  Wait-list Control            24.29 (2.75)   21.71 (2.81)
Somatic                                                      3.58   .070   .12
  Treatment                     9.73 (1.35)    6.07 (1.34)
  Wait-list Control             8.29 (1.58)    7.79 (1.52)
Critical Items                                               2.86   .103   .10
  Treatment                     9.53 (1.75)    5.33 (.92)
  Wait-list Control             8.43 (1.92)    6.64 (1.30)
Interpersonal Relationships                                   .99   .329   .04
  Treatment                     8.07 (1.36)    5.00 (1.83)
  Wait-list Control             9.43 (1.61)    7.93 (1.62)
Social Problems                                               .98   .332   .04
  Treatment                     3.80 (1.23)    2.60 (1.20)
  Wait-list Control             5.14 (1.42)    4.71 (1.49)
Behavioral Dysfunction                                        .24   .632   .01
  Treatment                    21.47 (1.93)   16.47 (1.54)
  Wait-list Control            18.21 (2.19)   16.79 (1.73)

While these differences did not reach statistical significance, there were trends suggesting youth who completed Hall of Heroes experienced fewer symptoms of somatic distress (e.g., headaches, stomachaches, and dizziness) and fewer critical psychological symptoms (e.g., thoughts of suicide, paranoia, and mania) compared to youth who did not complete the intervention. We know, from the literature, that youth who experience social problems often develop somatic complaints, either as a function of the anxiety associated with interpersonal problems or as a coping mechanism to avoid socially stressful situations, such as school [9]. The trend towards fewer somatic symptoms may be an indirect effect of the overall lessening of psychosocial distress in the area of anxiety for youth in the treatment condition. Similarly, lower psychosocial distress in the areas of depression and hopelessness would be expected to decrease the level of serious psychological (critical) symptoms experienced by a youth.

No significant between-group differences were found on the Y-OQ for the interpersonal relationships or social problems subscales. This stands in contrast to the significant effect for the interpersonal strength subscale on the BERS-2, reported above. However, for the Y-OQ, the interpersonal relations subscale focuses on communication problems, and the social problems subscale focuses on delinquent and aggressive behavior problems. While greater (but nonsignificant) improvements in these problem areas were seen for youth in the TX condition compared to those in the WLC condition, these areas represent antisocial behavior problems that are not the primary focus of Hall of Heroes. Hall of Heroes focuses on teaching prosocial skills and behaviors and positive social problem solving. We know that antisocial behavior problems are more resistant to change [13] and that positive impact on antisocial behavior problems may take longer than the 4-week intervention period of this study. It may be that declines in these antisocial behavior problems would emerge with greater exposure to Hall of Heroes as well as greater real-world experience using prosocial skills that are antithetical to antisocial behaviors.

With regard to the behavioral dysfunction subscale of the Y-OQ, this subscale focuses on ADHD-related symptoms, such as disorganization, poor concentration, and trouble handling frustration, which are not directly addressed by the Hall of Heroes intervention. Thus, it is not too surprising that no significant treatment effect was found for this subscale. However, while Hall of Heroes is not designed as a treatment tool for any specific clinical condition (e.g., Attention-Deficit/Hyperactivity Disorder), it is possible that improvements in these types of behavioral problem areas may arise over time as youth practice new social emotional skills in the real world. Future longitudinal research would be needed to test whether group differences in antisocial and ADHD-related behavior problems emerge over time for youth.

Hall of Heroes Evaluation

Because Hall of Heroes was designed to be used independently by youth and because greater engagement in intervention is associated with greater skill gains [47], it was important to examine how youth evaluated the game. In general, as can be seen in Table 4, both youth and parents rated Hall of Heroes favorably across all areas. Parents supported Hall of Heroes as engaging, and youth reported they found the game fun and enjoyed both the characters and story. Also of note were youths’ high marks for interest in playing more games like Hall of Heroes and parents’ belief that Hall of Heroes is an effective tool for teaching social skills to youth.

Table 4: Means (SD) of youth and parent evaluations of Hall of Heroes.

Youth Ratings                                                                     Mean (SD)
Overall, I liked Hall of Heroes.                                                  4.23 (.83)
I would like to play more games like Hall of Heroes.                              4.38 (.65)
I thought Hall of Heroes was fun to play.                                         4.31 (.85)
I liked the characters in Hall of Heroes.                                         4.23 (.92)
I liked the pictures/graphics in Hall of Heroes.                                  3.92 (1.38)
I liked the story in Hall of Heroes.                                              4.15 (1.14)

Parent Ratings                                                                    Mean (SD)
Hall of Heroes is of high overall quality.                                        4.00 (.65)
Hall of Heroes was fun and engaging for my child.                                 4.27 (.80)
Hall of Heroes was easy for my child to understand.                               4.00 (.85)
My child’s overall experience with Hall of Heroes was positive.                   4.13 (.83)
Hall of Heroes is an effective tool for teaching students social skills.          4.20 (.68)
Hall of Heroes is an effective tool for preparing students for middle school.     4.27 (.59)
I would recommend Hall of Heroes to other parents of 5th or 6th grade students.   4.33 (.82)

Note. All ratings were made on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree).

Follow-up requests for open-ended feedback helped to put these ratings into context. When the youth were asked what they learned in Hall of Heroes, nearly all noted at least one of the program goals, including “cooperation with others,” “respect for others,” “how to deal with feelings,” and “communication and working together,” to name a few. Representative comments from parents included “my child learned about working together for a common goal,” “my child had fun enough to keep her wanting to play, yet helped her with some new ideas about how to interact with others,” and “there were more than few scenarios that my child encountered that forced him to reevaluate and change directions,” among other comments. Further, parents reported they were likely to recommend Hall of Heroes to other parents of adolescents.

CONCLUSION

The results of this study provide support for the usability and acceptability of Hall of Heroes as well as initial evidence of its potential as a game-based social skills training (SST) intervention for adolescents. High ratings from both youth and parents, and the positive open-ended feedback, support the idea that the digital game Hall of Heroes is an acceptable and engaging way to implement SST. Also encouraging was the fact that all participants were able to complete the intervention in the allotted time, and there were no reports of technical or user errors with the software. These findings indicate youth are able to complete Hall of Heroes independently and are sufficiently engaged in the program to maintain interest and motivation throughout SST.


Study results also provide preliminary support for the efficacy of participation in Hall of Heroes for increased interpersonal and emotional strength and decreased psychosocial distress, key ingredients to preparing youth for the transition to middle school. Despite the small sample size and brief intervention period, youth who completed the game-based SST showed improvements in multiple areas. In fact, those areas found to change significantly (i.e., lower intrapersonal distress, greater interpersonal and affective strengths) showed moderate to high effect sizes, in line with those reported for established school-based SST programs (see [13]).

This work supports the growing body of evidence showing that digital games are potentially effective for delivering a wide array of psychotherapeutic content, including SST. Given the importance of social skills as a protective force in early adolescence, particularly in helping youth cope with the stressful transition to middle school, the need for effective intervention methods that can overcome the barriers to traditional SST programs becomes increasingly urgent. Hall of Heroes and programs like it, which deliver dynamic, personalized, and engaging social skills instruction, offer a low-cost, accessible, and easy-to-scale alternative to reach youth with effective SST on a broader scale than ever before.

Limitations and Future Directions

This initial study of the Hall of Heroes game-based SST intervention adds to the growing literature indicating digital games can effectively address a wide array of social, emotional, and behavioral concerns (see review in [21]). This pilot test represents the first step in evaluating the efficacy of Hall of Heroes specifically and, as such, a number of limitations should be considered. First, our sample size was quite small, which limits statistical power and our ability to investigate potentially informative subgroup differences (e.g., grade or gender differences) and also can potentially limit generalizability of findings. It is important to note that while the sample was recruited from the broader community with no requirement that youth have a preexisting social skills problem, it appears that parents who signed up were at least concerned about their child’s social skills or social functioning. Examination of baseline levels of each outcome measure showed elevated distress in all areas of the Y-OQ and below average strengths on the BERS-2 for both TX and WLC youth (no group differences were present at baseline). Given what appears to be a self-selection bias by parents for participation in this intervention study, results should be considered generalizable to adolescents in need of


social skills intervention, rather than youth more generally. Future studies employing a larger sample size are needed to confirm treatment effects and investigate potential differential treatment response by subgroup, including youth with varying degrees of baseline social difficulties.

Second, the current study relied on parent report for outcome measures. Given our design, it was not possible to include independent measures where the reporter was not aware of the intervention condition of youth. Because parents knew whether their child completed the intervention, their expectations may have influenced their reported behavioral changes. However, it is important to note that the pattern of results was not equivalent across all subscales. If biases were solely responsible for parental ratings, we would expect all subscales to be equally impacted. Instead, those subscales that were most closely related to the content taught through Hall of Heroes (prosocial skills, positive social problem solving) were the ones most significantly impacted by participation in the intervention. Outcome areas which are not directly targeted by the intervention (e.g., school-related performance, antisocial behavior problems, and attentional problems), or which may require a longer time period to change, showed no significant effects in this study. Future investigations with multiple informants, including informants who are blind to treatment condition, such as teachers, as well as a longer longitudinal design, would be important for clarifying the changes observed by parents in the current study.

Third, the brief intervention period meant data collection and intervention delivery occurred during the spring semester for all participants. Further research is needed to examine whether time of year impacts treatment effects. Studies which compare timing relative to the transition to middle school (e.g., late 5th grade versus early or late 6th grade) and include measurement beyond immediate postintervention would be useful for determining the optimal conditions for delivering the intervention.

Data Availability

Individuals interested in the data generated through this study or gaining access to the Hall of Heroes game should contact the first author.

Conflicts of Interest

The authors acknowledge possible conflicts of interest related to this study. While the authors do not directly benefit financially from the results of this study, both authors are affiliated with the company that sells Hall of Heroes (https://www.Centervention.com) and as such may indirectly benefit in the future from the results of the study. They have administered this study in accordance with ethical research guidelines and do not believe this potential conflict has impacted the reported results.

Acknowledgments

This research was supported by the US Department of Education, Institute of Education Sciences, through contract ED-IES-13-C-0041. The authors would like to thank the parents and youth who participated in this research as well as the Hall of Heroes development team for their contributions to this program. In addition, the authors would like to thank Dr. Ashley Craig and Emily Brown for their contributions to earlier drafts of this paper as well as their contributions to the development and testing of Hall of Heroes.


REFERENCES

1. A. Bellmore, “Peer Rejection and Unpopularity: Associations With GPAs Across the Transition to Middle School,” Journal of Educational Psychology, vol. 103, no. 2, pp. 282–295, 2011.
2. K. R. Wentzel and K. Caldwell, “Friendships, Peer Acceptance, and Group Membership: Relations to Academic Achievement in Middle School,” Child Development, vol. 68, no. 6, pp. 1198–1209, 1997.
3. J. W. Aikins, K. L. Bierman, and J. G. Parker, “Navigating the transition to junior high school: The influence of pre-transition friendship and self-system characteristics,” Social Development, vol. 14, no. 1, pp. 42–60, 2005.
4. M. E. Gifford-Smith and C. A. Brownell, “Childhood peer relationships: Social acceptance, friendships, and peer networks,” Journal of School Psychology, vol. 41, no. 4, pp. 235–284, 2003.
5. K. H. Rubin, W. M. Bukowski, and J. G. Parker, “Peer interaction, relationships, and groups,” in Handbook of Child Psychology: Vol. 3. Social, Emotional, and Personality Development, N. Eisenberg, W. Damon, and R. M. Lerner, Eds., pp. 571–645, Wiley, New York, NY, USA, 6th edition, 2006.
6. L. Zarbatany, D. P. Hartmann, and D. B. Rankin, “The Psychological Functions of Preadolescent Peer Activities,” Child Development, vol. 61, no. 4, pp. 1067–1080, 1990.
7. M. A. Peterson, E. B. Hamilton, and A. D. Russell, “Starting well: Facilitating the middle school transition,” Journal of Applied School Psychology, vol. 25, no. 3, pp. 286–304, 2009.
8. J. N. Kingery, C. A. Erdley, and K. C. Marshall, “Peer acceptance and friendship as predictors of early adolescents’ adjustment across the middle school transition,” Merrill-Palmer Quarterly, vol. 57, no. 3, pp. 215–243, 2011.
9. J. Kupersmidt and M. E. DeRosier, “How peer problems lead to negative outcomes: An integrative mediational model,” in Children’s Peer Relations: From Development to Intervention, J. B. Kupersmidt and K. A. Dodge, Eds., pp. 119–138, American Psychological Association, Washington, DC, USA, 2004.
10. J. Parker, K. Rubin, S. Erath, J. Wojslawowicz, and A. Buskirk, “Peer relationships, child development, and adjustment: A developmental psychopathology perspective,” in Developmental Psychopathology, D. Cicchetti and D. J. Cohen, Eds., pp. 419–493, Wiley, New York, NY, USA, 2006.
11. K. R. Wentzel, “Sociometric status and adjustment in middle school: A longitudinal study,” Journal of Early Adolescence, vol. 23, no. 1, pp. 5–28, 2003.
12. M. E. DeRosier, “Building Relationships and Combating Bullying: Effectiveness of a School-Based Social Skills Group Intervention,” Journal of Clinical Child & Adolescent Psychology, vol. 33, no. 1, pp. 196–201, 2004.
13. J. A. Durlak, R. P. Weissberg, A. B. Dymnicki, R. D. Taylor, and K. B. Schellinger, “The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions,” Child Development, vol. 82, no. 1, pp. 405–432, 2011.
14. S. Foster and J. Bussman, “Evidence-based approaches to social skills training with children and adolescents,” in Handbook of Evidence-Based Therapies for Children and Adolescents: Bridging Science and Practice, pp. 409–428, Springer Science & Business Media, New York, NY, USA, 2007.
15. K. T. Mueser and A. S. Bellack, “Social skills training: Alive and well?” Journal of Mental Health, vol. 16, no. 5, pp. 549–552, 2007.
16. M. Greenberg, C. Domitrovich, and B. Bumbarger, “The prevention of mental disorders in school-aged children: Current state of the field,” Prevention & Treatment, vol. 4, no. 1, pp. 1–67, 2001.
17. R. Raimundo, A. Marques-Pinto, and M. L. Lima, “The effects of a social-emotional learning program on elementary school children: The role of pupils’ characteristics,” Psychology in the Schools, vol. 50, no. 2, pp. 165–180, 2013.
18. F. M. Gresham, “Current status and future directions of school-based behavioral interventions,” School Psychology Review, vol. 33, no. 3, pp. 326–343, 2004.
19. K. Bosworth, D. Espelage, T. DuBay, G. Daytner, and K. Karageorge, “Preliminary evaluation of a multimedia violence prevention program for adolescents,” American Journal of Health Behavior, vol. 24, no. 4, pp. 268–280, 2000.
20. Social and Character Development Research Consortium (SCDRC), “Efficacy of Schoolwide Programs to Promote Social and Character Development and Reduce Problem Behavior in Elementary School Children,” Tech. Rep. NCER 2011-2001, NCER, IES, Washington, DC, USA, 2010.
21. M. E. DeRosier and J. M. Thomas, “Video games and their impact on teens’ mental health,” in Technology and Adolescent Mental Health, M. Moreno and A. Radovic, Eds., pp. 237–253, Springer International Publishing, 2018.
22. J. J. Horan, “Effects of Computer-Based Cognitive Restructuring on Rationally Mediated Self-Esteem,” Journal of Counseling Psychology, vol. 43, no. 4, pp. 371–375, 1996.
23. D. H. Goh, R. P. Ang, and H. C. Tan, “Strategies for designing effective psychotherapeutic gaming interventions for children and adolescents,” Computers in Human Behavior, vol. 24, no. 5, pp. 2217–2235, 2008.
24. M. S. Khanna and P. C. Kendall, “Computer-assisted cognitive behavioral therapy for child anxiety: Results of a randomized clinical trial,” Journal of Consulting and Clinical Psychology, vol. 78, no. 5, pp. 737–745, 2010.
25. K. Bosworth, D. Espelage, and T. DuBay, “A computer-based violence prevention intervention for young adolescents: Pilot study,” Adolescence, vol. 33, no. 132, pp. 785–795, 1998.
26. D. S. McNamara, G. T. Jackson, and A. Graesser, “Intelligent tutoring and games (ITaG),” in Gaming for Classroom-Based Learning: Digital Role Playing as a Motivator of Study, Y. Baek, Ed., pp. 44–65, Information Science Reference, Hershey, PA, USA, 2010.
27. Á. E. Carrasco, “Acceptability of an adventure video game in the treatment of female adolescents with symptoms of depression,” Research in Psychotherapy: Psychopathology, Process and Outcome, vol. 19, no. 1, pp. 10–18, 2016.
28. K. Fenstermacher, D. Olympia, and S. M. Sheridan, “Effectiveness of a computer-facilitated, interactive social skills training program for boys with attention deficit hyperactivity disorder,” School Psychology Quarterly, vol. 21, no. 2, pp. 197–224, 2006.
29. A. Rubin-Vaughan, D. Pepler, S. Brown, and W.
Craig, “Quest for the Golden Rule: An effective social skills promotion and bullying prevention program,” Computers & Education, vol. 56, no. 1, pp. 166– 175, 2011.

408

Computer Games Technology

30. R. P. Sanchez, C. M. Bartel, E. Brown, and M. E. DeRosier, “The  >        children with social skills challenges,” Computers and Education, vol. 78, pp. 321–332, 2014 (Bulgarian). 31. M. Moisala, V. Salmela, L. Hietajärvi et al., “Gaming is related to enhanced working memory performance and task-related cortical activity,” Brain Research, vol. 1655, pp. 204–215, 2017. 32. J. L. Tan, D. H.-L. Goh, R. P. Ang, and V. S. Huan, “Participatory evaluation of an educational game for social skills acquisition,” Computers & Education, vol. 64, pp. 70–80, 2013. 33. A. Corbett, K. Koedinger, and W. Hadley, “Cognitive tutors: From research to classrooms,” in Technology enhanced learning: Opportunities for change, P. S. Goodman, Ed., pp. 235–263, Erlbaum, Mahway, NJ, USA, 2001. 34. J. P. Gee, “What video games have to teach us about learning and literacy,” Computers in Entertainment, vol. 1, no. 1, p. 20, 2003. 35. R. Bartle, “Hearts, clubs, diamonds, spades: Players who suit MUDs,” Journal of MUD Research, vol. 1, no. 1, p. 19, 1996. 36. C. Girard, J. Ecalle, and A. Magnan, “Serious games as new educational tools: How effective are they? A meta-analysis of recent studies,” Journal of Computer Assisted Learning, vol. 29, no. 3, pp. 207–219, 2013. 37. K. Kiili, “Digital game-based learning: towards an experiential gaming model,” The Internet and Higher Education, vol. 8, no. 1, pp. 13–24, 2005. 38. P. Akos and J. P. Galassi, “Middle and high school transitions as viewed by youth, parents, and teachers,” Professional School Counseling, vol. 7, no. 4, pp. 212–221, 2004. 39. C.-H. Chen and M.-H. Chou, “Enhancing middle school students’             =   ! Journal of Computer Assisted Learning, vol. 31, no. 5, pp. 481–492, 2015. 40. Y. Kim and A. L. Baylor, “A social-cognitive framework for pedagogical agents as learning companions,” Educational Technology Research and Development, vol. 54, no. 6, pp. 569–596, 2006. 41. R. Moreno, R. E. Mayer, H. A. Spires, and J. C. 
Lester, “The case for social agency in computer-based teaching: do students learn

Hall of Heroes: A Digital Game for Social Skills Training with Young ...

42.

43. 44.

45.

46.

47.

409

more deeply when they interact with animated pedagogical agents?” Cognition and Instruction, vol. 19, no. 2, pp. 177–213, 2001. Y. Kim, J. Thayne, and Q. Wei, “An embodied agent helps anxious students in mathematics learning,” Educational Technology Research and Development, vol. 65, no. 1, pp. 219–235, 2017. M. H. Epstein, Behavioral and Emotional Rating Scale, Pro-ed, Austin, Tex, USA, 2nd edition, 2004. G. M. Burlingame, M. G. Wells, J. C. Cox, M. J. Lambert, M. Iatkowski, and D. Justice, Youth Outcome Questionnaire, OQ Measures, LLC, Salt Lake City, Utah, USA, 2005. S. E. Maxwell, D. A. Cole, R. D. Arvey, and E. Salas, “A Comparison of Methods for Increasing Power in Randomized Between-Subjects Designs,” Psychological Bulletin, vol. 110, no. 2, pp. 328–337, 1991. A. J. Vickers and D. G. Altman, “Analysing controlled trials with baseline and follow up measurements,” British Medical Journal, vol. 323, no. 7321, pp. 1123-1124, 2001. J. J. Vogel, D. S. Vogel, J. Cannon-Bowers, G. A. Bowers, K. Muse, and M. Wright, “Computer gaming and interactive simulations for learning: a meta-analysis,” Journal of Educational Computing Research, vol. 34, no. 3, pp. 229–243, 2006.

CHAPTER 15

Kinect-Based Exergames Tailored to Parkinson Patients

Ioannis Pachoulakis 1, Nikolaos Papadopoulos 1, and Anastasia Analyti 2

1 Department of Informatics Engineering, Technological Educational Institute of Crete, Heraklion, Crete, Greece
2 Institute of Computer Science, Foundation for Research and Technology–Hellas (FORTH), Vassilika Vouton, Heraklion, Crete, Greece

ABSTRACT

Parkinson’s disease (PD) is a progressive neurodegenerative movement disorder where motor dysfunction gradually increases. PD-specific dopaminergic drugs can ameliorate symptoms, but neurologists also strongly recommend physiotherapy combined with regular exercise. As there is no known cure for PD, traditional rehabilitation programs eventually tire and bore patients to the point of losing interest and dropping out, simply because of the predictability and repeatability of the exercises. This can be avoided with the help of current technology: character-based, interactive 3D games promote physical training in a nonlinear fashion and can provide experiences that change each time the game is played. Such “exergames” (a combination of the words “exercise” and “game”) challenge patients into performing exercises of varying complexity in a playful and interactive environment. In this work we present a Unity3D-based platform hosting two exergames tailored to PD patients with mild to moderate symptoms. The platform employs Microsoft Kinect, an affordable off-the-shelf motion capture sensor that can be easily installed in both home and clinical settings. Platform navigation and gameplay rely on a collection of gestures specifically developed for this purpose and based upon training programs tailored to PD. These gestures employ purposeful, large-amplitude movements intended to improve postural stability and reflexes and to increase upper and lower limb mobility. When the patient’s movements, as detected by Kinect, “match” a preprogrammed gesture, an on-screen 3D cartoon avatar responds according to the game context at hand. In addition, in-game decision-making aims at improving cognitive reaction.

Citation: Ioannis Pachoulakis, Nikolaos Papadopoulos, Anastasia Analyti, “Kinect-Based Exergames Tailored to Parkinson Patients”, International Journal of Computer Games Technology, vol. 2018, Article ID 2618271, 14 pages, 2018. https://doi.org/10.1155/2018/2618271.
Copyright: © 2018 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Link: https://www.hindawi.com/journals/ijcgt/2018/2618271/

INTRODUCTION

Parkinson’s disease (PD) is a multisystem neurodegenerative disorder caused by the depletion of dopamine in the substantia nigra area of the brain, which results in both motor and nonmotor clinical symptoms [1]. Relevant symptoms usually appear around 60 years of age and progress slowly, but irreversibly. Although patients with PD in the early stages typically show some level of motor dysfunction, patients at more advanced stages also show nonmotor cognitive, behavioral, and mental symptoms [1, 2]. Tremor, Rigidity, Akinesia, and Postural instability (grouped under the acronym TRAP) are the four distinct, fundamental symptoms of PD [3]. Akinetic symptoms include both hypokinesia (reduced amplitude of voluntary movement) and bradykinesia (reduced speed of voluntary movement). These symptoms, which often develop in different combinations, hinder common daily activities, trouble patients’ social relationships and reduce the quality of life, especially as the disease progresses [4, 5]. Although a cure for PD has not yet been discovered, medications usually control symptoms and maintain body functionality at reasonable levels through the early years of the disease [1, 6]. However, as neurodegeneration progresses, new motor symptoms emerge that include motor complications related to the chronic administration of dopaminergic agents with short half-lives (such as hyperkinesis and motor fluctuations), drug-resistant postural instability, and freezing of gait. Moreover, standard medication paths cannot be easily established, as the course and symptoms of the disease vary significantly among patients [7]. During the intermediate and advanced stages of the disease, the majority of patients experience significant motor problems despite optimal pharmacological management.

In addition, physiotherapy appears highly effective in controlling PD-related symptoms: physical exercise involving stretching, aerobics, unweighted or weighted treadmill and strength training improves motor functionality (leg action, muscle strength, balance, and walking) and, therefore, quality of life [8, 9]. Exercise and physical activity have been shown to delay the deterioration of motor function, to improve cognition and to prolong functional independence, e.g., [9–11]. Furthermore, important new evidence [12] suggests a possible neuroprotective and neurorestorative role of exercise in PD, whose benefit to motor and cognitive function applies to all disease stages. Caglar et al. [13] point out the effectiveness of home-based, structured physical therapy exercise programs tailored to the individual patient, with measurable improvements in motor capability. Finally, Farley and Koshland [14] demonstrate that PD patients practicing large amplitude exercises (i.e., those involving large scale limb movements such as reaching for an object, walking, twisting the torso to the side, stepping and reaching forward or sideways) reap measurable benefits in the speed and agility of both upper and lower limbs.
Farley and Koshland further suggest that this so-called “Training BIG” amplitude-based intervention strategy activates muscles to meet the distance, duration, and speed demands set forth by the relevant exercises. At the same time, PD patients attending long-term repetitive exercise programs tend to get bored of the same daily physiotherapy routine [15]. In fact, [16] argues that exercise must be more frequent than two or three times a week to be effective in improving stepping performance, mobility, cognition, and reaction, as well as in reducing the frequency of falls; the authors also advocate further research to investigate the appropriate doses of training according to the state of the disease. Furthermore, PD patients must adhere to regular and long-lasting clinical physiotherapy programs [17] before noticing improvement and in order to maintain gains from exercise in the medium and longer term. In fact, Mendez et al. [18] propose investigating the learning potential of patients with Parkinson’s disease by applying new therapeutic strategies and validating their utility.

Indeed, exergames (exercise-based games) based on consoles like Sony’s PlayStation Eye, Nintendo’s Wii, or Microsoft’s Kinect appear to be very promising exercise tools. Such games use audio and visual cueing in loosely structured 3D environments to avoid the repeatability (the seed of boredom) characterizing more traditional physiotherapy programs. A review conducted by Vieira et al. [19] concludes that virtual reality applications can be used as therapeutic tools, as they have been shown to improve motor function, balance, and cognitive capacities. A number of other recent studies also report on the feasibility and effectiveness of exergames for PD patients, which seem to emerge as a highly effective physiotherapy practice [20].

Extreme care and forethought must be put into designing exergames for PD patients. Such exergames must provide clear goals and positive feedback, be progressively challenging and adaptable to a patient’s abilities, and minimize or even eliminate accidents such as falls. Therefore, designing safe exergames for PD patients is a challenging task and certain design principles must be followed [21]. Amini et al. [22] designed a system that monitors PD patients’ movement and detects freezing-of-gait (FOG) and falling incidents, which can then be handled appropriately (contact emergency services, relatives, etc.). In addition, that same system employs an automated laser casting system based on servo motors and an Arduino microcontroller, casting visual cues based on patient walking patterns in order to improve patient mobility. Barry et al.
[20] propose that platforms which do not require raised platforms, handheld controllers, and/or body markers need to be further evaluated with respect to the physiotherapy opportunities they offer with maximal safety for PD patients. Microsoft’s Kinect sensor falls in that category, as it requires no external input controllers and can capture the motion of the entire human body in 3D, using an RGB camera and a depth sensor. Players can manipulate game interactions by moving their body in front of the sensor.

A Kinect-based game for PD patients developed by Galna et al. [23] was designed to train dynamic postural control. In that game, the upper torso of the player is mapped onto a farmer avatar who drives a tractor, collects fruits and avoids obstacles in a 3D environment. To guide the tractor so as to avoid obstacles such as sheep, the player must take large steps (front, back, and sideways) with one leg in the direction he/she wants the tractor to move. In order to preserve patient motivation, the difficulty of the game increases gradually, ranging from simple hand movements for low levels, through more complex activities combining cognitive decisions and physical movements, all the way to dual tasking (simultaneous hand and foot work). The design principles for the game resulted from a workshop where participants played and evaluated a wide range of commercial games developed for Microsoft Xbox Kinect and Nintendo Wii; participants reported difficulties interacting with Wii’s handheld controller and balance board. The authors conclude that Kinect seems both safe and feasible for PD patients and advocate further investigating the potential of the sensor as a home-based physiotherapy solution.

Another PD-oriented multiplayer game also based on the Kinect sensor was developed by Hermann et al. [24] to investigate if cooperation within the context of a game can lead to improvements in terms of communication and coordination. Using hand movements, two players collect buckets of water to drain an area and reveal a hidden object. In the first game mode (weak cooperation) both players drain the area, while in the second mode (strong cooperation) only one player drains, while the second player reveals the object. That work concludes that multiplayer games are indeed feasible for PD patients and that asymmetric roles (strong cooperation) can motivate communication between the participants and lead to a better game experience.

Cikajlo et al. [25, 26] practiced intensive upper extremity physiotherapy via tele-rehabilitation on 28 early-stage Parkinson patients using a Kinect-based exergame called “Fruit Picking.” The objective of the game was to collect apples before they fell from a tree and place them in a basket at the bottom of the screen (raise arm with open hand, close hand on an apple to grab it). The system recorded the player’s movements and adapted the difficulty of the game to the player’s performance: at higher levels of difficulty, the tree was more sparsely populated, with more apples which ripened and fell more frequently from it. Further analysis showed evidence for clinically meaningful results for these target-based tasks.

416

Computer Games Technology

MATERIALS AND METHODS IN THE DESIGN AND IMPLEMENTATION OF A KINECT-BASED INTERACTIVE 3D EXERGAME PLATFORM

The present work reports on a Kinect-based, interactive 3D platform hosting two exergames (the Balloon Goon game and the Slope Creep game) tailored to PD patients in stages 1 through 3 on the Hoehn and Yahr [27] scale, i.e., with mild to moderate symptoms, without severe postural instability and motor impairment. The Kinect sensor provides data streams from an RGB camera and an IR depth camera which are processed by the MS Kinect SDK in real time to create a depth-map and yield coordinates for the 3D skeleton of the human in front of the sensor in each frame. A custom collection of gestures tailored to the PD condition has also been designed to navigate the platform and game menu and to interact with game objects, in the sense that when the player’s movement “matches” a gesture programmed in a game, a 3D cartoon avatar moves as programmed in that game context. These gestures have been adopted from existing training programs (e.g., [28]) aiming at improving postural stability and reflexes as well as increasing overall mobility of the upper and lower limbs. This approach is in line with the findings of Farley and Koshland [14], who demonstrate measurable benefits in limb speed and agility for PD patients practicing large amplitude training, in the sense of performing extended but purposeful and fluent movements.

The game platform, the gaming environments, the visual artefacts users can interact with and the exergames themselves have been developed on the Unity game engine, a powerful development platform used to develop 2D and 3D games. Several video and audio effects embedded in the game environment clearly communicate feedback based on game progress to the user. These include a score board, target/obstacle hit sound effects, instructive text and applause. To achieve a game goal, patients have to complete either a prescribed or a dynamically generated sequence of tasks, such as hitting targets while avoiding obstacles. Each game adjusts its difficulty either by accommodating different levels or by increasing the frequency of tasks with game progress, and summons various patient capabilities and dexterities, an approach intended to lessen the frustration from repetitive failure.

Regarding platform navigation, PD-specific gestures were designed that are intended to circumvent on-screen pointer systems driven by the location of a patient’s hand, a highly frustrating approach for people having trouble maintaining a steady hand to select a menu option. For that same reason, gestures were favored which cannot be triggered by inadvertent movement due to tremor/hyperkinesia. These gestures are by design reserved for navigation to avoid further confusion to the player. As a result, patients do not have to resort to alternative input methods and controllers. These main menu navigation gestures are discussed directly below.

“Twist Torso to Navigate” Gesture. This gesture implements a previous/next action to navigate a list by rotating the upper torso to the left or to the right, as shown in Figure 1. Because of this, the list must be short (a few items only), otherwise navigation becomes cumbersome. Gesture detection tracks the shoulder joints on a horizontal plane at the level of the shoulders and registers a twist when the shoulder line rotates sufficiently, similar to [29, 30].
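As a rough illustration of the shoulder-tracking logic just described (a sketch, not the authors' actual implementation), the twist angle can be estimated from the two shoulder joints of the skeleton frame reported by the Kinect SDK. The coordinate convention, the threshold value, and the function names below are assumptions made for this example:

```python
import math

TWIST_THRESHOLD_DEG = 25.0  # assumed minimum shoulder-line rotation to count as a twist

def torso_twist_deg(left_shoulder, right_shoulder):
    """Angle of the shoulder line on the horizontal (x-z) plane.

    Joints are (x, y, z) tuples in meters, as a Kinect skeleton frame
    would provide; 0 degrees means the player squarely faces the sensor.
    """
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[2] - left_shoulder[2]
    return math.degrees(math.atan2(dz, dx))

def detect_twist(left_shoulder, right_shoulder):
    """Map a sufficiently large torso twist to a previous/next menu action."""
    angle = torso_twist_deg(left_shoulder, right_shoulder)
    if angle > TWIST_THRESHOLD_DEG:
        return "next"       # assumed: right twist selects the next item
    if angle < -TWIST_THRESHOLD_DEG:
        return "previous"   # assumed: left twist selects the previous item
    return None             # no navigation action
```

For a square-on posture (both shoulders at the same depth), `detect_twist` returns `None`; a production detector would additionally require the pose to be held for several frames to reject transient motion.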

Figure 1: A sequence of screen snapshots displaying the “Twist Torso to Navigate” gesture in action; (a) the player faces the TV monitor which shows the current (Balloon Goon) exergame. He is instructed to either raise both hands to select and start the shown game or twist to the right to select the next game (Slope Creep); (b) by twisting his torso to the right, the player opts to navigate to the next (Slope Creep) game; (c) the right torso twist gesture has now been recognized; as a result, the Slope Creep game is shown and can be selected; (d) the player, however, opts to twist to the left in order to return to the previous (Balloon Goon) game. When the left torso twist gesture is recognized, the screen on the monitor will again be that shown in (a).


“Stretch Arms High to Select/Pause” Gesture. The gesture requires the player to fully raise both arms above the head and is used to either select the game or action shown in the center of the screen (outside a game) or pause the game under way (within a game). This is a common arm stretching exercise [31]. Figure 2 presents how the gesture operates in the context of navigation.
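A minimal sketch of this detection, assuming joint names and a margin value not taken from the paper (the Kinect SDK reports head and hand joints per frame; the y axis points up in camera space):

```python
HAND_ABOVE_HEAD_MARGIN = 0.10  # assumed margin in meters

def arms_stretched_high(frame, margin=HAND_ABOVE_HEAD_MARGIN):
    """True when both hands are raised clearly above the head joint.

    `frame` maps an assumed joint name to an (x, y, z) position in
    meters, mimicking one Kinect skeleton frame.
    """
    head_y = frame["head"][1]
    return (frame["hand_left"][1] > head_y + margin and
            frame["hand_right"][1] > head_y + margin)
```

Requiring both hands (rather than one) makes the gesture hard to trigger through tremor or one-sided inadvertent movement, matching the design goal stated above.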

Figure 2: Screen snapshots displaying the “Stretch Arms High to Select” gesture in action; starting from the relaxed upright standing posture shown in Figure 1(a) for the Balloon Goon game or from that shown in Figure 1(c) for the Slope Creep game, the player can raise both hands high as shown in (a) or (b) in order to select, and subsequently start, the corresponding game.

THE BALLOON GOON GAME

This score-based, gesture-driven game, first presented in [32], was the first game added to the new platform. The game starts as shown in Figure 3 and calls for controlled arm and leg gestures reminiscent of “punches” and “kicks” in order to pop balloons which fall randomly along four vertical rods. Arm/leg extensions (punches/kicks) can be used to pop inner/outer rod balloons, respectively. Bilateral exercise is encouraged by requiring that balloons falling along the left two rods be popped by left punches/kicks and those falling along the right two rods by right punches/kicks. Successful gesture detection triggers appropriate predefined animations of the on-screen avatar. Three game levels of increasing difficulty have been implemented. As one progresses to higher levels, the speed and number of the balloons increase so that they drop more frequently. In addition, the third level is even more demanding in motor and cognitive capabilities and reaction time, as it includes higher value (bonus) balloons along with “bomb” balloons which must be avoided, as popping them incurs a drop in the score. Finally, performance is prominently displayed to the patient during gameplay and upon game completion.

Figure 3: Successful recognition of the gesture shown in Figure 2(a) selects and activates the Balloon Goon game. The game starts by displaying three introductory screens in sequence, asking the player to follow instructions on how to pop balloons by “punch” or “kick.” Finally, the game environment loads and play begins.

Gestures and Animations

Arm Extension (“Punch”) Gesture. The gesture is detected when the player, shown in the bottom-right inserts of the screenshots in Figure 4, stretches the left or the right arm forward. As one might expect for exercises tailored to patients with Parkinson’s, the game requires a controlled, purposeful arm extension reminiscent of a punch and not an actual powerful and fast punch. Gesture detection is performed by comparing distances between the joints of the hand being extended and the corresponding shoulder along the relevant vertical plane. Successful gesture detection triggers the corresponding 3D avatar (shown in the main screen in Figure 4) animation, which may or may not cause a balloon to pop. The gesture is based on reaching-and-grasping therapy exercises [33] that increase arm mobility. As a side note, the “punch” gesture is also used to select game menu options, e.g., to activate an on-screen quit button or proceed to the next level.
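The distance comparison described above can be sketched as follows. This is an illustrative reading of the described logic, not the authors' code; joint names, the extension ratio and the forward offset are assumptions:

```python
import math

def _dist(a, b):
    """Euclidean distance between two (x, y, z) joints in meters."""
    return math.dist(a, b)

def detect_punch(frame, extension_ratio=0.9, min_forward=0.25):
    """Flag a left/right 'punch' when the hand-to-shoulder distance
    approaches the full arm length (shoulder-elbow plus elbow-hand)
    and the hand is clearly in front of the shoulder (z decreases
    toward the sensor). Returns e.g. {"left": False, "right": True}.
    """
    result = {}
    for side in ("left", "right"):
        shoulder = frame[f"shoulder_{side}"]
        elbow = frame[f"elbow_{side}"]
        hand = frame[f"hand_{side}"]
        arm_length = _dist(shoulder, elbow) + _dist(elbow, hand)
        extended = _dist(shoulder, hand) >= extension_ratio * arm_length
        forward = (shoulder[2] - hand[2]) >= min_forward
        result[side] = extended and forward
    return result
```

Note that an arm hanging straight down is also "fully extended", which is why the sketch additionally requires the hand to be well in front of the shoulder before flagging a punch.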


Figure 4: Balloon Goon Game. Sample screenshots of a left (a) and a right (b) “punch” animation performed by the avatar shown in the main screen. Each of these animations is triggered by successfully detecting a left/right “Arm Extension” gesture executed by the human actor, as shown in the corresponding bottom right corner insert for each screenshot.

Leg Extension (“Kick”) Gesture. The gesture is detected when the human actor shown in the bottom-right inserts in each of the screenshots in Figure 5 raises a leg by bending the knee and stretches it out to the front. As in the “punch” gesture, the game does not require an actual powerful and fast kick, but rather a controlled and purposeful leg extension. The logic employed to detect a “kick” is similar to the Arm Extension (“punch”) gesture and is based on the positions of the Foot and Hip joints. Gesture detection triggers the corresponding 3D avatar animation shown in the main screen in Figure 5 and can cause the balloon to pop if performed in a timely manner. The required gesture combines a leg-lift-and-hold in marching position which improves balance [29, 34] with a leg stretch exercise used to strengthen leg muscles [35]. It is important to note that patients unsure of their balance may use side supporters without interfering with the gesture detection algorithm.
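By analogy with the punch sketch, the Foot/Hip comparison can be illustrated as below. Again this is an assumed reading of the described logic, with hypothetical joint names, thresholds, and floor handling:

```python
def detect_kick(frame, min_forward=0.35, min_lift=0.10):
    """Flag a left/right 'kick' when a foot is lifted off the floor and
    pushed clearly in front of the corresponding hip (z decreases
    toward the sensor). Returns e.g. {"left": False, "right": True}.
    """
    result = {}
    floor_y = frame.get("floor_y", 0.0)  # assumed floor height in meters
    for side in ("left", "right"):
        hip = frame[f"hip_{side}"]
        foot = frame[f"foot_{side}"]
        lifted = (foot[1] - floor_y) >= min_lift
        forward = (hip[2] - foot[2]) >= min_forward
        result[side] = lifted and forward
    return result
```

Because only the foot and hip joints of the kicking leg are inspected, a patient steadying themselves on side supporters would not disturb the detection, consistent with the note above.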

Figure 5: Balloon Goon Game. Sample screenshots of a left (a) and a right (b) “kick” animation performed by the avatar shown in the main screen. Each of these animations is triggered by successfully detecting a left/right “Leg Extension” gesture executed by the human actor, as shown in the corresponding bottom right corner insert for each screenshot.

Game Logic

An overview of the Balloon Goon game logic appears in Figure 6. Upon game initialization, the human actor is identified and his/her skeletal structure is tracked using Kinect SDK functions. Subsequently, a live feed from the RGB camera of the Kinect sensor is shown in an insert within the main window; this can be seen in all in-game screenshots up to this point. To start the game the player must raise both hands; a detected “Stretch Arms High to Select” gesture causes the game environment to load. A countdown counter then appears to alert the player that the game will begin shortly. That same counter also appears as a recapping aid every time the game resumes, to alert and prepare the player. For example, to take a break from the game being played, the player performs a “Stretch Arms High to Pause” gesture to pause the game and be presented with two options: quit or resume playing the current game, either of which can be activated by the corresponding “punch” gesture.

Figure 6: Functional diagram for the Balloon Goon game logic.

At the end of the countdown, the game starts and balloons appear falling along the four vertical rods. Punch/kick gestures pop balloons along the inner/outer rods. Every game loop checks for the detection of either of these gestures (and also for a third, “pause” type gesture). If a punch or kick gesture is detected, the corresponding animation of the avatar is activated. On screen, the appropriate arm or leg of the avatar stretches to perform a punch or kick as appropriate. Popping a falling balloon requires an actual collision of the hand or foot of the avatar with that balloon, meaning that the animation must be triggered in time, before the balloon leaves the pop-able region. A valid collision between the hand or foot of the avatar and a pop-able balloon triggers the following actions to notify the player: the balloon becomes brighter, an appropriate sound and a balloon-popping animation play, and the score is updated. At higher levels, balloons not only fall faster and more frequently, but new balloon types also appear mixed with regular balloons: bonus balloons, as shown in Figure 7(a), carry a higher value and must be collected, whereas bomb-type balloons, shown in Figure 7(b), carry a penalty and must be avoided.
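The timing constraint (gesture must land while the balloon is in the pop-able region) and the per-type scoring can be sketched as a pure function. The segment extents, type-to-gesture mapping, and score values below are invented for illustration; the paper does not give concrete numbers:

```python
# Assumed vertical extent of the "active" (pop-able) rod segment, in meters.
ACTIVE_TOP = 1.2
ACTIVE_BOTTOM = 0.4

# Assumed gesture required to pop each balloon type, and its score effect.
REQUIRED_GESTURE = {"orange": "punch", "green": "kick",
                    "bonus": "punch", "bomb": "punch"}
SCORE_DELTA = {"orange": 1, "green": 1, "bonus": 5, "bomb": -3}

def try_pop(balloon_y, balloon_type, gesture, score):
    """Pop a balloon only if the matching gesture fires while the balloon
    is inside the active segment. Returns (popped, new_score)."""
    in_active_region = ACTIVE_BOTTOM <= balloon_y <= ACTIVE_TOP
    if not in_active_region or gesture != REQUIRED_GESTURE[balloon_type]:
        return False, score
    return True, score + SCORE_DELTA[balloon_type]
```

A bomb balloon is "poppable" like any other; avoiding it is purely the player's cognitive decision, which is what makes the third level more demanding.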

Figure 7: At higher game levels in the Balloon Goon Game, balloons fall faster and more frequently and carry higher reward (as in screenshot (a)) or penalty (as in screenshot (b)).


At any point during game play, a “Stretch Arms High” gesture pauses the game. A pause screen appears as shown in Figure 8(a) presenting two options: quit and resume play, each of which can be activated by an appropriate “punch” left or “punch” right gesture, respectively. However, as shown in Figure 8(b) our player has decided to leave the room to take a break, as a result of which a “Player Not Detected” note appears on screen. When the player returns and is once more detected, he/she may select to resume play, at which point a countdown counter appears which alerts and reaccustoms him/her to the state of the game when paused as in Figure 8(c).

Figure 8: (a) The player executes a “Stretch Arms High” gesture to pause the game, after which he decides to leave the room; (b) as a result of moving off the FOV of the Kinect sensor, the player appears as “not detected”; (c) a countdown counter appears on-screen after the player has returned and opted to continue playing; (d) when the level is complete, performance data is presented to the player; (e) the player can now opt to quit the game or proceed to the next game level.

When all balloons have dropped, the game level is considered complete and performance data for that level is communicated on screen as in Figure 8(d), which remains active for some time for the player to digest that information. Then a new screen appears, as in Figure 8(e), presenting two options: quit or proceed to the next level. In having to make the right decision in the allotted time, the game calls for a real-time collaboration of the cognitive and neuromuscular systems and trains reaction and reflexes. For example, to pop a balloon the player must first identify along which (left or right) pole that balloon falls and then decide whether to use a punch (inner pole) or a kick (outer pole). Furthermore, the entire decision-making process and ensuing action must conclude while the balloon drops through the “pop-enabled” region.
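The pause/resume/level-complete flow described above amounts to a small state machine. The sketch below is inferred from the prose (event and state names are illustrative, not from the game's source; the text specifies that on the pause screen a left punch quits and a right punch resumes):

```python
from enum import Enum, auto

class GameState(Enum):
    PLAYING = auto()
    PAUSED = auto()
    PLAYER_LOST = auto()      # player outside the sensor's field of view
    LEVEL_COMPLETE = auto()
    QUIT = auto()

# Transition table for the flow described in the text.
_TRANSITIONS = {
    (GameState.PLAYING, "stretch_arms_high"): GameState.PAUSED,
    (GameState.PLAYING, "all_balloons_dropped"): GameState.LEVEL_COMPLETE,
    (GameState.PAUSED, "player_not_detected"): GameState.PLAYER_LOST,
    (GameState.PLAYER_LOST, "player_detected"): GameState.PAUSED,
    (GameState.PAUSED, "punch_right"): GameState.PLAYING,  # resume (after countdown)
    (GameState.PAUSED, "punch_left"): GameState.QUIT,
    (GameState.LEVEL_COMPLETE, "punch_right"): GameState.PLAYING,  # next level
    (GameState.LEVEL_COMPLETE, "punch_left"): GameState.QUIT,
}

def next_state(state, event):
    """Return the next game state; unknown events leave the state unchanged."""
    return _TRANSITIONS.get((state, event), state)
```

Keeping unknown events as no-ops means stray gestures (e.g., a kick on the pause screen) cannot derail the flow, mirroring the design goal of robustness against inadvertent movement.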

Scene Design

The 3D avatar used in the exergames has been adopted from Unity’s free asset store, where it is provided already rigged and skinned. Additional 3D game assets (trees, prizes, obstacles, etc.) have been designed in Autodesk 3ds Max 2016, with textures created in Adobe Photoshop CS2. Some of these game objects can interact with the avatar or other game assets, while others are used as static scenery. These game assets have been modelled with a sufficiently low polygon count to strike a balance between a visually appealing game environment and a responsive real-time game experience, even for computers with mediocre hardware resources. A third-person viewpoint affords a clear view of both the avatar and the game scene, which includes static game assets such as a background static terrain, two directional lights and a skybox, carefully placed so as not to distract the player and at the same time host game information in a nonintrusive manner.

The main game assets are the four vertical posts and the balloons falling along those posts. A single texture has been used for the vertical rods, with the exception of a segment on each rod which is textured with a different (reddish) color to denote an “active” region and inform the player when to pop a balloon: when a balloon enters this active segment, it becomes brighter and “poppable.” In addition, balloons that can be popped by a “punch” gesture are differently textured from those that can be popped using a “kick” gesture (Figure 9). Finally, the third level introduces two additional types of balloons: bonus balloons (textured with a smiley face) which give extra points when popped and bomb balloons (textured with a bomb) that take away points (last two inserts in Figure 9) if popped.

Figure 9: From left to right: an orange balloon that can be popped with a “punch” gesture, a green balloon that can be popped with a “kick” gesture, a smiley (bonus) balloon, and a bomb-type balloon.

THE SLOPE CREEP GAME

In this newly developed gesture-driven game (snapshots of which appear in Figure 10) the player controls a 3D cartoon skier in a snowy landscape. The skier moves along either one of two parallel lanes to collect prizes (rings, stars) of different values and to avoid obstacles. As in real cross-country skiing, the player pushes two imaginary ski poles down and backwards to make the avatar move forward. Leaning left/right causes the avatar to change lanes. Regarding prizes, rings on the ground can easily be collected, but stars sit a lot higher and can only be collected by causing the skier to jump off a raised platform. A successful jump requires only a squat gesture (to pick up additional speed for a higher jump) at some distance before the avatar reaches the ramp. Sparse rocks in the terrain can be avoided by changing lanes; otherwise, part of the avatar's life is lost. The game ends when the avatar crosses the finish line. During gameplay and also upon game completion, the player is kept informed of his/her performance (rings/stars collected, lives available). Progressive difficulty is introduced within a level as rings are gradually succeeded by stars and rocks.


Computer Games Technology

Figure 10: Successful recognition of the gesture shown in Figure 2(b) selects and activates the Slope Creep game. The game starts by displaying four screens of instructions, in which the player is asked to follow directions on how to cause the on-screen avatar to accelerate, turn, and jump.

Gestures and Animations

"Push Both Ski Poles" Gesture. The gesture involves stretching both arms to a frontal extension, followed by lowering them to the initial position (as shown in Figure 11) within a given time period. Positional deviations between the left/right hands and the left/right shoulder joints are calculated in real time and compared to the actual lengths of the left/right arms. The logic employed is similar to the Arm Extension ("punch") gesture in the Balloon Goon game, with the additional requirement that both arms must move in unison, as if pushing a pair of ski poles, from the initial position to a full frontal extension. Successful gesture detection triggers the corresponding 3D avatar to move forward in the game environment. The gesture is based on arm and shoulder strengthening exercises. For patients experiencing differential mobility issues between the left and right sides, holding a stick with both hands should help arm synchronization.

Kinect-Based Exergames Tailored to Parkinson Patients


Figure 11: Screen snapshots displaying the "Push Both Ski Poles" gesture in action: (a) shows the player having stretched both arms to a frontal extension, followed by (b) lowering them to the initial relaxed position.
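One way the extend-then-lower pattern with a timing budget might be tracked is sketched below. This is an assumption-laden illustration, not the authors' detector: the thresholds, the 1.5 s budget, and the per-frame "forward reach" inputs (wrist-to-shoulder distances, here supplied by the caller) are all invented for the example.

```python
# Hedged sketch of a "Push Both Ski Poles" detector: both wrists must reach a
# near-full frontal extension (reach close to the measured arm length) and
# return to rest within a time budget, moving in unison. Values are assumptions.

ARM_EXTENDED = 0.9   # fraction of arm length counting as "fully extended"
ARM_RESTING = 0.3    # fraction of arm length counting as "back at rest"
TIME_BUDGET = 1.5    # seconds allowed for the full extend-then-lower pattern

class PushPolesDetector:
    def __init__(self, left_arm_len, right_arm_len):
        self.arm_len = {"left": left_arm_len, "right": right_arm_len}
        self.state = "idle"
        self.started = None

    def update(self, reach, now):
        """reach: dict side -> current wrist-to-shoulder forward distance.
        Returns True on the frame the complete gesture is recognized."""
        frac = {s: reach[s] / self.arm_len[s] for s in ("left", "right")}
        both_extended = all(f >= ARM_EXTENDED for f in frac.values())
        both_resting = all(f <= ARM_RESTING for f in frac.values())
        if self.state == "idle" and both_extended:
            self.state, self.started = "extended", now
        elif self.state == "extended":
            if now - self.started > TIME_BUDGET:   # too slow: discard pattern
                self.state = "idle"
            elif both_resting:                     # completed in time: fire
                self.state = "idle"
                return True
        return False
```

The `all(...)` checks encode the unison requirement: a single extended arm never advances the state machine, matching the rule that one-armed motions are discounted.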

"Lean Torso to the Side" Gesture. The gesture is detected when the human actor shown in the bottom-right insert of the screenshots in Figure 12 leans the upper torso to the left or right side. Detection is based on deviations between the positions of multiple joints, such as the Hip Center, the Shoulder Center and both Shoulder joints, as well as the core length and the shoulder length. For the gesture to be detected successfully, the player must lean to one side while keeping the core of the body straight. Successful gesture detection causes the 3D avatar to turn left or right in the game environment by applying the correct animation clip to switch between ski lanes (this appears in the main screen area in Figure 12). The exercise is drawn from routines for strengthening and stretching the torso. In addition, performing it from a standing position may even promote balance, as patients transfer their weight from one side to the other.

Figure 12: Slope Creep game. (a) The main screen shows a sample screenshot from the lean-to-the-right animation performed by the avatar. The animation is triggered by successfully detecting the "Lean Torso to the Side" (right) gesture executed by the human actor shown in the insert in the bottom-right corner of the screenshot; (b) as a result, the avatar is now aligned with the line of rings to be collected.
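A compact sketch of the lateral-lean check described above follows. It is illustrative only: the joint coordinates, the lean threshold, and the core-length tolerance are assumptions, not the authors' parameter values.

```python
# Illustrative lateral-lean check: the shoulder center drifts sideways relative
# to the hip center while the hip-to-shoulder distance stays near the measured
# core length (the player leans rather than bends). Thresholds are assumptions.
def detect_lean(hip_center, shoulder_center, core_length,
                lean_thresh=0.15, core_tolerance=0.1):
    """Joints are (x, y) pairs; +x points to the player's right.
    Returns 'left', 'right', or None."""
    dx = shoulder_center[0] - hip_center[0]
    dy = shoulder_center[1] - hip_center[1]
    spine = (dx * dx + dy * dy) ** 0.5
    if abs(spine - core_length) > core_tolerance * core_length:
        return None          # torso bent or collapsed: not a clean lean
    if dx > lean_thresh:
        return "right"
    if dx < -lean_thresh:
        return "left"
    return None
```

The core-length comparison enforces the "keep the core straight" requirement: a forward or sideways bend shortens the measured hip-to-shoulder distance and the motion is rejected.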

Squat Gesture. The gesture requires that the human actor performs a squat, as in Figure 13, and is detected by tracking significantly reduced foot-to-hip distances (along the vertical axis) compared to those measured while standing. Squats improve patients' balance and strengthen leg muscles [34]. In the Slope Creep game, squats cause the avatar to pick up speed so as to jump higher from an upcoming raised platform.

Figure 13: Slope Creep game. (a) The human actor is directed to squat; (b) the squat of the human actor (see bottom-right insert) is recognized; (c) as a result, the avatar picks up speed and performs a high jump to collect the purple star.
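The squat check reduces to a single comparison against a standing baseline. The 0.75 ratio below is an assumed threshold for illustration, not the authors' value.

```python
# Minimal sketch of the squat check: the vertical hip-to-foot distance shrinks
# relative to the player's measured standing baseline. The ratio is an assumed
# illustrative threshold.
def is_squatting(hip_y, foot_y, standing_hip_to_foot, ratio=0.75):
    return (hip_y - foot_y) < ratio * standing_hip_to_foot
```

Calibrating `standing_hip_to_foot` per player (rather than using a fixed distance) is what lets one threshold serve patients of different heights.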

Patients with stability problems may resort to mobility support aids such as a four-legged walking cane, or may partially support their weight on a pair of chairs placed on either side to better control their posture and avoid a fall. If one opts to use chairs, they should be set a bit further away from the sides of the body to avoid interference with the "Push Both Ski Poles" gesture.

Game Logic

The Slope Creep game mixes torso and arm gestures to guide an avatar along a ski lane, collecting prizes and avoiding obstacles along the way.


In contrast to the Balloon Goon game, the Slope Creep game uses a single longer level of progressive difficulty, implemented by the rate of appearance of bonuses and obstacles, which become more frequent with game progress. The gestures employed target flexibility, muscle strength and balance and aim at improving cognitive reaction through a timely response to upcoming prizes and obstacles. Figure 14 displays the game logic.

Figure 14: Functional diagram for the Ski game logic.
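The progressive difficulty described above (bonuses and obstacles appearing more frequently as the level advances) could be realized with a simple linear ramp on spawn timing and prize mix. All constants below are assumptions for illustration, not values from the paper.

```python
# Sketch of a within-level difficulty ramp: the interval between spawned
# prizes/obstacles shrinks with level progress, and the prize mix shifts from
# rings toward stars. All constants are illustrative assumptions.
def spawn_interval(progress: float, start: float = 3.0, floor: float = 1.0) -> float:
    """Seconds between spawns; `progress` runs from 0.0 to 1.0 along the level."""
    return max(floor, start - (start - floor) * progress)

def star_share(progress: float, cap: float = 0.8) -> float:
    """Fraction of prizes spawned as stars rather than rings."""
    return min(cap, progress)
```

Capping the star share keeps some easy rings in play even late in the level, a design choice that seems consistent with the paper's gradual ring-to-star succession.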

Similarly to the Balloon Goon game, the player performs a "Stretch Arms High to Select" gesture to start the game. However, no countdown counter is present in this game, because the initiative to start moving the skier avatar must now be taken by the user, who has to execute a "Push Both Ski Poles" gesture. In the meantime, an idle animation of the skier occasionally shifting his weight from side to side keeps the screen busy. The game environment consists of a straight ski lane (composed of two parallel paths) and various game artefacts. The skier avatar can switch between paths (to collect prizes or avoid obstacles) by performing a "Lean Torso to the Side" gesture, which triggers the corresponding animation causing the 3D character to switch paths. However, leaning to the left/right when the avatar is already on the left/right path, respectively, has no effect. Prizes along each path appear either in the form of rings positioned near the ground or stars at a respectable height above ground. To collect a prize, the avatar has to switch to the correct path so as to force an avatar-prize collision, upon which the score maintained at the top of the game environment is incremented by the value of the prize collected. As in the Balloon Goon game, collisions with game artefacts trigger various sounds to enhance the game experience.

Whereas collecting a ring is pretty straightforward, collecting a higher-value star is slightly more involved. An upcoming star is preceded by a speed-up lane (to alert the player), which ends in a jumping ramp with the star even higher above that ramp. The following scenario has been implemented, which does not require a player to actually jump at any point in the game, so as to avoid possible falls resulting from mobility and balance limitations: the player must switch the avatar to the lane holding the upcoming star (if he/she is not already on that lane) and squat when cued for the avatar to pick up speed. Successful detection of a squat gesture triggers an animation which speeds up the avatar. When the avatar enters the jumping ramp, a collision is detected and another animation performs a high jump over the ramp. The parameters of this jump guarantee collision of the avatar with the star above the ramp, resulting in collecting the star and incrementing the score on the score board accordingly. On the other hand, missing the squat results in a slower approach, so that the lower jump animation that is triggered instead fails to reach the star.

Obstacles in the form of rocks along the ski lanes can be avoided by performing a "Lean Torso to the Side" gesture toward the other (empty) lane in a timely manner, as in Figure 15(a). If the player fails to escape the obstacle, an animation is triggered which shows the avatar falling over the obstacle and landing on the snow, as in Figure 15(b). One life (out of a total of three) is lost and the player is directed to perform a "Push Both Ski Poles" gesture to retry. Unless all lives are lost, thus ending the game, the player eventually reaches the finish line, as in Figure 16(a), upon which a screen appears with detailed performance information (prizes collected and lives retained) and a numerical score, as in Figure 16(b). The screen lingers for a few seconds before the player can opt between restarting the game and quitting. Finally, as in the Balloon Goon game, a "Raise Both Hands" gesture results in a game pause as well as in the appearance of an appropriate menu screen.
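Two of the gameplay rules above condense into very small functions: leaning toward the currently occupied lane is a no-op, and only a squat-boosted approach yields the high jump tuned to collect the star. The sketch below uses invented names and is only an illustration of those rules.

```python
# Sketch of two Slope Creep rules, with illustrative names: leaning toward the
# lane the avatar already occupies has no effect, and only the fast (squat-
# boosted) approach produces the high jump that reaches the star.
def switch_lane(current: str, lean: str) -> str:
    """Lanes are 'left'/'right'; leaning toward the current lane is a no-op."""
    return lean if lean in ("left", "right") and lean != current else current

def jump_outcome(squat_detected: bool) -> dict:
    """Approach speed and jump height follow from whether the cued squat was
    detected before the ramp; the high jump is tuned to hit the star."""
    jump = "high" if squat_detected else "low"
    return {"speed": "fast" if squat_detected else "slow",
            "jump": jump,
            "star_collected": jump == "high"}
```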


Figure 15: Slope Creep game. (a) Sample screenshot showing the human actor (bottom-right insert) leaning to the left to avoid a rock; (b) sample screenshot of the avatar animation that is triggered after colliding with an obstacle.

Figure 16: Slope Creep game. (a) Sample screenshot showing the avatar reaching the finish line; (b) the end-of-game screen summarizing the player's performance.

Scene Design

Several 3D game assets such as the terrain, trees and fences have been designed in 3ds Max ab initio using basic shapes. Textures designed in Photoshop were subsequently applied to the objects (Figure 17). In addition, the ski equipment (skis and poles) has been designed and attached to the avatar, while other objects are allowed to dynamically interact with it. However, the ski equipment can also become detached from the avatar, e.g., in case the avatar collides with an obstacle and falls. A third-person camera view tracks the avatar during gameplay. Other assets that can interact with the avatar include ramps, speed lanes, sidebars, prizes (in the form of coins and stars), and obstacles (in the form of rocks). A darker skybox (compared to the Balloon Goon game) was selected to create a contrast with the snowy terrain.


Figure 17: Static scenery game objects are composed of trees, fences and the game terrain.

Whereas the main game assets were created in 3ds Max, the majority of the dynamic game assets that interact with the avatar were recomposed within Unity to allow for grouping. For example, the jumping ramp game asset group is composed of the ramp 3D model, the speed lane, and three sidebars. This group was saved inside Unity as a prefab to allow spawning in various locations in the game scene (Figure 18). Prefabs of stage parts with already positioned obstacles and prizes can be loaded at game start but also during gameplay. These premade game sets simplify level design, as the designer only has to place these stage sets in the desired position inside the game environment.

Figure 18: Slope Creep game grouped game objects saved as prefabs. From left to right: the avatar with attached ski equipment, the speed lane and ramp group, and the obstacle with attached skis (having detached from the avatar).
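The idea of assembling a level from premade stage-set prefabs, loaded at game start or during gameplay, can be sketched as a segment-streaming loop. This is a language-neutral illustration (Unity itself would do this in C# with `Instantiate`); the segment names and lengths are invented.

```python
# Sketch of stage-set streaming: premade segments (prefabs) are appended ahead
# of the skier as it advances, so levels are assembled from reusable chunks at
# load time or during gameplay. Segment names and lengths are illustrative.
import itertools

def stream_segments(segment_cycle, segment_length, view_ahead, avatar_z,
                    placed_until=0.0):
    """Yield (name, z_position) for every segment needed to keep the track
    populated up to `avatar_z + view_ahead`."""
    names = itertools.cycle(segment_cycle)
    z = placed_until
    while z < avatar_z + view_ahead:
        yield next(names), z
        z += segment_length
```

Called each frame with the avatar's current position, the generator emits only the segments not yet placed, which matches the "load during gameplay" behavior of the prefab sets.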

DISCUSSION AND FUTURE WORK

Regular physical exercise and appropriate training can significantly improve motor function, postural control, balance, and strength in Parkinson's disease (PD) patients. This work presents an interactive 3D game platform built on Unity's 3D game engine which hosts two exergames (the Balloon Goon game and the Slope Creep game) developed specifically for PD patients with mild to moderate motor symptoms (Hoehn & Yahr stages II and III). The platform employs Microsoft's Kinect sensor to capture patient movement in real time, without handheld controllers or external raised platforms that may endanger or impede patients with PD in particular. In fact, main platform navigation and gameplay adhere to PD-specific design requirements and principles drawn from the bibliography and presented in [21]. The choice to employ the Kinect sensor offers a unique opportunity to create game systems that facilitate patient monitoring during exercise and provide real-time feedback, such as onscreen guiding artifacts and repetition counters as well as performance data.

A unique contribution of the present work is the design and implementation of a custom collection of platform gestures that are tailored to the PD condition, have been designed mainly to enforce correct exercise form and execution, and are employed as a motion "vocabulary" to facilitate game navigation and interaction with game objects. For example, when a macroscopic physical movement of a player "matches" a preprogrammed gesture, an onscreen menu item is selected or a 3D cartoon avatar responds, according to the current game context. It is of paramount importance to note that these gestures have emerged from existing training programs tailored to the disease, which focus on strengthening, stretching, and increasing overall mobility for the upper and lower limbs. Therefore, to reap the full rehabilitation benefits of the corresponding exercises, they must be followed accurately and in correct form. Our way to enforce this is through a vocabulary of gestures. Furthermore, within the context of each game, these gestures engage the cognitive and neuromuscular systems to various degrees and promote their real-time collaboration.

In this regard, our approach differs from related work ([22, 23, 25, 26]), which adopts an objective-based approach allowing free body form during gameplay as long as the game objective is met: our approach is much more disciplined and structured, as it is gesture-based. That is, a given gesture is recognized as such only if the player's motion conforms to the programmed movement pattern. For example, in the Balloon Goon game, proper detection of a "Punch" gesture involves the player extending an arm from the resting position up and forward until fully extended to the front (as in panel (b) in Figure 3), and proper detection of a "Kick" gesture requires the player to both lift and extend a leg forward until fully extended to the front (see panel (c) in Figure 3). Moving on to the Slope Creep game, for the "Push Both Ski Poles" gesture to be recognized and have the avatar respond as programmed, the player must (a) stretch both arms in unison to a frontal extension and then (b) lower them down to the initial position again, all within a prescribed amount of time.

If this pattern is violated (e.g., if only one arm moves in stage (a) or in stage (b), or if the entire moving pattern (a)+(b) takes too long), the gesture tracking logic discounts the motion pattern being tracked as a legitimate gesture and does not allow the on-screen avatar to respond as programmed, i.e., to start moving forward in the game environment. Proper gesture detection involves tracking the positions of the relevant joints (hand, elbow, and shoulder for arm movements; hip, knee and ankle for leg movements) and deriving the relevant angles in real time. Similar timing constraints are applied in the "Lean Torso to the Side" and "Squat" gestures of the Slope Creep game. Parameterization is such that parameter values can be set for a spectrum of patients (e.g., per Hoehn & Yahr stage) or even on a per-patient basis, according to directions or feedback from an attending physiotherapist. The latter approach is meaningful in situations where some patients are more affected and frustrated by repeated failure as they become acquainted with platform navigation and the games themselves.

Another contribution of the present work worth mentioning is the platform's ability to quantify patient mobility/dexterity on a per-gesture and per-session basis. Although such detailed "kinesiological imprints" can be affected by various factors, such as time of day, patient tiredness, effectiveness of administered drugs, and on/off times, meaningful and statistically sound results over a period of a few days of using the platform are possible: for example, by exercising early in the day and at the same time after taking medication. As a result, carefully customized daily exercise schedules afford the possibility to collect a time series of performance data that can be usefully correlated with, e.g., detailed medication history records and disease progress. The platform has recently been demoed to local physiotherapists attending Parkinson patients with mild to moderate motor symptoms to obtain feedback with respect to the body kinesiology entailed in each individual game.
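The joint-angle derivation mentioned above is plain vector geometry: an elbow (or knee) angle follows from three tracked joint positions per frame. The sketch below is a generic illustration; the 160-degree extension threshold is an assumption, not the authors' value.

```python
# Hedged sketch of the per-frame joint-angle computation: the angle at joint b
# is derived from three tracked 3D joint positions, e.g., shoulder-elbow-hand
# for an arm-extension check. The extension threshold is an assumed value.
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    v2 = (c[0] - b[0], c[1] - b[1], c[2] - b[2])
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def arm_extended(shoulder, elbow, hand, min_angle=160.0):
    """A 'fully extended' arm has an elbow angle approaching 180 degrees."""
    return joint_angle(shoulder, elbow, hand) >= min_angle
```

Loosening or tightening `min_angle` is one concrete place where the per-patient parameterization described in the text could be applied.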
We actively seek to identify and address safety issues not already foreseen by the principles laid out in, e.g., [21], and thus not already incorporated into the platform design. Further planned improvements include tuning the looseness or strictness of the individual gestures per PD stage or even on a per-patient basis, personalizing exercise paths, and dynamically motivating patient participation in game scenarios that are more challenging at both the physical and cognitive levels.


Based on early positive feedback, we also seek funding to fully deploy the platform in selected physiotherapy establishments and, at a later stage, possibly obtain a proper license to allow willing PD patients to run the platform in their homes, partly to enable the collection of rich time series of performance data that can be usefully analyzed and correlated with, e.g., detailed medication history records and disease progress. It is, however, our belief that safe and carefully designed and implemented exergame platforms tailored to PD patients can act as a meaningful supplement to traditional physiotherapy programs, enlivening a daily exercise schedule, because the games themselves build on PD-specific training curricula tailored to the disease, though in a looser and more playful environment.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

Initial/preliminary results of the research presented in this article, namely, the Balloon Goon game, appeared as a conference paper [32]. The present article is significantly expanded so that it (a) incorporates that game into a new exergame platform for Parkinson patients, (b) adds an entirely new game, (c) describes architectural and design components for each game, and finally (d) adds unifying gesture-based game-level navigation that is appropriate for Parkinson patients. The authors would also like to acknowledge that the work described herein was partially funded by the first internal research funding program of TEI of Crete.


REFERENCES

1. A. Yarnall, N. Archibald, and D. Burn, "Parkinson's disease," Medicine, vol. 40, no. 10, pp. 529–535, 2012.
2. K. R. Chaudhuri, D. G. Healy, and A. H. V. Schapira, "Non-motor symptoms of Parkinson's disease: diagnosis and management," The Lancet Neurology, vol. 5, no. 3, pp. 235–245, 2006.
3. J. Jankovic, "Parkinson's disease: clinical features and diagnosis," Journal of Neurology, Neurosurgery & Psychiatry, vol. 79, no. 4, pp. 368–376, 2008.
4. ˆƒ>!“?!`’ !??ȁ ^!”  in Parkinson's disease," Journal of Medicine and Life, vol. 5, no. 4, pp. 375–381, 2012.
5. B. L. Den Oudsten, G. L. Van Heck, and J. De Vries, "Quality of life and related concepts in Parkinson's disease: a systematic review," Movement Disorders, vol. 22, no. 11, pp. 1528–1537, 2007.
6. M. F. Dirkx, H. E. M. den Ouden, E. Aarts et al., "Dopamine controls Parkinson's tremor by inhibiting the cerebellar thalamus," Brain, vol. 140, no. 3, pp. 721–734, 2017.
7. M. Horstink, E. Tolosa, U. Bonuccelli et al., "Review of the therapeutic management of Parkinson's disease. Report of a joint task force of the European Federation of Neurological Societies and the Movement Disorder Society-European Section. Part I: early (uncomplicated) Parkinson's disease," European Journal of Neurology, vol. 13, no. 11, pp. 1170–1185, 2006.
8. A. Shumway-Cook, W. Gruber, M. Baldwin, and S. Liao, "The effect of multidimensional exercises on balance, mobility, and fall risk in community-dwelling older adults," Physical Therapy, vol. 77, no. 1, pp. 46–57, 1997.
9. V. A. Goodwin, S. H. Richards, R. S. Taylor, A. H. Taylor, and J. L. Campbell, "The effectiveness of exercise interventions for people with Parkinson's disease: a systematic review and meta-analysis," Movement Disorders, vol. 23, no. 5, pp. 631–640, 2008.
10. L. E. Dibble, O. Addison, and E. Papa, "The effects of exercise on balance in persons with Parkinson's disease: a systematic review across the disability spectrum," Journal of Neurologic Physical Therapy, vol. 33, no. 1, pp. 14–26, 2009.
11. M. Lugassy and J.-M. Gracies, in Principles of Treatment in Parkinson's Disease, A. H. V. Schapira and C. W. Olanow, Eds., Elsevier, 2005.


12. M. Svensson, J. Lexell, and T. Deierborg, "Effects of physical exercise on neuroinflammation, neuroplasticity, neurodegeneration, and behavior: what we can learn from animal models in clinical settings," Neurorehabilitation and Neural Repair, vol. 29, no. 6, pp. 577–589, 2015.
13. A. T. Caglar, H. N. Gurses, F. K. Mutluay, and G. Kiziltan, "Effects of home exercises on motor performance in patients with Parkinson's disease," Clinical Rehabilitation, vol. 19, no. 8, pp. 870–877, 2005.
14. B. G. Farley and G. F. Koshland, "Training BIG to move faster: the application of the speed-amplitude relation as a rehabilitation strategy for people with Parkinson's disease," Experimental Brain Research, vol. 167, no. 3, pp. 462–467, 2005.
15. O. Assad, R. Hermann, D. Lilla et al., "Motion-based games for Parkinson's disease patients," in Entertainment Computing – ICEC 2011, vol. 6972 of Lecture Notes in Computer Science, pp. 47–58, Springer, Berlin, Heidelberg, 2011.
16. J. Song, S. S. Paul, M. J. D. Caetano et al., "Home-based step training using videogame technology in people with Parkinson's disease: a single-blinded randomised controlled trial," Clinical Rehabilitation, vol. 32, no. 3, pp. 299–311, 2018.
17. R. Kizony, P. L. Weiss, M. Shahar, and D. Rand, "TheraGame: a home based virtual reality rehabilitation system," International Journal on Disability and Human Development, vol. 5, no. 3, 2006.
18. F. A. D. S. Mendes, J. E. Pompeu, A. M. Lobo et al., "Motor learning, retention and transfer after virtual-reality-based training in Parkinson's disease - effect of motor and cognitive demands of games: a longitudinal, controlled clinical study," Physiotherapy, vol. 98, no. 3, pp. 217–223, 2012.
19. G. D. P. Vieira, D. Freitas, and G. Henriques, "Virtual reality in physical rehabilitation of patients with Parkinson's disease," vol. 24, no. 1, pp. 31–41, 2014.
20. G. Barry, B. Galna, and L. Rochester, "The role of exergaming in Parkinson's disease rehabilitation: a systematic review of the evidence," Journal of NeuroEngineering and Rehabilitation, vol. 11, no. 1, article 33, 2014.


21. I. Pachoulakis, N. Papadopoulos, and C. Spanaki, "Parkinson's disease patient rehabilitation using gaming platforms: lessons learnt," International Journal of Biomedical Engineering and Science, vol. 2, no. 4, pp. 1–12, 2015.
22. A. Amini, K. Banitsas, and W. R. Young, "Kinect4FOG: monitoring and improving mobility in people with Parkinson's using a novel system incorporating the Microsoft Kinect v2," Disability and Rehabilitation: Assistive Technology, pp. 1–8, 2018.
23. B. Galna, D. Jackson, G. Schofield et al., "Retraining function in people with Parkinson's disease using the Microsoft Kinect: game design and pilot testing," Journal of NeuroEngineering and Rehabilitation, vol. 11, article 60, 2014.
24. R. Hermann and M. Herrlich, Strong and Loose Cooperation in Exergames for Older Adults with Parkinson's Disease, Mensch & Computer, München, 2013.
25. I. Cikajlo et al., "Telerehabilitation of upper extremities with target based games for persons with Parkinson's disease," in Proceedings of the 2017 International Conference on Virtual Rehabilitation, ICVR 2017, Canada, June 2017.
26. *}^_!˜žž >^ž ž ž  ž‘ ¯ Counts.pdf.
30. R. Webber and B. Ramaswamy, "Keep Moving - Exercise and Parkinson's," Parkinson's Disease Society of the United Kingdom, 2009, https://www.gov.im/media/1355779/keepingmoving-exercise-and-parkinsons.pdf.


31. "Exercises for people with Parkinson's," http://www.parkinson.ca/atf/cf/%7B9ebd08a9-7886-4b2d-a1c4-a131e7096bf8%7D/EXERCISEMAR2012_EN.PDF.
32. I. Pachoulakis and N. Papadopoulos, "Exergames for Parkinson's disease patients: the Balloon Goon game," in Proceedings of the 2016 International Conference on Telecommunications and Multimedia, TEMU 2016, pp. 12–17, Greece, July 2016.
33. KNGF, Guidelines for Physical Therapy in Patients with Parkinson's Disease.
34. T. Ellis, T. Rork, and D. Dalton, "Be Active! An Exercise Program for People with Parkinson's Disease," The American Parkinson Disease Association, 2008.
35. "Exercises for people with Parkinson's," Parkinson Canada, https://www.parkinson.ca/wp-content/uploads/Exercises_for_people_with_Parkinsons.pdf.

CHAPTER 16

Development of a Gesture-Based Game Applying Participatory Design to Reflect Values of Manual Wheelchair Users

Alexandre Greluk Szykman,1 André Luiz Brandão,1 and João Paulo Gois1

1 Federal University of ABC–UFABC, Avenida dos Estados 5001, Santo André, Brazil

ABSTRACT

Wheelchair users have benefited from Natural User Interface (NUI) games because gesture-based applications can help motor-disabled people. Previous work showed that considering the values and the social context of these users improves game enjoyment. However, the literature lacks studies that address games as a tool to approach personal values of people with physical disabilities. Participatory design encompasses techniques that allow absorbing and reflecting values of users into technologies. We developed a gesture-based game using participatory design addressing values of wheelchair users. To manage the development of our game, we permitted creativity and flexibility to the designers. Our design is aligned to the Game SCRUM and makes use of concepts from the Creative Process. The products of each stage of the design that we applied are both a gesture-based game and its evaluation. We tested the enjoyment (immersion, difficulty while playing, etc.) of users for the game that we developed through quantitative and qualitative analyses. Our results indicate that the game was able to provide a satisfactory entertaining experience to the users.

Citation: Alexandre Greluk Szykman, André Luiz Brandão, João Paulo Gois, "Development of a Gesture-Based Game Applying Participatory Design to Reflect Values of Manual Wheelchair Users", International Journal of Computer Games Technology, vol. 2018, Article ID 2607618, 19 pages, 2018. https://doi.org/10.1155/2018/2607618.

Copyright: © 2018 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Link: https://www.hindawi.com/journals/ijcgt/2018/2607618/

INTRODUCTION

Practicing exercises has been proven to develop the physical, social, and cognitive capabilities of wheelchair users [1, 2]. Further, researchers showed that exercises lead wheelchair users to have a better quality of life than able-bodied people who do not practice [3]. Nevertheless, in the current social context, wheelchair users have reduced opportunities to participate in group physical activities [1, 2]. Researchers have been tackling this issue with the development of exergames, where users perform physical exercises while interacting with the mechanics of digital games [1, 4]. User interaction with exergames often employs a Natural User Interface (NUI) through motion capture [5]. There is evidence that people with disabilities, including wheelchair users, become motivated while practicing solo or group exercises through interaction with NUI-based games [6].

Szykman et al. [6] described the panorama of research involving people with physical disabilities and NUI-based games developed up to the year 2015. The authors sought answers to the question "How are researchers conducting studies with NUI games and people with physical disabilities?". Szykman et al. [6] evidenced the importance of the development of NUI games for people with disabilities through two main observations: (1) 35% of the users who went through a treatment involving NUI games declared some form of improvement in quality of life and (2) 95% of the researchers who measured the users' enjoyment of the developed games pointed out that the users had a positive experience.

Most gesture-based games have their main focus on the rehabilitation of the users [6]. Examples for rehabilitation are games whose user interface is structured upon the necessary movements for


physiotherapy after a stroke or cerebral palsy. Szykman et al. also observed that there is a small number of games that focus on the inclusion of people with physical disabilities into society. An example of inclusion is a game that allows people with an amputated arm to play together with able-bodied people.

From the design side, Gerling et al. [7] analyzed the importance of cooperation between experts and wheelchair users in the development of games with participatory design (PD). The authors concluded that expert designers who are not wheelchair users had more positive expectations about the representation of disabilities in game content than wheelchair users. The authors therefore argued for a balance between the insights of those two roles. They also mentioned that the manner in which disabilities are represented influences the player experience of the game. In general, enjoyment was higher when the act of overcoming the handicap was represented in a manner that empowered users [7]. That insight converges with the conclusions of Hutzler et al. [8] and Szykman et al. [6] about representing tools for overcoming disabilities as personal values of users in the game.

In 2016, Gerling et al. [9] represented the values of young powered wheelchair users in the design of gesture-based games through the application of PD techniques. The authors elicited the participants' values for self-perception, gaming preferences, and gesture-based play. The results, based on testimonials from the users, showed that the positive player experience of the game was substantially satisfying with the representation of their values in content [9].

We describe the design that we conducted for the creation of a gesture-based game with manual wheelchair users. We based the design on Game SCRUM and on concepts of participatory design. Besides, the design activities that occurred during our design also took into account aspects of the Creative Process to aid the emergence of users' values. We measured the enjoyment (immersion, difficulty, etc.) of users for the developed game with quantitative and qualitative analyses. The evaluation of the game indicates that the design that we applied led to a productive game.

The contribution of this paper is the description of the design of a gesture-based game that reflects values of manual wheelchair users through participatory design. In particular, we seek to reflect such values in a game developed for the purpose of entertainment.


We organize this paper as follows. In Section 2 we present value-guided approaches, from seminal studies to those closer to our approach. In Section 5 we present our design, detailing the set of sessions that composed it as well as aspects of the game we developed. After that, in Section 3 we evaluate the gesture-based game that we developed, in which wheelchair users and their relatives played together. In Section 4 we discuss how the design helped values to emerge, compare our PD sessions with those of previous studies, and discuss the values that emerged along the PD sessions. In Section 6 we detail limitations of our study. In Section 7 we conclude the present work.

VALUE-GUIDED APPROACHES TO DESIGN

The interest in, and the distinct conceptualizations of, human values have been continuously present in several areas, e.g., computer ethics, social informatics, and participatory design [11]. Nonetheless, a significant challenge arises when taking values into account in the design process: values, which are naturally controversial or conflicting, are not always easy to detect and incorporate into the technology. Such a challenge, as well as the different views of values, has led the HCI community and correlated research areas to produce fruitful studies, reflections, and methodologies to elucidate values in the development of technologies. One common aspect of the main studies is that values must be dominant and central in the design process [12–15]. Friedman and colleagues presented seminal studies about value-sensitive design (VSD) [11, 16, 17]. The starting point of VSD is the values that center on human well-being, dignity, rights, justice, and welfare. The VSD methodology relies on an iterative process that combines a tripod of studies: conceptual, empirical, and technical [11]. Such studies provide a rich set of questions, from philosophical to technical aspects, to be answered throughout the design process, consequently recreating Human-Computer Interaction (HCI) as a design discipline.

We used the functional classification of the International Wheelchair Basketball Federation (IWBF Table) to classify the functional abilities of users in a range from 1.0 (little or no controlled trunk movement) to 4.5 (normal trunk movement in all directions). Two of the wheelchair users have paraplegia, scoring 3.0 and 4.5. The other wheelchair user has quadriplegia, scoring 2.0.

Development of a Gesture-Based Game Applying Participatory Design ...


Table 1: Characteristics of the codesigners (participants).

Label   Role in PD        Age   Sex      IWBF Class
A       Wheelchair User   26    Female   2.0
B       Wheelchair User   40    Male     3.0
C       Wheelchair User   34    Male     4.5
D       Tennis Coach      58    Female   4.5
E       Physiologist      53    Female   4.5

We conducted a set of PD sessions with the participants described in Table 1. According to previous studies, we time-marked our sessions to 60 minutes [30]. This time limit aims to avoid fatigue and pressure on the participants and to keep their focus. Before the beginning of each session, we announced the time limit to all participants. Every PD session occurred after a wheelchair tennis practice of the participants. Since our participants always wanted to make productive contributions, an effect of an agreed-upon time limit was that participants tried to resolve discussions in a timely manner instead of getting lost in endless arguing. All sessions were video-recorded to capture the information of the design as a narrative [9, 31]. The application resulted in a reviewed prototype of a gesture-based game. For the artistic assets, we had the support of the Jecripe Project (https://jecripe.wordpress.com/english/).

The role of the participants during the sessions, as "experts of their own lives", included providing information based on their values to nourish the Creative Process. Furthermore, they participated in identifying and prioritizing requirements, evaluating prototypes, and interpreting and discussing collected data. The tasks of our role as researchers are listed below [31]:

(i) Explain the objectives of the design activities in each PD session
(ii) Facilitate discussions among participants
(iii) Document data coming from participants
(iv) Abstract the resulting information into game elements, considering the technical feasibility of their implementation
(v) Discuss the results of the sessions, taking into account previous studies and good practices in participatory design in the literature


Session 1: Introducing the Framed Problem

Our design initially required the researchers to pursue an analysis of the population, the social impact, and the technology to be used for the game to be developed. Our research is detailed in a previous work [6] and resulted in a problem framing: How to develop a gesture-based game for wheelchair users focusing on their values and social context? In this session, we introduced the framed problem and conducted a conversation to settle the participants. Our objective was to capture relevant elements from the values of the participants. The discussion flowed freely, and the participants did not seem to worry about the final goal (the game). The main topics of the conversation were objects or events of interest of the participants, such as the sports that they used to practice and obstacles of their daily life. We defined and conducted a design session lasting 60 minutes and with a theme focused on elements of interest of the participants. Figure 1 shows a photo of the first session; the meeting format of the other sessions was similar to the one shown in the figure.

Figure 1: A photo of the first PD session: the participants used paper and pencil to annotate their insights.

As supporting materials for the creative stimulus of the participants, we provided paper, pencils, colored pens, scissors, and glue, following the observations from Pommeranz et al. [30], who investigated how the use of such materials can support elicitation sessions. Thus, we made those materials available to participants to help them express their ideas in whatever manner they preferred. We noticed that all participants voluntarily adopted the practice of writing their elements of interest as topics on the paper. They ignored scissors and glue. We believe that, as the designers were very excited because the session occurred just after their tennis practices and there was a time restriction, they opted for strong conversational interaction instead of using manual materials with which they would not interact with each other.

The time restriction also had a positive effect on the session. The predetermined time span for the session induced the participants to write more elements on paper, and faster. The feeling of participating in a game with time constraints made participants worry about optimizing the process. Sometimes, when the conversation involved only a part of the participants, the others used the time to write down more elements. We used the written elements to restore the pace of the conversation when it decreased. During long gaps between interactions, we (the researchers) intervened, asking the quietest participant to read aloud the elements that he or she had written and about which we had not yet talked. As a result, the entire session generated data to be analyzed. Therefore, we obtained a satisfactory outcome from this session. However, we noticed that extending the session's time could have led to more information being collected.

During the session, we (the researchers) interacted with the participants only in rare opportunities. The main interaction took place at the beginning of the session, when we explained the framed problem. There were also interactions to keep the pace of the conversation. In that case, we interacted when we perceived a lack of balance in the discussion, i.e., when one of the participants spoke much more than the others. In those situations, we asked the opinion of the quieter participants on the subject at hand.
We opted to keep the discussion among wheelchair users A, B, and C rather than between the able-bodied participants D and E (their tennis coach and physiologist) [31]. Thus, we took into account, but did not voluntarily encourage, interactions between participants D and E. Finally, we realized that the fact that users were aware of our intention to develop a game with the Kinect sensor contributed to this topic being often mentioned during the conversation, as observed by Smith et al. In the end, the session provided rich information from the participants. We outlined the information in the form of a recorded discussion. Afterward, we could work with that information to represent it in a cloud map. We detail that step of the process in the next session. Table 2 summarizes the characteristics of Session 1.

Table 2: Summary of the characteristics of Session 1.

Session 2: Cloud Map Building

Watching the recording of the first session, we analyzed it, transcribing all the main elements that were verbalized during the conversation. We considered as main elements the substantives or verbs that represented relevant aspects of the theme of the conversation. Supported by the visualization software VOSviewer [34], we rendered those elements into a cloud map, which highlights the most relevant words from texts. The maps can also place words that have a close relationship near each other. Participants B, C, and E (Table 1) attended the second PD session. We conducted the meeting with those participants and updated the others afterward. In this session, only two of the three researchers participated. We managed to keep the number of participants higher than the number of researchers to allow the participants to feel more comfortable during the sessions [8]. In the session, the participants recommended modifications to the cloud map so that it better represented the elements that they tried to express. Figure 2 displays the resulting cloud map from this PD session. That map contained information about the values and social context of the participants. We used that information in the subsequent sessions to structure relevant user stories, commonly employed in Game SCRUM [35], which guided the development of the game. Table 3 summarizes the characteristics of Session 2.

Table 3: Summary of the characteristics of Session 2.


Figure 2: Cloud Map generated from information collected in Session 1. The most relevant words are highlighted and located closer to words related to them.

As the codesigners asked to modify the cloud map, we noticed that some values started to arise. For instance, the words adrenaline, speed, race, and height in the cloud map (Figure 2) were mentioned in a context where the codesigners were looking for empowerment. While the codesigners were discussing theater, friendship, and justice, they focused on noncompetitive sports. Finally, when talking about fun, sports, game, and physical activity, they were talking about their motivation to engage in diverse activities in their lives. We discuss each of these aspects in detail in Section 5.

Session 3: Generating User Stories

The goal of Session 3 was to combine aspects of the cloud maps into user stories within the given time. The participants randomly read some items in the maps and tried to create sentences with those elements. We suggested that, while forming the sentences, participants keep in mind the essential game elements: mechanics, technology, story, and aesthetics [29]. The stories did not necessarily use the exact words that are in the cloud map. Instead, we noticed that the participants were inspired by the words in the maps to create stories with similar words [36]. For instance, the fact that the words car and adrenaline were present in the maps led to a discussion about including a race in the game. As the codesigners connected the elements, the values emerging in the previous session started to become more evident.

The use of cloud maps proved to be effective for the session. To perform the combinations, participants tended to focus on elements highlighted in the cloud map. Cloud maps evidence elements that were frequently mentioned, which are the most relevant to the users. Therefore, participants prioritized combinations among the most relevant elements. In this session, we had to intervene as researchers more often than in previous sessions. We realized that, initially, the participants did not understand the purpose of the Design Activity. Thus, we provided some examples of combinations to clarify the purpose. We noticed that the participation of at least two researchers in the session was important. While one researcher intervened with examples of combinations, the other mediated the session to guarantee that participants created their own combinations.

We observed a divergence of views between participant A and the subgroup formed by B and C. B and C proposed user stories directing to a more realistic and serious game, whereas A produced user stories leading to a more recreational and playful game. Regarding the gesture-based aspect of the game, participant A proposed controls by extravagant gestures inspired by dance moves. Participant B demonstrated to be slightly disconcerted in imagining himself executing those kinds of movements. However, participant B discoursed on the importance of the game having an educational aspect for people who need to gain experience in conducting wheelchairs. Participant C agreed with B by nodding his head, while A demonstrated signs of frustration. At this point, two open questions divided the participants: first, whether the game should be something serious or something playful or even artistic and, second, whether the game mechanics should allow or even require a more extroverted expressibility. To include more participants in the discussion and reach a consensus, we postponed those decisions to the following session.

As a result of Session 3, we obtained a database of 42 user stories for the development of the game. We classified the stories into the four basic game elements proposed by Schell [29]: aesthetics, mechanics, story, and technology. In this study, the stories are sorted into tables according to those categories. We observed that even though user stories related to technology (Table 5) were relevant to the game, the participants proposed a lower number of those stories compared to the other categories. Keith [35] recommended around 100 user stories for simple games. Nevertheless, as the session got close to its end, the frequency of creating combinations for the game decreased considerably, and the participants appeared to be tired [32, 33]. Figure 3 relates the number of suggested user stories to the basic game elements. Table 4 displays the summary of the characteristics of Session 3.

Table 4: Summary of the characteristics of Session 3.

Table 5: User stories about technology collected from participants during Session 3.

Figure 3: Number of suggested user stories by basic elements of games [29].

Session 4: Prioritizing User Stories

The goal of the fourth PD session was to converge the user stories retrieved in Session 3 into the basic elements for structuring a Concept Prototype of the game. Between the third and fourth sessions, we did not interact with the participants to allow subconscious idea generation, also referred to as the period of incubation of ideas in the Creative Process, to improve the quality of the creative outputs [36, 37]. We understood that the session successfully reached its goal because the information received in it was enough to develop the Concept Prototype. The convergence of the information collected in the other sessions marked the illumination event [36] and the transition from the concept to the preproduction phase.


We started the session asking the participants to try visualizing the structure of a game based on the user stories generated in Session 3. The participants initially remained in consensus on the game elements that should be built into the game. After 15 minutes of discussion, some disagreements arose. The main impasse of the session was the balance between the playful and educational aspects of the game. This question had already appeared in the previous session, during a discussion between participant A and participants B and C. We intervened by providing printed versions of the tables with the user stories. We suggested that the participants decide on a priority for each user story. With the tables at hand, it was easier for the participants to converge their opinions. We understand that such a fact happened because the stories that represented contradictory elements were not eliminated, but attributed a lower level of priority. That solution resolved the conflict raised in the previous session. Table 6 displays a summary of the characteristics of Session 4.

Table 6: Summary of the characteristics of Session 4.

Objective   Prioritize the generated user stories
Format      Design Activity played with the participants and facilitated by researchers
Duration    60 minutes
Output      User stories prioritized in a way that allows the development of the game

The Concept Prototype

The result of Session 4 was the definition of the Concept Prototype to be developed. Considering the deadline of the project, we agreed with the participants on which user stories would be implemented for this first functional version of the game. We implemented seven of the 13 user stories about the game mechanics, all three about technology, nine of the 12 about story, and nine of the 14 about aesthetics. We developed the first version of the game following the directions in the prioritized user stories. We structured the aspects of the game not covered by the user stories based on insights from previous work [6]. In the next subsections, we detail the prototype. Because this is a Concept Prototype, we implemented the main character as male because he reflects a high-performance athlete who is a friend of the codesigners. However, in future versions of the game, we can develop a customizable character for the users.

Design and Gameplay

Differently from the study of Gerling et al. [9], all the participants of our work requested that the main character of the game be a wheelchair user. Having the main character as a wheelchair user evidenced the motivational value of the game because the players felt more engaged to participate in outdoor activities by watching the character of the game. We created the context and scenarios of the game in a way that involves the character in diverse activities as a wheelchair user. Thus, we divided the game into four stages named House, Street, Sea, and Sky. The first playable stage is the House of the main character. Figure 4 illustrates the possible flow between stages. Starting from the House, the player can choose between carrying out activities inside the home or moving to another stage. Figure 5 displays the scenario of the House stage. For selecting the stages, the character can drive a car or take a bus. Thus, the House stage works as a selection menu. Opting for driving the vehicle, the player enters the Street stage. Opting to take the bus, he or she goes directly to the chosen stage.

Figure 4: The flow between the stages of the game: players can go to the stages Sea or Sky by taking the bus or drive to them by passing through the Street stage.


Figure 5: A screen from the developed game with one person playing. The depicted stage is the House, where the character of the game explores the scenario to enter the other stages.

Except for the House stage, the game is essentially multiplayer. This aspect encourages interactions of the main player, whom we assume is a wheelchair user, with other people around him or her. The game mechanics are similar in the stages Street and Sea. The player must dodge obstacles while piloting a car and a jet-ski, respectively. The goal of both stages is to complete the route as fast as possible. At the end of each stage, the game records the players' time-stamp in a ranking that is reproduced on a panel in the House stage. The player starts the route with an initial speed. This speed increases as the player goes around obstacles without colliding or goes through items in turbo mode. Hitting obstacles causes the speed to decrease. Having the speed vary according to how well players dodge the obstacles adapts the difficulty of the game to the skills of the players. In the Sky stage, the player rides a paraglider, controlling the height of the character to avoid the obstacles along the route.

The game aims to produce an encouraging atmosphere for the player. This aspect was a result of our balance between the playful and educational aspects suggested by the participants during the PD process. Players can try the stages various times to improve their scores (playful). As a consequence, players perform more exercises (educational). The game is multiplayer, encouraging cooperation between players. The speeds of the two players are coupled during the gameplay. For instance, if one player hits an obstacle, the speed for both players decreases. The purpose of this feature is to allow friendly and equal interaction between players. In all stages, when one player is close to the end of the route, the game displays encouraging feedback to both players, reinforcing the friendly interaction between them. Figure 6 displays the celebration screen after players complete the Street stage. In general, the game aims to encourage wheelchair users to practice activities considered obstacles by the community, such as driving a car or riding a jet-ski or a paraglider.
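The shared-speed mechanic can be sketched in a few lines. The handler names and tuning constants below are our own illustration, not the actual game code:

```python
# Minimal sketch of the shared-speed cooperative mechanic: both players share
# one speed value, so either player's dodge or mistake affects the pair.
INITIAL_SPEED = 10.0
MIN_SPEED, MAX_SPEED = 4.0, 30.0


def clamp(v, lo, hi):
    return max(lo, min(hi, v))


class SharedRun:
    """One speed value shared by the two players of a stage."""

    def __init__(self):
        self.speed = INITIAL_SPEED

    def on_dodge(self):
        # Either player dodging an obstacle raises the shared speed.
        self.speed = clamp(self.speed + 1.0, MIN_SPEED, MAX_SPEED)

    def on_turbo_item(self):
        # Passing through a turbo item gives both players a boost.
        self.speed = clamp(self.speed * 1.5, MIN_SPEED, MAX_SPEED)

    def on_hit(self):
        # Either player hitting an obstacle slows both players down.
        self.speed = clamp(self.speed * 0.5, MIN_SPEED, MAX_SPEED)


run = SharedRun()
run.on_dodge()   # one player dodges: shared speed rises
run.on_hit()     # the other collides: shared speed drops for both
print(run.speed)
```

Coupling the speed in one place, rather than per player, is what makes the mechanic cooperative: no individual action can put one player ahead of the other.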

Figure 6: Celebration screen after a stage completion.

User Interface

There were few user stories concerning the technology of the game, so we had few features to define for the user interface. Nevertheless, we designed the controls of the Concept Prototype to be as simple as possible to facilitate the engagement and understanding of the participants. The gesture-based interaction for this version of the game prioritized the movement of the arms, to include wheelchair users with either high or low trunk mobility. We developed the controls equally for all stages as a simulation of handling a steering wheel. In particular, in the House stage, the character's speed in the forward plane was controlled by small head movements. In the Sky stage, the head movements controlled the height of the character. A forward bend causes the character to move forward, and vice versa. Figure 7 represents the logic for controlling the character.
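The steering-wheel control described above can be sketched as follows, assuming skeleton joints arrive as (x, y, z) tuples from a Kinect-style tracker; the function names and the exact mapping are our illustration, not the game's implementation:

```python
import math

def steering_angle(left_hand, right_hand):
    """Angle of the imaginary wheel, in degrees; positive = turn right.

    Derived from the relative height of the two hand joints, as if the
    player were holding a steering wheel.
    """
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    return math.degrees(math.atan2(dy, dx))


def head_lean(head, shoulder_center):
    """Forward lean of the head relative to the shoulders (z-axis),
    used for speed in the House stage and height in the Sky stage."""
    return shoulder_center[2] - head[2]


# Right hand raised above the left: the wheel is turned to the right.
angle = steering_angle((-0.3, 0.0, 2.0), (0.3, 0.2, 2.0))
print(round(angle, 1))
```

Deriving the angle from two tracked joints, rather than from a physical controller, is what keeps the interaction accessible to players with limited trunk mobility: only arm movement is required.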


Figure 7: Relationship between body movements and the inputs in the game for controlling the character by NUI in the initial version of the game.

Session 5: Enhancing the Game's Interface Based on Users' Values

Session 5 was based on the methodology of Sprint Reviewing of the Game SCRUM [35]. Along with the participants, we decided which features of the game could be enhanced with the experience that we had acquired. We enhanced game features to increase the playing experience in general. Moreover, we detected with the participants the necessity of including features to evidence their values in the game. At this point, we could represent in the game the values that emerged during the sessions (empowerment, noncompetition, and motivation).

The empowerment of the players in the game was mainly symbolized by the explicit representation of the main character as a wheelchair user. In the game, the character handles his wheelchair without any help and is capable of moving to any location. He performs activities that, according to the participants, are considered obstacles for that group of people: taking a bus, driving a car, riding a jet-ski, and riding a paraglider. To start performing any of the activities, the character steers the wheelchair to the location where the action starts. In that context, we represent the wheelchair as a means of transition from the game character's home to a more attractive place, rather than as a tool to assist a disabled person.


The noncompetition value is represented by the fact that succeeding in the levels does not depend on beating the other player. Instead, both players must cooperate to complete the route of each stage. The participants mutually agreed with this feature. Their objective was to develop a friendly atmosphere for friends and relatives who had suffered a lesion and were still recovering from a potential trauma while learning how to handle a wheelchair.

The motivation value was mentioned by the participants as a necessity of the game working also as an encouragement for players to try to perform the same or similar activities in real life. All the participants agreed that the gesture-based aspects of the game could increase the potential for this value if the required gestures in the game were more similar to the ones used in the real-life activities. According to our discussion, by repetitively performing similar gestures during the game, there is a better probability of players becoming comfortable performing them in real life.

From the Sprint Review, we concluded that the values empowerment and noncompetition were well represented in the game. However, motivation could be better represented with a revision of the gesture inputs. Thus, we decided to modify the user interface so that the required gestures were more similar to the ones in real life. Table 7 summarizes the characteristics of Session 5.

Table 7: Summary of the characteristics of Session 5.

The Preproduction Prototype

We developed the reviewed game during the preproduction phase, according to the Game SCRUM principles [35]. Therefore, we considered it a preproduction prototype. The developed game was named "Wheelchair Jecripe", as a continuation of the works of the Jecripe Project studies [39]. In Sessions 3 and 4, we structured the user stories to guide the game development. Most of the user stories were related to the mechanics, story, and aesthetics; only a few were related to technology, such as the gesture inputs (Figure 3). As a consequence of not having many user stories about technology, the features that changed the most in Session 5 were the gesture inputs. According to the participants, the revision of the movements reinforced the motivational aspect of the game. With the new inputs, players perform movements that are not precisely identical to those that they would do in real life in the activities depicted in the game. Nevertheless, the revised movements are more similar to the real movements than the initial ones. This aspect potentially increases the motivation of players to perform the depicted activities in real life.

As a recommendation from the participants, we switched from hand movements to body movements in the Street, Sea, and Sky stages: reclining the body to the left (right) makes the character move to the left (right). Especially for the Sky stage, players control the character with open arms, simulating the posture of a paraglider flight. We also revised the inputs for the House stage so that players could control the character by representing the movements of controlling a wheelchair: representing turning the left (right) wheel of the wheelchair makes the character turn to the right (left). Representing turning both wheels to the front while reclining the body to the front moves the character forward. Representing the movement of pulling back with the arms makes the character move back. Figure 8 displays the revised inputs for the game.

Figure 8: Relationship between body movements and the inputs in the game for controlling the character by NUI in the revised version of the game.
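The revised lean-based steering could be detected from tracked joints roughly as below; the joint choice, threshold value, and sign convention are assumptions for illustration, not the game's code:

```python
# Sketch: infer body lean from the horizontal offset between the
# shoulder-center and hip-center joints of a Kinect-style skeleton.
LEAN_THRESHOLD = 0.08  # metres; dead zone so small postural sway is ignored


def steer_from_lean(shoulder_center_x, hip_center_x):
    """Return 'left', 'right', or 'straight' from the torso lean."""
    offset = shoulder_center_x - hip_center_x
    if offset > LEAN_THRESHOLD:
        return "right"
    if offset < -LEAN_THRESHOLD:
        return "left"
    return "straight"


print(steer_from_lean(0.15, 0.0))   # clear lean to one side
print(steer_from_lean(0.02, 0.0))   # inside the dead zone: no turn
```

A dead zone of this kind matters for seated players: without it, the small sway involved in sitting in a wheelchair would register as constant steering input.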

The gesture inputs for the House stage required a particular setup of the Kinect sensor and a more complex approach for motion capture.


Contrary to the other stages, the gestures in the House stage required tracking of body parts along the z-axis (Figure 8). For optimized motion capture, the Kinect sensor was mounted at an angle relative to the players (Figure 9). That setup allowed more precise motion capture of the spatial coordinates because it avoided occlusions of the players' body parts along the z-axis. Figure 9 illustrates the mounting setup for the revised inputs.

Figure 9: The Kinect mounting setup for the revised inputs. The angled placement of the sensor avoids occlusions of body parts.
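When the sensor is mounted at an angle to the screen plane, the captured coordinates must be rotated back into the game's frame before the gesture logic runs. The sketch below assumes a 45-degree mount and a rotation around the vertical axis purely as an illustration; the actual angle and axis convention of the setup are not ours to assert:

```python
import math

# Assumed mount angle between the sensor's view direction and the
# game's forward axis (illustrative value only).
MOUNT_ANGLE = math.radians(45)


def to_game_frame(x, z):
    """Rotate sensor-space (x, z) by -MOUNT_ANGLE around the vertical axis."""
    c, s = math.cos(MOUNT_ANGLE), math.sin(MOUNT_ANGLE)
    return (c * x + s * z, -s * x + c * z)


# A point one metre straight ahead of the tilted sensor maps onto the
# diagonal of the game frame.
gx, gz = to_game_frame(0.0, 1.0)
print(round(gx, 3), round(gz, 3))
```

After this transform, the lean and wheel gestures can be evaluated in game coordinates regardless of where the sensor physically sits.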

In addition to the revision of the user interface, the participants asked for a way to pause the game. We developed an overlay menu with options to quit the game and return to the House stage. At any time in the game, players can lift the left arm and access that menu, in a manner similar to the native Kinect guide gesture. Figure 10 displays the pause overlay of the game.


Figure 10: The pause overlay of the game: by accessing the menu, players can quit the game or return to the House stage at any time during gameplay.

EVALUATING THE ENJOYMENT OF THE GAME

We analyzed the enjoyment of the game. Thus, we asked some of the participants, other wheelchair users, and their relatives to play the developed game. To estimate the enjoyment of the testers, we evaluated metrics such as learnability, immersion, enjoyment, and fatigue. Then, we compared the player experience of those groups (participants versus other wheelchair users and their relatives). This section presents the experiment and its results. All testimonials presented in this section were freely translated from Brazilian Portuguese by the authors.

Experimental Setup

We tested the preproduction prototype of our game with a population of 19 people (N = 19), divided into two groups. The first group consisted of four of the codesigners: two wheelchair users and two able-bodied users, three female and one male, aged 58, 53, 26, and 34. The second group was the noncodesigners group, with 15 testers. This group contained seven wheelchair users and eight able-bodied users. All the able-bodied users were relatives or colleagues of the wheelchair users.


Seven were female, and eight were male. The average age of this group was 32 years old, ranging from 10 to 60 years. Figures 11 and 12 represent the sex and age distributions of the groups.

Figure 11: Sex distribution among the testers.

Figure 12: Age distribution among the testers.

Figure 13 displays the favorite game genres among the testers. It is possible to see in Figure 13 that, for all the groups, most of the testers prefer games related to sports. We noticed that having the common practice of playing sports increased the expectation among participants for the game. A testimonial from one of the participants illustrates that fact:

(i) "I like to practice sports, I am really enjoying this project. I am sure it will be of great value to all involved people. Perhaps this game will make me play more video games." (Translated from Brazilian Portuguese)

Figure 13: Favorite game genres among testers.

The testers answered a pretest questionnaire, played the game, and then answered a posttest questionnaire. We also collected testimonials and observations from the testers. We only interrupted the playing sessions to help players who presented difficulties or in case of software or hardware problems. The experiment had the approval of the Federal University of ABC's Ethics Committee. For the testers who were not able to answer the questionnaires themselves, we asked their parents to answer the questions. A testimonial from the mother of one of those testers describes the case: "The answers of the questionnaires are based on the playing session of my son. He is a ten-year-old wheelchair user with coordination and cognitive limitations. His cerebral palsy is due to a problem during birth. He is a kid of easy comprehension. He communicates through gestures or communication tablets. He is very interested in electronic games." (Translated from Brazilian Portuguese)

We summarized the user experience questionnaires in Tables 8 and 9. Table 8 shows the pretest questionnaire, which was based on insights from our previous studies [6]. We formulated the topics in the pretest questionnaire to inquire about the profile of the testers. Table 9 presents the posttest questionnaire, also based on the related work of van der Spek [40].


We formulated the topics in the posttest questionnaire to inquire about the enjoyment of the testers in the game. Testers graded each topic of the questionnaires from 1 to 10 according to how much they agreed with the information presented in that topic, except for topic 1 from Table 8, which asked wheelchair users to inform their IWBF Functional Classification [2].

Table 8: Pretest questionnaire applied to the testers, translated from Portuguese.

Table 9: Posttest questionnaire, translated from Portuguese.

To generate topics 1 to 11 from Table 9, we employed part of the standardized ITC Sense of Presence Inventory [41], which measures the immersion level of testers when experiencing digital media. We formulated our questionnaire following the study of van der Spek [40], who employed the ITC Sense of Presence Inventory to measure the immersion of several testers in different versions of a serious game. Van der Spek's study was able to provide clear conclusions by using the ITC Sense of Presence Inventory, since part of the inventory is related to engagement and other factors, for instance, physical spaces and negative effects. Thus, the average of topics 1 to 11 from Table 9 formed an index for the immersion provided by the game to players.
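The immersion index is simply the per-tester mean of the grades for topics 1 to 11. A sketch with made-up grades (not the study's data):

```python
from statistics import mean, median

# Each tester grades topics 1-11 of the posttest questionnaire from 1 to 10.
# The grades below are invented for illustration only.
testers = {
    "codesigner_1":    [9, 9, 8, 10, 9, 9, 8, 9, 10, 9, 9],
    "noncodesigner_1": [8, 9, 9, 8, 10, 9, 9, 8, 9, 9, 9],
}

# ITC-based immersion index: the mean of the 11 topic grades per tester.
immersion = {name: mean(grades) for name, grades in testers.items()}

# The medians of these per-tester indices are what the boxplots in
# Figure 14 summarize for each group.
group_median = median(immersion.values())
print(immersion)
print(group_median)
```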


Computer Games Technology

In addition, as our game is gesture-based and aimed at wheelchair users, we included specific topics in the questionnaire to infer the tiredness of the users as well as how easily they got used to the Microsoft Kinect sensor in our game.

Results

For our analysis of the results, we created boxplots from the questionnaires (Tables 8 and 9) and also considered the testimonials of the testers. Figure 14, for instance, displays the boxplots of the players’ immersion level (average of the grades from the answers to topics 1 to 11 in Table 9). Each boxplot depicts the results of the codesigners and noncodesigners individually, and then the results for all testers.

Figure 14: ITC score boxplot graphics: the graphs compare the dispersion of results from the immersion level for the codesigners and noncodesigners in the game. The ITC score was formed by the average of grades from topics 1 to 10 of Table 9.

Figure 14 reveals that, even though the developed game was in a prototype stage during the test, most of the testers experienced a satisfying immersion level, close to 9. The outliers in Figure 14 are due to the presence of a boy with cerebral palsy, who had a hard time understanding how the game works. This result conforms to the results shown in Figures 15–17. These boxplots show the ease of learning how to play, of getting used to the sensor, and of scoring in the game. Those topics received high median grades. Some testimonials from the testers about their experience are listed below.

(i) “The game is an important encouragement for inclusion and physical and emotional rehabilitation for people with physical disabilities. It is great training of balance and coordination.” (Translated from Brazilian Portuguese)
(ii) “It was very good to see the excitement of my son when he perceived that he could control and interact with a video game with a simple body movement.” (Translated from Brazilian Portuguese)

Figure 15: Answers from topic 16 of Table 9: this topic measures how easily users learned how to play the tested game.

Figure 16: Answers from topic 17 of Table 9: this topic measures how easily users got used to the gestures requested in the game.


Figure 17: Answers from topic 18 of Table 9: this topic measures how easily users scored in the game.

The boxplot in Figure 18 displays the level of enjoyment during the game. This topic received median grades greater than or equal to 8.5. Some testimonials from the testers are listed below.

(i) “It was very enjoyable and likable. I enjoyed it very much. I had lots of fun and guess that the game will be a success. Congratulations on the dedication and commitment of the people involved.” (Translated from Brazilian Portuguese)
(ii) “I enjoyed the game very much!” (Translated from Brazilian Portuguese)
(iii) “The game is amazing for people with disabilities.” (Translated from Brazilian Portuguese)
(iv) “The game was delightful! I loved it!” (Translated from Brazilian Portuguese)

Figure 18: Answers from topic 19 of Table 9: this topic measures the enjoyment provided by the game.


Moreover, we noticed that the lower scores for immersion and enjoyment (Figures 14 and 18) came from testers who had difficulties understanding or playing the game due to more severe cognitive or physical disabilities. A testimonial from the mother of one of those testers follows: “It was pleasant. However, in the case of my son, we need something more playful and interactive to capture his attention better. As he does not have body control, […]. The proposal is valuable, though.” (Translated from Brazilian Portuguese)

It is possible to see in Figures 19 and 20 that the medians of the answers from topics 12 and 13, which concern physical and mental fatigue levels after playing the game, are higher for the noncodesigners group than for the codesigners group. This result is potentially due to the adjustments of game inputs implemented in Session 5. In that session, codesigners suggested more intense controls, which provided better movements in the game. On the other hand, the noncodesigners with less body control were hindered by the new inputs. In general, Figures 21 and 22 imply that most of the testers from both groups would prefer a game with even more intense movements.

Figure 19: Answers from topic 12 of Table 9: this topic measures physical fatigue after playing the game.


Figure 20: Answers from topic 13 of Table 9: this topic measures mental fatigue after playing the game.

Figure 21: Answers from topic 14 of Table 9: this topic measures how much testers would prefer the movements in the game to be more intense.


Figure 22: Answers from topic 15 of Table 9: this topic measures how much testers would prefer the movements in the game to be less intense.

We noticed that, even though the immersion (Figure 14) and enjoyment (Figure 18) evaluations indicated satisfying results, noncodesigners evaluated these dimensions more positively than the codesigners. We believe this is probably due to the expectation that the codesigners group had for the game, since they participated in its development. Another hypothesis relies on the fact that codesigners had contact with the game throughout its design process; therefore, it was expected that the game would cause less impact on them. We collected testimonials from all testers referring to technical improvements in the game. Those improvements can be implemented in possible future Sprint Reviews. We list the testimonials below.

(i) “I loved participating in the test. I think that the characters could have more intense colors. Congratulations for the initiative. I loved the project.” (Translated from Brazilian Portuguese)
(ii) “I liked it. However, some things did not work well in my opinion. The movements in the House scenario, for example, were not easy or practical. Maybe you guys should change the movements […]. I think it would be easier if I had more lights to follow, like a path. Lights in the arriving lane would be good too. In the Street scenario, the initial velocity was low. Maybe it could start faster.” (Translated from Brazilian Portuguese)
(iii) “My son is losing his vision, and the screen was too high; he did not see the game adequately. In our home, where the television is at his eyes’ height, he makes better progress. All in all, the game was great, I loved it. It gives players a sensation of freedom. This is very good and interesting. Congratulations for the initiative.” (Translated from Brazilian Portuguese)
(iv) “I have an Xbox 360 with a Kinect. This new Kinect is more sensitive than mine; then, I messed up a little bit with the controls.” (Translated from Brazilian Portuguese)
(v) “Suggestion of improvement: improve the velocity of the vehicles. Improve the Pause Menu that is called up every time we raise the left arm.” (Translated from Brazilian Portuguese)

In general, we noticed that the atmosphere during the tests was pleasant. We often noticed testers laughing and performing game gestures even when not playing the game. The atmosphere allowed even the shyest testers to interact and cooperate with their colleagues. We believe that the fact that we developed a game focused on the values of the users provided a friendly environment. The users seemed to feel comfortable with their representation in the game.

DISCUSSION

The participants actively contributed to the data analysis and interpretation, e.g., during the discussion and adaptation of the cloud maps. The researchers participated mainly as facilitators during the sessions. As a consequence, the participants generated a few user stories concerning technology. If the researchers had opted to be more active in the discussions with the participants, perhaps some issues could have been avoided, for instance, the refactoring of the input gestures. However, this could have also led to less freedom of the participants regarding expressing their values.

The design process allowed the participants to reflect their values in the game they helped to design. Our objective was to develop a game that could improve the quality of life of players by increasing their motivation to engage in the game or in other sports activities in the physical world. Although it is too early to make assertive claims, testers’ testimonials indicate the potential of the game to reach this goal. Regarding the context or user situation, we recruited participants who are already active sports practitioners, in this case, tennis. For future iterations of the design, practitioners of other sports could be included, as well as people who are less active and even practice no sports at all.

The participants contributed to the design of the game mechanics, content, aesthetics, and the base technology. Thus, all participants performed crucial roles during the design process. The base technology was defined by the researchers since the participants had no expertise in this area. In the remaining areas, they were seen as equal partners, and decisions about the game design were reached by open-ended discussions; i.e., the researchers did not try to push through preestablished decisions. Adjustments to the design were made on a smaller scale, i.e., regarding the content of scheduled sessions, e.g., the evaluation and redesign of the gestures. Since the applied design is aligned to the Game SCRUM, major adjustments are also possible, resulting in additional iterations or sprints.

As described in Section 2.2, words in the cloud map originated the user stories, which turned into game features. For each of those user stories, we analyzed how it reflected the values of the users in the game. Figure 23 summarizes how the insights for the game originated in the participatory design meetings, fed the game values, and transformed into the game features that we evaluated in our qualitative analysis. In the qualitative analysis, we were interested in how testers reacted to the game concerning the values it embodied. In Figure 23, we also linked the testimonials and our observations about the environment’s atmosphere during the user tests.

Figure 23: Flow of insights from the participatory design meetings to the game features and its implication for the qualitative analysis of the game.

As an example of the flow in Figure 23, words that refer to physical properties, such as speed and height, originated the empowerment value of the game. It happened because, in the context in which those words were mentioned during the meetings, the codesigners expressed a need for empowerment.


That empowerment was translated into the excitement of testers while playing the game; the testimonial to which we linked the empowerment value refers to that excitement. In the design sessions, we concluded that representing the main character as a wheelchair user could also bring an empowerment value to the wheelchair in the game. Therefore, the empowerment value linked the words speed and height. Furthermore, we also realized that speed could be an essential feature of the game. Together with the nonresearcher codesigners, we decided that controlling the speed would be a central mechanic of the game scenarios. In Figure 23, the word speed is also linked to those stages, as it influenced their design.

We believe that the Creative Process elements aided our work to support the elicitation and discussion of the users’ values. We employed the Creative Process in segments of the PD sessions as follows: (1) free, recorded conversation, (2) representation of the conversation topics into cloud maps, (3) connection of the elements in the cloud maps into user stories and prioritization of those stories, (4) presenting the outcomes to the participants after a period of no workload, and (5) revising the outputs of the stories. Based on such segments, the activities were structured to promote creative and effective participation. As a consequence, we developed a game that had a satisfying and nearly homogeneous positive player experience among wheelchair users with different characteristics. Therefore, we believe that the tools and techniques used successfully led us to represent those people’s values in the game.

The Game SCRUM concepts were important to manage the design process and prioritize the development. The user stories were a simple technique that allowed the participants to express user experience related aspects in a way that could be understood and used by the developers. Moreover, structuring the process on Game SCRUM avoided workload being applied to elements with low value to the game. As a result, we could develop a simple but effective game. We display a graphical scheme of our design process in Figure 24.


Figure 24: Flow of the process stages during game development: the framework stages overlap with Game SCRUM phases and Creative Process events.
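The word-to-value-to-feature traceability that Figure 23 depicts can be kept in a small lookup structure. A sketch in Python, with entries drawn from the speed/height example discussed in the text (the field names and feature labels are our own illustration, not the study's artifacts):

```python
# Traceability from cloud-map words to the design value they expressed
# and the game feature that implemented it. Entries follow the
# speed/height -> empowerment example from the text; labels are illustrative.
traceability = {
    "speed":  {"value": "empowerment", "feature": "player-controlled speed"},
    "height": {"value": "empowerment", "feature": "wheelchair user as main character"},
}

def features_for_value(trace, value):
    """All game features linked to a given design value, sorted for stable output."""
    return sorted(e["feature"] for e in trace.values() if e["value"] == value)
```

Keeping such a map makes the qualitative analysis auditable: each feature evaluated in the user tests can be traced back to the word, and hence the session, that motivated it.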

The closest study to ours is the one from Gerling et al. [9], which also addressed the development of gesture-based games for wheelchair users with participatory design. The main focus of the work of Gerling et al. [9] was to explore, with a qualitative analysis, the value of gesture-based games for young powered wheelchair users. Different from Gerling et al. [9], we structured our design process on Game SCRUM and the Creative Process. During the user tests, we noticed that some of the gestures were not compatible with wheelchair users with more limited body abilities. We also noticed that both Gerling et al. [9] and our work represented the disabilities of participants in a way that empowered them in the game. However, our representation was explicit, displaying a wheelchair user as the main character, whereas Gerling et al. [9] empowered their participants with implicit representations. In both cases, testers reported a positive experience with the game. We summarize the main differences concerning the PD process and experimentation between the present work and the work of Gerling et al. [9] in Table 10.


Table 10: Main differences between the present work and its most similar study.

We detected three main values that emerged during the process and that were “designed into” the preproduction prototype: (1) empowerment of wheelchair users by representing the wheelchair in the game as an apparatus to enroll in interesting activities instead of a tool to supply a particular need, (2) noncompetition in the game to amplify the friendly atmosphere of the game, and (3) motivation in terms of encouraging players to enroll in similar activities in real life. In particular, the motivational value emerged during the first iteration. We held a review meeting among participants and performed the necessary changes. In the second iteration, participants noted that the activities depicted in the game were more similar to the ones performed by them, and they felt the potential of the game to motivate players to enroll in those activities in real life. That situation reinforced the importance of developing the game through iterations in conformance with Game SCRUM, as the participants’ feedback allowed us to enhance the prototype in the second iteration.

LIMITATIONS

We applied the method within a determined social niche: manual wheelchair users who practice sports as a hobby. Also, we tested the outcome of the participatory design process, a digital game with NUI, in a testing group formed by wheelchair users and their relatives. It is worth mentioning that we evaluated the outcome, which indicated a positive player experience by the testing group. This leads us to observe the effectiveness of the participatory design. However, we are aware of the importance of evaluating other audiences. We acknowledge that the circumstances in which we developed the game may have influenced the results. Due to the schedule


restrictions of participants, all the PD sessions took place after their training. As a consequence of the physical exercises, the mood of those codesigners was frequently high. Even though the relaxed atmosphere supported us in obtaining valuable contributions, future work may conduct the next sessions in varying situations to get a more diverse input from the participants. Similarly, due to resource constraints, we (the researchers and developers of the game) were responsible for applying the playing tests during the experiments in this study. Even though we focused on being as impartial as possible, potential sympathy created between the participants and the researchers may have influenced the results. In future work, we suggest conducting additional tests and evaluations with evaluators and test subjects unrelated to the design and development team.

We understand that the participatory design process is expansible and flexible. Aspects that can be explored include the duration of each session, the time-spacing between sessions, and the frequency and degree of participation of researchers or design experts in opining about the design of the game during the sessions. As a next step, researchers can adapt the process to Agile Software Development in general. We also believe that the process has the potential to assist in the development of games for a mixed social audience.

Finally, even though we applied the process in an academic setting, we recognize that it is liable to be applied in any commercial game development. Therefore, we believe that the process can be adapted to the constraints of commercial game development projects [35, 42, 43]. As a suggestion for such an adaptation, instead of several short participatory meetings, researchers can conduct two more extended meetings applying the same techniques. In the first meeting, participants discuss the problem, build the cloud map, and generate the user stories. The second meeting happens after a period of days, giving participants time to incubate the idea. In the second meeting, participants prioritize the user stories, build a concept for the prototype, and submit it to critiques. We reinforce that the mentioned time periods can be optimized by experimentation. In particular, the time before the illumination event and the periods of stimulating participants with content related to the design objective can be critical factors to evaluate.


CONCLUSIONS

In this work, we described a design process for the development of a gesture-based game, where manual wheelchair users (and their stakeholders) acted as participants in the sessions. Values from the participants emerged during the design sessions and were explored for the development of the game. Sessions were aligned to Game SCRUM, employing concepts such as user stories, prioritization, and sprints. Creative Process elements were also employed to stimulate the emergence of values.

We developed a preproduction prototype that reflects the values of the wheelchair users with the sole purpose of entertainment. With the developed game as a foundation, we will be able, in future works, to carry out studies with physical and educational challenges, as a continuation of the research described in the current study. We believe that the game can be further improved based on the results and testimonials retrieved during the experimentation. Even though we understood that the preproduction prototype of the game reached its objective in representing values of the users, it can be adapted to encompass a broader public. Possibly, the next group of people to be involved in the game content is wheelchair users with more limited functional abilities than the users who participated in the development of this study.

Data Availability

Our study is the description of the development of a gesture-based game based on participatory design that reflects values of wheelchair users. The data were collected through experiments made by testers.

Conflicts of Interest

The financial support that we mention did not lead to any conflicts of interest regarding the publication of this manuscript.

Acknowledgments

We especially thank the CR Tennis Academy, the participants of the design process and the evaluation, and the PARAJECRIPE Team for providing us with artistic assets for the game. We also thank Heiko Hornung for our fruitful discussions about this work. This work was financially supported by the Federal University of ABC (UFABC) and a Coordination for the Improvement of Higher Education Personnel (CAPES, Brazil) Masters Scholarship. The


authors also thank São Paulo Research Foundation (FAPESP, Brazil), proc. No. 2014/11067-1, for the equipment support.

Supplementary Materials

To better illustrate the design of the Wheelchair Jecripe game, we provide supplementary material. The supplementary material consists of a video that illustrates the participatory design sessions with the wheelchair users and the researchers described in this study. The wheelchair users in the video have given consent for the video to be published. (Supplementary Materials)


REFERENCES

1. K. Hicks and K. Gerling, “Exploring casual exergames with kids using wheelchairs,” in Proceedings of the 2nd ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play, CHI PLAY 2015, pp. 541–546, New York, NY, USA, October 2015.
2. G. Fiorilli, E. Iuliano, G. Aquino et al., “Mental health and social participation skills of wheelchair basketball players: A controlled study,” Research in Developmental Disabilities, vol. 34, no. 11, pp. 3679–3685, 2013.
3. P. Paulsen, R. French, and C. Sherrill, “Comparison of mood states of college able-bodied and wheelchair basketball players,” Perceptual and Motor Skills, vol. 73, no. 2, pp. 396–398, 1991.
4. K. M. Gerling, M. R. Kalyn, and R. L. Mandryk, “KINECT wheels,” in Proceedings of the CHI ’13 Extended Abstracts on Human Factors in Computing Systems, pp. 3055–3058, 2013.
5. D. Wigdor and D. Wixon, Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, Elsevier, 2011.
6. A. G. Szykman, J. P. Gois, and A. L. Brandão, “A Perspective of Games for People with Physical Disabilities,” in Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, pp. 274–283, Parkville, VIC, Australia, December 2015.
7. K. M. Gerling, C. Linehan, B. Kirman, M. R. Kalyn, A. B. Evans, and K. C. Hicks, “Creating wheelchair-controlled video games: Challenges and opportunities when involving young people with mobility impairments and game design experts,” International Journal of Human-Computer Studies, vol. 94, pp. 64–73, 2015.
8. Y. Hutzler, A. Chacham-Guber, and S. Reiter, “Psychosocial effects of reverse-integrated basketball activity compared to separate and no physical activity in young people with physical disability,” Research in Developmental Disabilities, vol. 34, no. 1, pp. 579–587, 2013.
9. K. Gerling, K. Hicks, M. Kalyn, A. Evans, and C. Linehan, “Designing movement-based play with young people using powered wheelchairs,” in Proceedings of the 34th Annual Conference on Human Factors in Computing Systems, CHI 2016, pp. 4447–4458, USA, May 2016.
10. K. M. Gerling, R. L. Mandryk, M. Miller, M. R. Kalyn, M. Birk, and J. D. Smeddinck, “Designing wheelchair-based movement games,” ACM Transactions on Accessible Computing (TACCESS), vol. 6, no. 2, 2015.
11. B. Friedman, P. Kahn, and A. Borning, “Value Sensitive Design: Theory and Methods,” University of Washington technical report 02–12, 2002.
12. O. S. Iversen, K. Halskov, and T. W. Leong, “Rekindling values in participatory design,” in Proceedings of the 11th Biennial Participatory Design Conference, p. 91, Sydney, Australia, November 2010.
13. G. Cockton, “Value-centred HCI,” in Proceedings of the Third Nordic Conference, pp. 149–160, Tampere, Finland, October 2004.
14. C. A. Le Dantec, E. S. Poole, and S. P. Wyche, “Values as lived experience,” in Proceedings of the SIGCHI Conference, p. 1141, Boston, MA, USA, April 2009.
15. A. Borning and M. Muller, “Next steps for value sensitive design,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, ACM, pp. 1125–1134, New York, NY, USA, May 2012.
16. B. Friedman, “Value-sensitive design,” Interactions, vol. 3, no. 6, pp. 16–23.
17. B. Friedman and P. H. Kahn Jr., “The Human-Computer Interaction Handbook,” in Ch. Human Values, Ethics, and Design, L. Erlbaum, Ed., pp. 1177–1201, L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 2003, http://dl.acm.org/citation.cfm.
18. G. Cockton, “A development framework for value-centred design,” in Proceedings of the CHI ’05 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’05, ACM, pp. 1292–1295, New York, NY, USA, April 2005.
19. J. Halloran, E. Hornecker, M. Stringer, E. Harris, and G. Fitzpatrick, “The value of values: Resourcing co-design of ubiquitous computing,” CoDesign, vol. 5, no. 4, pp. 245–273, 2009.
20. G. Cockton, “Designing worth—connecting preferred means to desired ends,” Interactions, vol. 15, no. 4, pp. 54–57, 2008.
21. O. S. Iversen and T. W. Leong, “Values-led participatory design,” in Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design, NordiCHI ’12, ACM, p. 468, New York, NY, USA, October 2012.
22. M. Sicart, The Ethics of Computer Games, The MIT Press, 2009.
23. M. Flanagan, D. C. Howe, and H. Nissenbaum, “Values at play: Design tradeoffs in socially-oriented game design,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’05, ACM, p. 751, New York, NY, USA, April 2005.
24. A. Kultima and A. Sandovar, “Game design values,” in Proceedings of the 20th International Academic Mindtrek Conference, AcademicMindtrek ’16, ACM, pp. 350–357, New York, NY, USA, October 2016.
25. M. Flanagan and H. Nissenbaum, Values at Play in Digital Games, The MIT Press, 2014.
26. S. Cuzzort and T. Starner, “AstroWheelie: A wheelchair based exercise game,” in Proceedings of the 12th IEEE International Symposium on Wearable Computers, ISWC 2008, pp. 113–114, USA, October 2008.
27. K. Seaborn, J. Edey, G. Dolinar et al., “Accessible play in everyday spaces: Mixed reality gaming for adult powered chair users,” ACM Transactions on Computer-Human Interaction (TOCHI), vol. 23, no. 2, 2016.
28. K. M. Gerling, I. J. Livingston, L. E. Nacke, and R. L. Mandryk, “Full-body motion-based game interaction for older adults,” in Proceedings of the 30th ACM Conference on Human Factors in Computing Systems, CHI 2012, pp. 1873–1882, USA, May 2012.
29. J. Schell, The Art of Game Design: A Book of Lenses, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2008.
30. A. Pommeranz, U. Ulgen, and C. M. Jonker, “Exploration of facilitation, materials and group composition in participatory design sessions,” in Proceedings of the 30th European Conference on Cognitive Ergonomics (ECCE ’12), pp. 124–130, Edinburgh, UK, August 2012.
31. N. Hendriks, K. Slegers, and P. Duysburgh, “Codesign with people living with cognitive or sensory impairments: a case for method stories and uniqueness,” CoDesign, vol. 11, no. 1, pp. 70–82, 2015.
32. S. M. Smith, D. R. Gerkens, J. J. Shah, and N. Vargas-Hernandez, “Empirical studies of creative cognition in idea generation,” Creativity and Innovation in Organizational Teams, pp. 3–20, 2005.
33. S. M. Smith and T. B. Ward, “Cognition and the Creation of Ideas,” The Oxford Handbook of Thinking and Reasoning, pp. 456–474, 2012.
34. N. J. van Eck and L. Waltman, “Software survey: VOSviewer, a computer program for bibliometric mapping,” Scientometrics, vol. 84, no. 2, pp. 523–538, 2010.
35. C. Keith, Agile Game Development with Scrum, Pearson Education, 2010.
36. T. J. Howard, S. J. Culley, and E. Dekoninck, “Describing the creative design process by the integration of engineering design and cognitive psychology literature,” Design Studies, vol. 29, no. 2, pp. 160–180, 2008.
37. R. Diluzio and C. B. Congdon, “Infusing the creative-thinking process into undergraduate STEM education: An overview,” in Proceedings of the 5th IEEE Integrated STEM Education Conference, ISEC 2015, pp. 52–57, USA.
38. E. Brandt, “Designing exploratory design games: A framework for participation in participatory design?” in Proceedings of the 9th Conference on Participatory Design, PDC 2006, pp. 57–66, Italy, August 2006.
39. A. L. Brandão, L. A. Fernandes, D. Trevisan, E. Clua, and D. Strickery, “Jecripe: how a serious game project encouraged studies in different computer science areas,” in Proceedings of the 2014 IEEE 3rd International Conference on Serious Games and Applications for Health (SeGAH), pp. 1–8, Rio de Janeiro, Brazil, May 2014.
40. E. D. van der Spek, Experiments in Serious Game Design: A Cognitive Approach [Ph.D. thesis], University of Utrecht, 2011.
41. J. Lessiter, J. Freeman, E. Keogh, and J. Davidoff, “A cross-media presence questionnaire: The ITC-Sense of Presence Inventory,” Presence: Teleoperators and Virtual Environments, vol. 10, no. 3, pp. 282–297, 2001.
42. D. Salah, R. F. Paige, and P. Cairns, “A systematic literature review for agile development processes and user centred design integration,” in Proceedings of the 18th International Conference, pp. 1–10, London, England, United Kingdom, May 2014.
43. M. Brhel, H. Meth, A. Maedche, and K. Werder, “Exploring principles of user-centered agile software development: A literature review,” Information and Software Technology, vol. 61, pp. 163–181, 2015.

CHAPTER 17

Using the Revised Bloom Taxonomy to Analyze Psychotherapeutic Games

Priscilla Haring,1 Harald Warmelink,2 Marilla Valente,3 and Christian Roth4

1 Media Psychology, Amsterdam, Netherlands
2 NHTV Breda University of Applied Sciences, Netherlands
3 Dutch Game Garden, Netherlands
4 HKU University of the Arts, Utrecht, Netherlands

Citation: Priscilla Haring, Harald Warmelink, Marilla Valente, and Christian Roth, “Using the Revised Bloom Taxonomy to Analyze Psychotherapeutic Games”, International Journal of Computer Games Technology, vol. 2018, Article ID 8784750, 9 pages, 2018. https://doi.org/10.1155/2018/8784750. Copyright: © 2018 by the Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Link: https://www.hindawi.com/journals/ijcgt/2018/8784750/

ABSTRACT

Most of the scientific literature on computer games aimed at offering or aiding in psychotherapy provides little information on the relationship between the game’s design and the player’s cognitive processes. This article


investigates the use of Bloom’s taxonomy in describing a psychotherapeutic game in terms of knowledge level and cognitive processing. It introduces the Revised Bloom Taxonomy and applies it to five psychotherapeutic games (Personal Investigator, Treasure Hunt, Ricky and the Spider, Moodbot, and SuperBetter) in a two-round procedure. In the first round, consensus was reached on the Player Actions with Learning Objectives (PALOs) in each game. The second round sought to determine what level of knowledge and cognitive processing can be attributed to the PALOs by placing them in the taxonomy. Our low intercoder reliability in the second round indicates that Bloom’s Revised Taxonomy is not suitable to compare and contrast content between games.

INTRODUCTION

Over the past decade we have observed the emergence of a modest number of psychotherapeutic games. With the term psychotherapeutic games, we refer to computer games aimed at offering or aiding in therapy for any psychological disorders or conditions (most often the precursors of depression or anxiety). The use of psychotherapeutic board games as well as existing entertainment computer games during therapy is already widely regarded as good practice in many situations [1]. Innovative game-based therapy has shown a higher chance of engaging younger target audiences than traditional conversational and “paper-based” methods [2]. It is therefore surprising to find only limited information concerning the design and content of psychotherapeutic videogames in the scientific literature [2, 3]. Horne-Moyer et al. focused their review on high-order design characteristics (games for health-related behaviors or individual therapy, versus games for entertainment used in individual or group therapy) and their general effectiveness [4]. Fleming et al. offered a more comprehensive review but still kept their treatment of the games’ designs quite basic, only briefly commenting on each game’s “rule, goals and game objectives”; “outcomes and feedback to the user”; “conflict, competition, challenge or opposition”; “interaction”; and “representation or story” [5].

A lack of insight into the design intricacies, gameplay, and the associated cognitive processes is problematic. It makes it hard to compare content across several games and thus to discuss the state of the art of this field amongst designers, researchers, and practitioners beyond any specific game. Finally, having insight into which cognitive processes are addressed within gameplay would make it easier to spot missed opportunities for effective psychotherapeutic game design. We set out to test the application of an
analytical tool for labelling the cognitive elements of a psychotherapeutic game. We first present the tool itself, the Revised Bloom Taxonomy, and describe our approach in using this taxonomy for the analysis of five psychotherapeutic games: Personal Investigator, Treasure Hunt, Ricky and the Spider, Moodbot, and SuperBetter. We evaluated the content of these games independently, which allowed us to perform an intercoder reliability analysis and rigorously evaluate the reliability of applying the taxonomy. We conclude with a critique of Bloom's Revised Taxonomy, answers to our three research questions, and limitations of our own approach. This paper builds on an earlier exploration of using Bloom's Revised Taxonomy as an analytical framework to identify whether psychotherapeutic games include metacognition [6]. From this earlier exploration we hypothesized that Bloom's Revised Taxonomy might be useful in different ways. This paper extends that work by testing the robustness of Bloom's Revised Taxonomy as an analytical framework. We wish to explore whether the framework is useful to designers, researchers, and psychologists using psychotherapeutic games. Thus our research questions are as follows:

• For designers: can Bloom's Revised Taxonomy be used as a checklist during the design of a psychotherapeutic game?
• For researchers: can Bloom's Revised Taxonomy allow researchers to make a more objective description of game content and allow for comparisons across psychotherapeutic games?
• For psychologists: can Bloom's Revised Taxonomy support psychologists in making a more informed choice concerning psychotherapeutic games that might be included in their therapy?

METHODOLOGY

Choosing Bloom's Revised Taxonomy. Bloom's Revised Taxonomy has been called the "most popular cognitive approach to Serious Game evaluation" [7]. Bloom's original taxonomy [8] stems from the field of education and consisted of categories for Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The original taxonomy was a popular tool for objectives-based evaluation, as it allowed for a high level of detail when stating learning objectives [9]. However, it was also criticized, resulting in various revisions
by different authors; see de Kock, Sleegers, and Voeten for a classification of learning environments containing reviews of the revisions [10]. The revision by Anderson et al. [11], as well as that by Pintrich [12], improves the original taxonomy by including the category of metacognition. They also distinguish between two dimensions: a Knowledge dimension and a Cognitive Process dimension. We feel the inclusion of the metacognitive knowledge level reflects ongoing insight in the field of psychotherapy, where Cognitive Behavioral Therapy (CBT) is currently advancing into its "third wave." The first wave of CBT started in the 1950s and applied classical conditioning and operant learning. The second wave applied information processing and brought CBT to its current worldwide status. Now, a third wave of psychotherapies is developing: "...a heterogeneous group of treatments, including acceptance and commitment treatment, behavioral activation, cognitive behavioral analysis system of psychotherapy, dialectical behavioral therapy, metacognitive therapy, mindfulness based cognitive therapy and schema therapy" [13]. These three waves in CBT can be seen to move up along both dimensions of the taxonomy: different therapy forms in CBT's third wave are aimed at the metacognitive level and include all the cognitive processing steps up to and including Creation as part of their treatment. By applying Bloom's Revised Taxonomy to analyze the content of psychotherapeutic games, we are approaching these games as educational content. We see all therapeutic interaction as part of a learning process; often, knowledge is acquired, emotions are revised, and behavior is changed during psychotherapy.

Bloom's Revised Taxonomy

Bloom's Revised Taxonomy consists of two dimensions with several levels each. The levels within each dimension are hierarchical, meaning that every higher level presupposes the presence of the lower levels. On the Knowledge dimension, the taxonomy distinguishes between the following levels:

1. Factual Knowledge: the basic elements that students must know to be acquainted with a discipline or solve problems in it.
   (a) Knowledge of terminology
   (b) Knowledge of specific details and elements
2. Conceptual Knowledge: the interrelationships between the basic elements within a larger structure that enable them to function together.
   (a) Knowledge of classifications and categories
   (b) Knowledge of principles and generalizations
   (c) Knowledge of theories, models, and structures
3. Procedural Knowledge: how to do something; methods of inquiry and criteria for using skills, algorithms, techniques, and methods.
   (a) Knowledge of subject-specific skills and algorithms
   (b) Knowledge of subject-specific techniques and methods
   (c) Knowledge of criteria for determining when to use appropriate procedures
4. Metacognitive Knowledge: knowledge of cognition in general as well as awareness and knowledge of one's own cognition.
   (a) Strategic knowledge
   (b) Knowledge about cognitive tasks, including appropriate contextual and conditional knowledge
   (c) Self-knowledge

Table 1: Taxonomy Table

On the Cognitive Process dimension, the taxonomy distinguishes between the following levels:

(1) Remember: retrieving (recognizing, recalling) relevant knowledge from long-term memory.
(2) Understand: determining (interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining) the meaning of instructional messages, including oral, written, and graphic communication.
(3) Apply: carrying out (executing) or using (implementing) a procedure in a given situation.
(4) Analyze: breaking material into its constituent parts and detecting how the parts relate to one another and to an overall structure or purpose (differentiating, organizing, and attributing).
(5) Evaluate: making judgments (checking, critiquing) based on criteria and standards.
(6) Create: putting elements together (generating, planning, and producing) to form a novel, coherent whole or make an original product.
Seen together, these two dimensions can be visualized in the Taxonomy Table (Table 1) [14].
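To make the two-dimensional structure concrete, the sketch below (our own illustration; the names and numbering are not taken from the paper) enumerates the 4 × 6 Taxonomy Table and assigns each (knowledge, process) cell a unique number, the kind of flat encoding used later when the tables are merged for the reliability analysis.

```python
# The two dimensions of Bloom's Revised Taxonomy, lowest level first.
KNOWLEDGE = ["Factual", "Conceptual", "Procedural", "Metacognitive"]
PROCESS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

def cell_id(knowledge: str, process: str) -> int:
    """Map a (knowledge, process) pair to a unique cell number 1..24."""
    return KNOWLEDGE.index(knowledge) * len(PROCESS) + PROCESS.index(process) + 1

# The lowest and highest cells of the hierarchy:
cell_id("Factual", "Remember")      # -> 1
cell_id("Metacognitive", "Create")  # -> 24
```

Because both lists are ordered from lower to higher levels, a larger cell number loosely tracks "further up" the taxonomy, although the two dimensions remain conceptually independent.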

Five Psychotherapeutic Games

We applied the Revised Bloom Taxonomy to five psychotherapeutic games: Personal Investigator, Treasure Hunt, Ricky and the Spider, Moodbot, and SuperBetter. These five games were selected because they have been published in scientific journals and are explained in sufficient detail for us to perform an analysis [15–20]. We wanted to perform an analysis that could exist outside of the game, neither playing the game ourselves nor observing the gameplay of the intended players. This approach allows us to compare games without starting from an individual perspective (which would create more bias) and also keeps the research effort practical. Our approach is intended to be performed based on the (design) description of the game content, preferably including the goals of the game designers and/or the therapeutic goals the game content is based on.

First Round of Analysis

In order to answer our three research questions we aim to provide a robust measurement that might support designers and can be used to compare and contrast game content by both researchers and psychologists. In this paper we investigate the application of Bloom's Revised Taxonomy by going through the process of applying it and looking for intercoder reliability. Our process started with the following steps:

• Use coders with a background in psychology and (serious) game design.
• Select and read literature concerning Bloom's Revised Taxonomy.
• Select and read literature concerning the psychotherapeutic games.
• Provide instruction on applying the taxonomy.
• Apply the taxonomy individually.
• Present and discuss results at a face-to-face intercoder meeting.

All four authors of this paper have backgrounds in psychology and/or social science, with experience in game research, game design, and education. We started by independently reading the selected literature
describing the taxonomy [6, 11, 14] and the selected literature describing the five psychotherapeutic games [15–20]. The process used to classify each game consisted of three steps:

• Describe the possible actions by the player needed to proceed in the game.
• Place the actions in the taxonomy and provide a short argument for each placement; note any reservations, questions, and comments that arise while doing this.
• Create one Taxonomy Table per game.

All coders independently went through these three steps. When we presented the results to each other in a face-to-face meeting, it became evident that our process had yielded wildly different results, both in the player actions identified and in their placement in the Taxonomy Table. Our subsequent discussion focused on elaborating and clarifying the diverse interpretations of the categories in Bloom's Revised Taxonomy. Our biggest discrepancy concerned the knowledge dimension, especially the difference in classifying an action as conceptual, procedural, or metacognitive knowledge. For the categories in the cognitive process dimension we found it easier to align our interpretations by using concrete examples. We reaffirmed that any discussion of the cognitive processing or knowledge levels can only address the lower bound, i.e., the minimal knowledge and cognitive processing required to perform the action. Every assignment in a game could be approached from a higher knowledge level, and processed with a deeper understanding and a more overt strategy, than where we allocated it in the Taxonomy Table. There is no way of knowing this upper bound without measuring every individual during gameplay. We therefore judged the described game content for the knowledge levels and cognitive processes that it minimally requires and that can therefore be predicted. In our discussion, this perspective was an important anchor for making any judgment. There was also discussion on the interpretation of "self-knowledge," as it might be argued that the subject of therapy is the "self" and therefore all therapeutic interactions deal with self-knowledge.
To resolve this, we turned to the description that "metacognitive knowledge is knowledge of [one's own] cognition and about oneself in relation to various subject matters..." [11]. Such knowledge of one's own cognition goes beyond the self-observation that is core to the CBT approach, where the patient is made an observer of
his/her own (internal) behaviors. This makes the minimal requirement for most CBT self-observation, and not necessarily self-knowledge as meant in the taxonomy. Questions about one's own experiences and thoughts might thus be placed in the Taxonomy Table at the Factual Knowledge level, if the questions do not go beyond self-observation.

Second Round of Analysis

We did not use the results in the Taxonomy Tables of this first round of assessment for further analysis. To see whether Bloom's Revised Taxonomy can be used as a basis for comparison, intercoder reliability must be established. If we can establish that the same data will be coded in the same way by different observers, we can be confident that objective comparisons can be made. In order to perform any intercoder reliability analysis, we had to accumulate an agreed-upon list with the same number of player actions per game. We decided to leave actions that are only necessary as part of game literacy out of the assessment, such as retrieving a key in a game to open a game object. Overall, the activity of playing a game can be seen as belonging to Applying Process Knowledge in the Taxonomy Table: "students who successfully play or operate a game are showing understanding of process and task and application of skills" [21]. In search of a more homogeneous way to describe game content and player actions we returned to the literature, where we found the stipulation that the taxonomy is meant as a structure for learning objectives. This provided a structured way of forming a description. Stating a learning objective requires a verb and an object, where the verb refers to the intended cognitive process and the object refers to the knowledge that must be acquired or constructed [11]. We decided to use this verb/object structure to formulate Player Actions with a Learning Objective (PALO) and structured our consolidated list accordingly. In our second round of assessment our process consisted of three steps:

(1) Create a consolidated list of PALOs per game.
(2) Each coder individually places the PALOs in a Taxonomy Table.
(3) Establish intercoder reliability by means of statistical analysis.
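The verb/object structure of a PALO can be sketched in code. The example below is our own illustration, not the authors' tooling: the verb-to-process lookup is a small, non-authoritative abbreviation of the verb lists associated with Anderson et al.'s process categories.

```python
from dataclasses import dataclass

# Illustrative mapping from an objective's verb to the intended cognitive
# process (hypothetical subset; Anderson et al. list many more verbs).
VERB_TO_PROCESS = {
    "recall": "Remember", "describe": "Understand", "practice": "Apply",
    "distinguish": "Analyze", "rate": "Evaluate", "create": "Create",
}

@dataclass
class PALO:
    """A Player Action with a Learning Objective: a verb plus an object."""
    verb: str   # refers to the intended cognitive process
    obj: str    # refers to the knowledge to be acquired or constructed

def intended_process(palo: PALO) -> str:
    """Read the intended cognitive process off the PALO's verb."""
    return VERB_TO_PROCESS[palo.verb]

# "The player is asked to distinguish between helpful and unhelpful
# thoughts" then becomes:
palo = PALO(verb="distinguish", obj="helpful vs. unhelpful thoughts")
intended_process(palo)  # -> "Analyze"
```

The object half of the PALO would be judged against the Knowledge dimension in the same spirit, but, as the results below show, that judgment is exactly where coders diverge.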
Leaving pure game actions out of scope, merging similar descriptions through open discussion, and rephrasing our descriptions of player actions led to an agreed-upon list of PALOs for every game. These PALOs were independently encoded by placing them in the Taxonomy Table, and the results were shared in order to calculate intercoder reliability. We now offer
a short description of the agreed-upon PALOs for each of the five games.

Personal Investigator

Personal Investigator is a game based on Solution Focused Therapy (SFT) and aimed at adolescent psychological patients. Coyle et al. [15] present SFT as "a structured rather than a freeform therapeutic model," similar to CBT. The game is meant to help adolescent patients go through five different conversational steps with their therapists. These five steps are translated into five main areas in the 3D game world, where the player interacts with non-playing characters. Initial trials proved promising, but further trials would be required to test the game's validity [15]. The game is a single-player 3D computer game with role-playing characteristics. In the game the player becomes a personal investigator who "hunts for solutions to personal problems," keeping a notebook along the way to keep a record of the hunt and the solutions found. It is played over roughly three therapy sessions, taking just over half of the one-hour session each time. During the sessions, the player plays the game on the computer, while the therapist observes and offers explanations if requested. After discussion a consensus of 13 PALOs was reached:

(1) The player is asked to give a detective name to his/her avatar.
(2) The player is asked to write down, in the detective notebook, a problem he/she would like to work on.
(3) The player is asked to turn the problem into a goal he/she would like to achieve; this becomes the goal of the game.
(4) The player is encouraged to think about situations in which the problem (the opposite of the goal) is absent or less prevalent.
(5) The player is encouraged to understand (but we do not know how) what he/she is doing differently when the problem is absent or less prevalent.
(6) The player is asked to set goals for repeating the behaviors that result from action 5 more often.
(7) The player is asked to describe how he/she has coped with difficult situations.
(8) The player is asked to write about positive, active ways of coping that draw on his/her strengths and interests.
(9) The player is asked to identify people who can help achieve the goal (in real life).
(10) The player is asked to think about personal strengths and to write down in the detective notebook things he/she is good at and past successes.
(11) The player is asked to draw the answer to the Miracle Question in the detective notebook.
(12) The player is asked to write down what he/she and others would think, feel, and do differently after the Miracle.
(13) The player is asked to rate on a scale of 1–10 how close he/she is to achieving this new future.

Treasure Hunt

Treasure Hunt is a game meant to support CBT for children with both internalizing (e.g., depression, anxiety) and externalizing (e.g., oppositional defiant disorder, conduct disorder) psychological disorders [2]. It specifically supports therapy "by offering electronic homework assignments and rehearsing basic psychoeducational parts of treatment" [16]. Players experience CBT support by going through six levels during gameplay, each corresponding to a certain step of the therapy. Again, initial tests proved promising, but further rigorous trials would be required to test the game's validity [16]. The game is a single-player, 2.5D adventure computer game set on an old ship inhabited by Captain Jones, Felix the ship's cat, and Polly the ship's parrot. The captain has found an old treasure map that he needs to decipher. The player helps by completing tasks to obtain sea stars, which will eventually allow him/her and the captain to read the map. Finally, after receiving a reward, the player is asked to recount what he/she has learned during gameplay. The game is played one level per therapy session, lasting roughly 20 minutes. The literature provides a very limited description of play, but it does provide a clear translation between the cognitive behavioural concepts and the game metaphors chosen. The paper also stipulates the importance of the guidance of the therapist and that the game is not meant as self-help but must be embedded in therapy. After discussion a consensus of six PALOs was reached:

(1) The player receives "psycho-education" within the game. The basic psychological
foundations of CBT are laid out: one's personality is made up of thoughts, feelings, and behavior; thoughts influence feelings and behavior; and four basic feelings can be distinguished, i.e., anger, fear, happiness, and sadness.
(2) The player is asked to distinguish between helpful and unhelpful thoughts in general.
(3) The player is asked to distinguish between helpful and unhelpful thoughts in situations from his/her own life.
(4) The player is asked to replace unhelpful thoughts with helpful thoughts.
(5) The player is asked to apply the distinction between helpful and unhelpful thoughts outside of gameplay.
(6) The player is asked to recount what he/she has learned in interaction with Captain Jones and the therapist.

Ricky and the Spider

Ricky and the Spider is a game based on CBT for treating obsessive compulsive disorder (OCD) amongst children. Brezinka presents the game as "not a self-help game" but one that "should be played under the guidance of a therapist" [17]. The game's design foundation is a "child-friendly metaphor" for understanding both OCD and the CBT approach, thereby combining "psycho-education, externalizing techniques and exposure with response prevention." Players experience the therapy by going through eight levels during gameplay. Data gathered from therapists and patients who purchased the game revealed promising results, but further rigorous trials would be required to test the game's validity more convincingly [17]. The game is a single-player, 3D adventure computer game. In the game the player is confronted by Ricky the Grasshopper and Lisa the Ladybug, who (without saying it explicitly) suffer from OCD and need to confront the Spider, who has been making demands that they cannot meet. They ask Dr. Owl for advice, who in turn requires the player's help. There are eight levels of gameplay in which Dr. Owl, Ricky, and/or Lisa explain certain theories and techniques to help the player progress. The gameplay centers on exposure tasks that are called "courage tasks" in the game. With the therapist observing, the player plays one level at the beginning of a therapy session, which takes approximately 15 minutes, and recounts the content of the level, after which the therapy session continues from there. After discussion a consensus of seven PALOs was reached:

(1) The player receives "psycho-education" within the game. The well-established metaphor for OCD, the thought-stream, is discussed, as well as the four-leaf-clover with strategies for behaviour.
(2) The player is asked to give
the Spider (antagonist) a silly nickname.
(3) The player is asked to make his/her own compulsion map with courage tasks to complete.
(4) The player is asked to practice the easiest courage task multiple times a day (outside of gameplay).
(5) The player is asked to support Lisa in performing additional courage tasks.
(6) The player is asked to motivate Ricky to do additional courage tasks with the four-leaf-clover strategies.
(7) The player is asked to recount the gameplay in interaction with the therapist.

Moodbot

Moodbot is a game for adult psychological patients recovering from conditions such as psychosis, and it attempts to prevent them from relapsing [18]. As such the game is not tied to a single form of psychotherapy but is a more general psychotherapeutic aid. As a relapse prevention aid, the game is based on two assumptions. The first is that "communication between a patient and his/her healthcare worker about the patients' mental state is important for the patient's path towards recovery." The second is that patients exhibit various, unique signs that indicate whether they are likely to relapse, which need to be recorded in so-called "alert schemes" so that they may be used to help prevent relapse. Moodbot is therefore primarily a way of identifying and communicating mental states and any indicative signs from a patient to his/her therapist between therapy sessions. The game is currently being trialled in professional psychotherapeutic practice; as yet, there is no further information available to ascertain the game's validity. The game is an online, multiplayer 2D computer game, although the interaction with other players is indirect (similar to well-known online social games such as FarmVille). In the game the player is on board a ship that the players together must steer towards certain islands [19]. The player can overview all the rooms of the ship and can perform small tasks that might earn him/her points (dust bunnies), which can be spent to get the ship moving and to steer it towards an island. The game is played daily for a short amount of time [19]. The player provides daily updates of his/her mental state as well as signs that could be indicative of a relapse, which the therapist can access in a backend interface at any time. After discussion a consensus of four PALOs was reached.

(1) Before gameplay the future player is asked to decide, together with the therapist and based on the alert scheme and goals, on the labels of the dashboard.
(2) The player is asked to express how they feel that day by adjusting their
moodbot, moodtube, and dashboard.
(3) The player has the opportunity to go into other players' rooms and observe their mood-state. It is possible to leave comments (tips and advice) in these rooms.
(4) The game can stipulate real-world challenges, provided and monitored by the therapist, which bring extra points into the gameplay.

SuperBetter

SuperBetter is a game (or gamified platform) that is available as a web-based tool and as an app for mobile devices. It appropriates game mechanics in order to provide a new narrative for accomplishing challenging health- and wellbeing-related goals [22]. SuperBetter is not specifically designed as a psychotherapeutic game. However, in a randomized controlled trial SuperBetter proved effective in decreasing depressive symptoms in comparison with a waitlist group [20]. In SuperBetter players give themselves a superhero secret identity based on their "favourite heroes." Players then select a goal to work toward, are awarded "resilience points" throughout the game (physical, mental, emotional, and social resilience), and level up. Gameplay ends when the goal is achieved and can be continued by setting new goals. Players can take steps towards achieving their goal by performing Quests: actions that share a common theme. SuperBetter has predetermined Quests that players can select; players can also design their own Quests or select Quests designed by other players. Players can also undertake mood-enhancing activities (Power-Ups), which are simple and instantly possible actions such as drinking a glass of water or hugging yourself. The platform provides Bad Guys to battle. These Bad Guys belong to certain Quests or can be copied from other players or designed by the player. Finally, players gather social support (invite Allies). Players can invite friends through the SuperBetter platform to help them; SuperBetter offers a mail contact form and a Facebook plug-in to do this. If a friend becomes an Ally, they have access to the player's Quests, Power-Ups, and Bad Guys and can suggest new ones.
After discussion a consensus of nine PALOs was reached:

(1) The player is asked to create a superhero identity to play the game with.
(2) The player is asked to state a goal (epic win).
(3) The player has the opportunity to select and perform Quests: series of actions that help achieve the goal.
(4) The player has the opportunity to create and perform a Quest: a series of actions that help achieve the goal.
(5) The player has the opportunity to select and perform a Power-Up: simple mood-enhancing activities.
(6) The player
has the opportunity to create and perform a Power-Up: simple mood-enhancing activities.
(7) The player has the opportunity to select and battle Bad Guys: behaviours that are counterproductive to achieving a Quest.
(8) The player has the opportunity to create and battle Bad Guys: behaviours that are counterproductive to achieving a Quest.
(9) The system asks players to invite Allies as in-game social support through social networks or e-mail.

PALOs and Intercoder Reliability

In this section, we describe the results of placing the PALOs in Bloom's Revised Taxonomy Table. The similarity in overall assessment is low. Only one PALO was scored exactly the same by all four coders (PALO 4 of Ricky and the Spider); on five PALOs three out of four coders agreed (PALOs 1 and 5 from Ricky and the Spider, PALO 1 from Treasure Hunt, PALO 5 from Personal Investigator, and PALO 8 from SuperBetter); twelve PALOs showed agreement between two coders; and the remaining twenty-one PALOs had no exact agreement between coders. Concerning the attribution of a Knowledge level, the PALOs of Treasure Hunt, Personal Investigator, and SuperBetter were most frequently scored as Conceptual Knowledge, while those of Ricky and the Spider and Moodbot were most frequently scored as Procedural Knowledge. On the attribution of a Cognitive Processing level, the PALOs of Treasure Hunt and Personal Investigator were most frequently scored as "Understand," Ricky and the Spider and Moodbot were scored as "Apply" most often, and SuperBetter had most PALOs scored as "Create." Looking at the Knowledge dimension alone, three PALOs had all four coders choosing the same knowledge level (PALOs 4–6 from Ricky and the Spider), fourteen had three coders agreeing, twenty-one had two coders agreeing, and only one had no agreement (PALO 9 from Personal Investigator). Within the Cognitive Processing dimension, three PALOs (PALO 4 from Ricky and the Spider, PALO 5 from Personal Investigator, and PALO 1 from SuperBetter) had complete agreement, twelve had three coders agreeing, twenty-one had two coders agreeing, and three (PALO 3 from Treasure Hunt, PALO 4 from Personal Investigator, and PALO 3 from Moodbot) had no agreement. Having arrived at an equal number of data points per coder, intercoder reliability could be calculated.
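Agreement counts of the kind reported above can be tallied mechanically. The sketch below is our own illustration (not the authors' analysis code): for a coders × units matrix it counts, per unit, the largest number of coders choosing the same code.

```python
from collections import Counter

def agreement_level(ratings):
    """Size of the largest group of coders giving the same code to one unit."""
    return max(Counter(ratings).values())

def agreement_profile(codings):
    """For a coders x units matrix, count units by their maximum agreement.

    With four coders, a result such as {4: 1, 3: 5, 2: 12, 1: 21} reads:
    one unit with full agreement, five with three coders agreeing, and so on.
    """
    n_units = len(codings[0])
    profile = Counter()
    for u in range(n_units):
        profile[agreement_level([coder[u] for coder in codings])] += 1
    return dict(profile)
```

Run over four coders' cell-id assignments for all 39 PALOs, this yields exactly the "four agree / three agree / two agree / none agree" breakdown quoted in the text.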
The Taxonomy Tables were merged into a single matrix, containing a unique identifying number for each cell in the table per game, making the data suitable for the
calculation of Krippendorff's alpha [23, 24]. This is a statistic that represents the reliability of a variable and its encoding process; it is suitable for any number of coders and for variables of all measurement levels (nominal in our case) [25]. Running a Krippendorff analysis for intercoder reliability of all the PALOs resulted in a very low alpha (see Table 2). This indicates that intercoder reliability is virtually nonexistent, as the four observers rarely agree on any exact placement in the Taxonomy Table. A Krippendorff alpha of exactly zero would mean that the encoding results are no different from attributing random values to the data. The alpha statistic can also be negative, in which case the encoding results are worse than random and indicate a structural error. In our results we are encoding somewhat above pure chance, but not at the level of the norm for a good reliability test (α between .60 and .80) [25]. Our results indicate a lot of room for subjectivity and interpretation in the placement of our 39 PALOs in the Taxonomy Table. Agreement on encoding within each of the two dimensions separately was a little higher than overall agreement but still provided no acceptable intercoder reliability (see Table 2).

Table 2: PALO encoding results.

All the analyses were rerun, each time excluding one of the coders. Although the Krippendorff alpha alters slightly when any one coder is excluded, there is no indication of a structural error, and the reliability remains far below acceptable. This supports the notion that there is actual disagreement between the encoders and that it is distributed equally. The overall results from the analysis indicate that there is no error in the methodology and that the results are not pure chance, but that encoding by this taxonomy is too subjective to be considered reliable.
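The reliability statistic itself is straightforward to reproduce. The sketch below is our own minimal implementation of Krippendorff's alpha for nominal data, not the authors' code; it takes a coders × units matrix (for instance, one Taxonomy Table cell id per PALO per coder), and the same function can be rerun on subsets of coders for the leave-one-out check.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(codings):
    """Krippendorff's alpha for nominal data.

    `codings` is a list of coders, each a list of codes (one per unit);
    None marks a missing rating.
    """
    n_units = len(codings[0])
    coincidence = Counter()  # weighted ordered (c, k) value pairs across units
    for u in range(n_units):
        values = [coder[u] for coder in codings if coder[u] is not None]
        m = len(values)
        if m < 2:
            continue  # a unit needs at least two ratings to be pairable
        for a, b in permutations(values, 2):
            coincidence[(a, b)] += 1.0 / (m - 1)
    marginals = Counter()  # total weight per category
    for (a, _b), w in coincidence.items():
        marginals[a] += w
    n = sum(marginals.values())
    # Observed vs. expected disagreement (nominal: any mismatch counts as 1).
    d_o = sum(w for (a, b), w in coincidence.items() if a != b)
    d_e = sum(marginals[a] * marginals[b]
              for a in marginals for b in marginals if a != b) / (n - 1)
    return 1.0 - d_o / d_e
```

Perfect agreement yields α = 1.0, chance-level coding yields α near 0, and the .60–.80 norm cited in the text can be checked directly against the returned value.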

DISCUSSION

In this article we demonstrated the application of the Revised Bloom Taxonomy in the analysis of five psychotherapeutic games. In a two-round process, we managed to come to a consensus on the Player Actions with Learning Objectives (PALOs) that each game contains in its design. All four
researchers coded each PALO of each game independently, the results were compared, and an intercoder reliability statistic was subsequently calculated. The process of assessing game content through the lens of the Revised Bloom Taxonomy turned out to be open to interpretation and at first yielded such different individual assessments that no direct comparison could be made. However, a consolidated version of what PALOs should be assessed was achieved during discussion. We found that applying Bloom's Revised Taxonomy facilitates a structured discussion of player actions in relation to cognitive processes, knowledge levels, and design goals. We feel that such discussions would be useful in the design process of psychotherapeutic games. Describing possible player actions in terms of a PALO provided an interesting perspective on the translation of (therapeutic) goals into game content. The intercoder reliability statistic revealed that no consensus could be achieved on how to interpret the game content within the Revised Bloom Taxonomy. The rating is prone to subjectivity, making it challenging to assign PALOs to appropriate cells in the Taxonomy Table. The work of Karpen and Welch [26] also suffered from low interrater reliability (r = 0.25) and accuracy (46%) when assigning exam questions to the appropriate Bloom cells. Accuracy improved (81.8%) when limiting the encoding to three levels instead of six: "a three-tier combination of the Bloom's levels that would optimally improve accuracy: Knowledge, Comprehension/Application, and Analysis/Synthesis/Evaluation."

CONCLUSION

For Designers (RQ1): Can Bloom's Revised Taxonomy Be Used as a Checklist during the Design of a Psychotherapeutic Game?

Our discussion to formulate PALOs provided a structured way of describing how game design connects player actions to certain objectives and might provide support during the design process of a psychotherapeutic game. Attempting to place these PALOs in the taxonomy establishes a discussion of the game content on the level of cognition. Although categorization of content based on Bloom's Revised Taxonomy is too subjective to be used as a design checklist, we do feel the process facilitates a discussion of high value.
For Researchers (RQ2): Can Bloom's Revised Taxonomy Allow Researchers to Make a More Objective Description of Game Content and Allow for Comparisons across Psychotherapeutic Games?

The low intercoder reliability indicates that applying the Revised Bloom Taxonomy to psychotherapeutic games does not provide a robust structure for objectivity. The classification of game content remains very open to interpretation, and the descriptions it provides should not be used to compare content across different psychotherapeutic games. To be useful as a means of comparison, the taxonomy needs to be further developed into a protocol, with designers, researchers, and therapists involved, until the process yields an acceptable intercoder reliability. We estimate that a lot of effort would be needed before Bloom's Revised Taxonomy becomes suitable, and it is advisable to first investigate whether other models or taxonomies might be of more value.

For Psychologists (RQ3)

Can Bloom's Revised Taxonomy Support Psychologists in Making a More Informed Choice Concerning Psychotherapeutic Games That Might Be Included in Their Therapy?

By describing content in terms of Player Actions with Learning Objectives, the designers provide better insight into how their choices relate to the intended processing by the player and the desired overall outcome of gameplay. This gives a therapist better insight into the level of cognitive engagement envisioned (as a lower bound) for different game content, which supports a more informed choice. Unfortunately, placing these PALOs in Bloom's Revised Taxonomy has not provided a reliable classification of game content and cannot be seen as valuable information.

Overall Conclusion

We have established that Bloom's Revised Taxonomy cannot be used as an objective classification of game content for psychotherapeutic games due to very low intercoder reliability. We have found the process of describing Player Actions with Learning Objectives of value, as it forces game designers to formalize their intentions. PALOs could serve as a common language between game designers, researchers, and psychologists to describe the content of psychotherapeutic games beyond the level of any individual game.


Computer Games Technology

Limitations

One of the limitations of this paper is that we only describe five games. Although this analysis is by no means exhaustive, we believe that an overview such as this can already be helpful to game designers or practitioners in the field of psychotherapeutic gaming. Moreover, analysis of game content can and should also be extended to include psychotherapeutic VR games. We are also aware that we do not provide an in-depth analysis in which knowledge and actions during gameplay are minutely observed, described, and categorized. As every gameplay session is a unique experience to some degree, we expect that if encoding were based on gameplay, subjectivity would increase and intercoder reliability would decrease.

Data Availability

The encoding data used in this research is available online through the Open Science Framework platform under the title of this paper.

Conflicts of Interest

No conflicts of interest have been discovered.


REFERENCES

1. T. A. Ceranoglu, "Video Games in Psychotherapy," Review of General Psychology, vol. 14, no. 2, pp. 141–146, 2010.
2. V. Brezinka, "Computer games supporting cognitive behaviour therapy in children," Clinical Child Psychology and Psychiatry, vol. 19, no. 1, pp. 100–110, 2014.
3. T. Fovet, J.-A. Micoulaud-Franchi, G. Vaiva et al., "Le serious game : applications thérapeutiques en psychiatrie," L'Encéphale, vol. 42, no. 5, pp. 463–469, 2016.
4. H. L. Horne-Moyer, B. H. Moyer, D. C. Messer, and E. S. Messer, "The Use of Electronic Games in Therapy: a Review with Clinical Implications," Current Psychiatry Reports, vol. 16, no. 12, 2014.
5. T. M. Fleming, C. Cheek, S. N. Merry et al., "Juegos serios para el tratamiento o la prevención de la depresión: una revisión sistemática," Revista de Psicopatología y Psicología Clínica, vol. 19, no. 3, p. 227, 2015.
6. P. Haring and H. Warmelink, "Looking for Metacognition," in Games and Learning Alliance, pp. 95–106, Springer International Publishing, 2016.
7. A. De Gloria, F. Bellotti, and R. Berta, "Serious Games for education and training," International Journal of Serious Games, vol. 1, no. 1, 2014.
8. B. S. Bloom, M. D. Engelhart, E. J. Furst et al., Taxonomy of Educational Objectives: Handbook I: Cognitive Domain, 1956.
9. R. J. Marzano and J. S. Kendall, "The need for a revision of Bloom's taxonomy," in The New Taxonomy of Educational Objectives, chapter 1, pp. 1–20, Corwin Press, 2006.
10. A. De Kock, P. Sleegers, and M. J. M. Voeten, "New learning and the classification of learning environments in secondary education," Review of Educational Research, vol. 74, no. 2, pp. 141–170, 2004.
11. L. W. Anderson, D. R. Krathwohl, and B. S. Bloom, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon, 2001.
12. P. R. Pintrich, "The role of metacognitive knowledge in learning, teaching, and assessing," Theory Into Practice, vol. 41, no. 4, pp. 219–225, 2002.


13. K. G. Kahl, L. Winter, and U. Schweiger, "The third wave of cognitive behavioural therapies: what is new and what is effective?" Current Opinion in Psychiatry, vol. 25, no. 6, pp. 522–528, 2012.
14. D. R. Krathwohl, "A revision of Bloom's taxonomy: an overview," Theory Into Practice, vol. 41, no. 4, pp. 212–218, 2002.
15. D. Coyle, M. Matthews, J. Sharry et al., "Personal Investigator: A therapeutic 3D game for adolescent psychotherapy," Interactive Technology and Smart Education, vol. 2, no. 2, pp. 73–88, 2005.
16. V. Brezinka, "Treasure Hunt - a serious game to support psychotherapeutic treatment of children," in eHealth Beyond the Horizon – Get IT There, S. K. Andersen, Ed., vol. 136, pp. 71–76, IOS Press, Amsterdam, The Netherlands, 2008.
17. V. Brezinka, "Ricky and the Spider - A Video Game to Support Cognitive Behavioural Treatment of Children with Obsessive-Compulsive Disorder," Clinical Neuropsychiatry, vol. 10, no. 3, 2013.
18. M. Hrehovcsik and L. van Roessel, "Using Vitruvius as a Framework for Applied Game Design," in Games for Health, B. Schouten, S. Fedtke, T. Bekker et al., Eds., pp. 131–152, Springer Fachmedien, Wiesbaden, 2013.
19. Gainplay Studio, Moodbot, 2016. Last accessed September 30, 2016. http://www.gainplaystudio.com/moodbot/.
20. A. M. Roepke, S. R. Jaffee, O. M. Riffle et al., "Randomized Controlled Trial of SuperBetter, a Smartphone-Based/Internet-Based Self-Help Tool to Reduce Depressive Symptoms," Games for Health Journal, vol. 4, no. 3, pp. 235–246, 2015.
21. A. Churches, "Bloom's digital taxonomy," in Educational Origami, vol. 4, 2009.
22. J. McGonigal, SuperBetter: A Revolutionary Approach to Getting Stronger, Happier, Braver and More Resilient, Penguin Press, New York, NY, USA, 2015.
23. A. F. Hayes and K. Krippendorff, "Answering the call for a standard reliability measure for coding data," Communication Methods and Measures, vol. 1, no. 1, pp. 77–89, 2007.
24. K. Krippendorff, "Computing Krippendorff's alpha reliability," Departmental Papers (ASC), vol. 43, 2007.


25. K. De Swert, Calculating Inter-Coder Reliability in Media Content Analysis Using Krippendorff's Alpha, Center for Politics and Communication, 2012.
26. S. C. Karpen and A. C. Welch, "Assessing the inter-rater reliability and accuracy of pharmacy faculty's Bloom's Taxonomy classifications," Currents in Pharmacy Teaching and Learning, vol. 8, no. 6, pp. 885–888, 2016.
