
Lecture Notes in Artificial Intelligence Subseries of Lecture Notes in Computer Science Edited by J. G. Carbonell and J. Siekmann

Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis and J. van Leeuwen

1434


Berlin Heidelberg New York Barcelona Budapest Hong Kong London Milan Paris Singapore Tokyo

Jean-Claude Heudin (Ed.)

Virtual Worlds First International Conference, VW’98 Paris, France, July 1-3, 1998 Proceedings


Series Editors
Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA
Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editor
Jean-Claude Heudin
International Institute of Multimedia
Pôle Universitaire Léonard de Vinci
F-92916 Paris La Défense Cedex, France
E-mail: [email protected]

Cataloging-in-Publication Data applied for

Die Deutsche Bibliothek - CIP-Einheitsaufnahme Virtual worlds : first international conference ; proceedings / VW ’98, Paris, France, July 1 - 3, 1998. Jean-Claude Heudin (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Budapest ; Hong Kong ; London ; Milan ; Paris ; Santa Clara ; Singapore ; Tokyo : Springer, 1998 (Lecture notes in computer science ; Vol. 1434 : Lecture notes in artificial intelligence) ISBN 3-540-64780-5

CR Subject Classification (1991): I.3.7, I.2.10, I.2, I.3, H.5, J.5

ISBN 3-540-64780-5 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

© Springer-Verlag Berlin Heidelberg 1998
Printed in Germany

Typesetting: Camera-ready by author
SPIN 10637689 06/3142 – 5 4 3 2 1 0

Printed on acid-free paper

Preface

1 Introduction

Imagine a virtual world with digital creatures that looks like real life, sounds like real life, and even feels like real life. Imagine a virtual world not only with nice three-dimensional graphics and animations, but also with realistic physical laws and forces. This virtual world could be familiar, reproducing some parts of our reality, or unfamiliar, with strange “physical” laws and artificial life forms.

For a researcher interested in the sciences of complexity, the idea of a conference about virtual worlds emerged from frustration. In the last few years, there has been an increasing interest in the design of artificial environments using image synthesis and virtual reality. The emergence of industry standards such as VRML [1] is an illustration of this growing interest. At the same time, the field of Artificial Life has addressed and modeled complex phenomena such as self-organization, reproduction, development, and evolution of artificial life-like systems [2]. One of the most popular works in this field has been Tierra, designed by Tom Ray: an environment producing synthetic organisms based on a computer metaphor of organic life, in which CPU time is the “energy” resource and memory is the “material” resource [3]. Memory is organized into informational patterns that exploit CPU time for self-replication. Mutation generates new forms, and evolution proceeds by natural selection as different creatures compete for CPU time and memory space.

However, very few works have used an Artificial Life approach together with Virtual Reality, or at least with advanced three-dimensional graphics. Karl Sims was probably one of the first researchers working in this direction. He designed a flexible genetic system to specify solutions to the problem of being a “creature” built of collections of blocks, linked by flexible joints powered by “muscles” controlled by circuits based on an evolvable network of functions [4]. Sims embedded these “block creatures” in simulations of real physics, such as in water or on a surface. These experiments produced a bewildering and fascinating array of creatures, like the swimming “snake” or the walking “crab”. Demetri Terzopoulos and his colleagues have also created a virtual marine world inhabited by realistic artificial fishes [5]. They have emulated not only the appearance, movement, and behavior of individual animals, but also the complex group behaviors evident in many aquatic ecosystems. Each animal was modeled holistically as an autonomous agent situated in a simulated physical world.

Considering recent advances in both Artificial Life and Virtual Reality, catalyzed by the development of the Internet, a unified approach seemed to be one of the most promising trends for the synthesis of realistic and imaginary virtual worlds. Thus, the primary goal of this conference was to set up an opportunity for researchers from both fields to meet and exchange ideas. In July 1998, the first international conference on virtual worlds was held at the International Institute of Multimedia. It brought together scientists involved in Virtual Reality, Artificial Life, Multi-agent Systems, and other fields of computer science and electronic art, all of whom share a common interest in the synthesis of digital worlds on computers.


The diversity and quality of the work reported herein reflect the impact that this new trend of research has had on the scientific community.

2 The Proceedings

The production of these proceedings was a major task, involving all the authors and reviewers. As the editor, I have managed the proceedings in a classical way. Every contribution that was accepted for presentation at the conference is in the proceedings. The program committee felt that these papers represented mature work of a level suitable for being recorded, most of them without modification, some of them requiring modifications to be definitively accepted. Besides the classical goal of a proceedings volume, the idea was to recapture in print the stimulating mix of ideas and works that were presented. Therefore, the papers are organized to reflect their presentation at the conference. There were three invited plenary speakers: Nadia Thalmann, Jeffrey Ventrella, and Yaneer Bar-Yam. Two of them chose to provide a printed version of their lecture. There were nine sessions, most of them in parallel, for a total of 36 papers in all, recorded here roughly in the order in which they were presented. The material covered in these proceedings is diverse and falls naturally into a number of categories: Virtual Reality, Artificial Life, Multi-agent Systems, Complexity, Applications of Virtual Worlds, and last but not least, Virtual Worlds and Art. This collection of papers constitutes a good sample of the work that appears necessary if we want to design large and realistic virtual worlds on the Internet in the near future.

3 Acknowledgments

Many people and groups contributed to the success of the conference. My sincere thanks go out to all of them. I would like to thank first all the distinguished authors who contributed to this volume for their willingness to share the excitement of a new enterprise. The committee which selected the papers included the editor along with: Michael Best (MIT Media Lab., USA), Yaneer Bar-Yam (NECSI, USA), Bruce Blumberg (MIT Media Lab., USA), Eric Bonabeau (Santa Fe Institute, USA), Terry Bossomaier (Charles Sturt University, Australia), Philippe Coiffet (Versailles & St Quentin en Yvelines University, France), Bruce Damer (Contact Consortium, USA), Guillaume Deffuant (CEMAGREF, France), Karine Douplitzky (Gallimard NRF, France), Steve Grand (Cyberlife, UK), Bob Jacobson (SRI, USA), Takeo Kanade (Carnegie Mellon University, USA), Hiroaki Kitano (Sony Computer Science Lab., Japan),


Jean Louchet (ENSTA, France), Nadia Magnenat-Thalmann (University of Geneva, Switzerland), Daniel Mange (EPFL, Switzerland), Jean-Arcady Meyer (ENS, France), Thomas S. Ray (ATR Human Information Research Lab., Japan), Tim Regan (British Telecom, UK), Bob Rockwell (Blaxxun Interactive, Germany), Scot Thrane Refsland (Gifu University, Japan), Demetri Terzopoulos (University of Toronto, Canada), Jeffrey Ventrella (Rocket Science Games, USA), Marie-Luce Viaud (INA, France), Claude Vogel (Semio, USA), Chris Winter (British Telecom, UK).

I am also grateful to Silicon Graphics (official partner of the conference), Canal+ Virtuel, and Softimage for their financial support. Special thanks are due to the New England Complex System Institute, the EvoNet Network of Excellence in Evolutionary Computation, the Contact Consortium, and the International Society on Virtual Systems and Multimedia for their support.

I had significant help in organizing and running this conference, most of it from the staff of the International Institute of Multimedia. First and foremost, I have to thank Claude Vogel, who encouraged and supported me at every step of the project. Monika Siejka performed an enormous amount of work; she was effectively my co-organizer, and it was a pleasure to work with her. Olga Kisseleva was another co-organizer, in charge of the artistic part of the conference. Thanks are also due to Sophie Dussault and Sylvie Perret for their help. Finally, all the staff of the Pôle Universitaire Léonard de Vinci were once again a pleasure to work with.

4 Conclusion

The term “virtual worlds” generally refers to Virtual Reality applications or experiences. In this volume, I have extended the use of the term to describe experiments that deal with the idea of synthesizing digital worlds on computers. Thus, Virtual Worlds (VW) could be defined as the study of computer programs that implement digital worlds with their own “physical” and “biological” laws. Constructing such complex artificial worlds seems to be extremely difficult to do in any sort of complete and realistic manner. Such a new discipline must benefit from a large amount of work in various fields: Virtual Reality, Artificial Life, Cellular Automata, Evolutionary Computation, Simulation of Physical Systems, and more. Whereas Virtual Reality has largely concerned itself with the design of three-dimensional graphical spaces, and Artificial Life with the simulation of living organisms, VW is concerned with the simulation of worlds and the synthesis of digital universes. This approach is something broader and more fundamental.

Throughout the natural world, at any scale, from particles to galaxies, one can observe phenomena of great complexity. Research done in traditional sciences such as biology and physics has shown that the basic components of complex systems are quite simple.


It is now a crucial problem to elucidate the universal principles by which large numbers of simple components, acting together, can self-organize and produce the complexity observed in our universe [6]. Therefore, VW is also concerned with the formal basis of synthetic universes. In this framework, the synthesis of virtual worlds offers a new approach for studying complexity.

I hope that the reader will find in this volume many motivating and enlightening ideas. My wish is that this book will contribute to the development and further awareness of the new and fascinating field of Virtual Worlds. As of now, the future looks bright.

References
1. Hartman, J., Wernecke, J.: The VRML 2.0 Handbook – Building Moving Worlds on the Web. Reading, Addison Wesley Developers Press (1996)
2. Langton, C.G.: Artificial Life. SFI Studies in the Sciences of Complexity, Vol. VI. Addison-Wesley (1988)
3. Ray, T.: An Approach to the Synthesis of Life. In Artificial Life II, SFI Studies in the Sciences of Complexity, Vol. X. Edited by C.G. Langton, C. Taylor, J.D. Farmer and S. Rasmussen. Addison-Wesley (1991)
4. Sims, K.: Evolving 3D Morphology and Behavior by Competition. Artificial Life, Vol. 1, No. 4. MIT Press (1995) 353–372
5. Terzopoulos, D., Xiaoyuan, T., Radek, G.: Artificial Fishes: Autonomous Locomotion, Perception, Behavior, and Learning in a Simulated Physical World. Artificial Life, Vol. 1, No. 4. MIT Press (1995) 327–351
6. Heudin, J.C.: L'évolution au bord du chaos. Editions Hermès, Paris (1998)

May 1998

Jean-Claude Heudin


Real Face Communication in a Virtual World

Won-Sook Lee, Elwin Lee, Nadia Magnenat Thalmann
MIRALab, CUI, University of Geneva
24, rue Général-Dufour, CH-1211 Geneva, Switzerland
Tel: +41-22-705-7763 Fax: +41-22-705-7780
E-mail: {wslee, lee, thalmann}@cui.unige.ch

Abstract. This paper describes an efficient method for making an individual face for animation from several possible inputs, and how to use this result for realistic talking-head communication in a virtual world. We present a method to reconstruct a 3D facial model from two orthogonal pictures taken from the front and side views. The method is based on extracting features from a face in a semiautomatic way and deforming a generic model. Texture mapping based on cylindrical projection is employed, using an image composed from the two pictures. A reconstructed head can be animated immediately and is able to talk with given text, which is transformed into corresponding phonemes and visemes. We also propose a system for individualized face-to-face communication through a network using MPEG-4. Keywords: Cloning, orthogonal pictures, DFFD, texture mapping, talking head, visemes, animation

1. Introduction

Individualized facial communication is becoming more important in modern computer-user interfaces. To visualize one's own face in a virtual world and let it talk with given input, such as text or video, is now a very attractive research area. With the fast pace of computing, graphics, and networking technologies, real-time face-to-face communication in a virtual world is now realizable. It is necessary to reconstruct an individual head in an efficient way to decrease the data transmission size, and to send only a few parameters for real-time performance. Cloning a real person's face has practical limitations in terms of time, simple equipment, and realistic shape. We present our approach to cloning a real face from two orthogonal views, emphasizing accessibility for anybody with low-priced equipment. Our method for giving an animation structure to range data from a laser scanner or stereoscopic camera is also described, but not in detail, because the high price of such equipment makes it less practical. The main idea is to detect feature points and modify a generic model with an


animation structure. After creating virtual clones, we can use them to animate and talk in a virtual world.

The organization of this paper is as follows. We give a review in Section 2, with a classification of existing methods for realistic face reconstruction and talking heads. In Section 3, we describe the idea of a system for individualized face-to-face communication through a network and our system for creating and animating a talking head from given text. Section 4 is dedicated to the reconstruction process, from feature detection to texture mapping, and the detailed process for the talking head is explained in Section 5. Finally, a conclusion is given.

2. Related Work

2.1. Face Cloning for Animation

There have been many approaches to reconstructing a realistic face in a virtual world, such as using a plaster model [12][1] or interactive deformation and texture mapping [2][15], which are time-consuming jobs. More efficient methods can be classified into four categories.

Laser Scanning In a range image vision system, some sensors, such as laser scanners, yield range images. For each pixel of the image, the range to the visible surface of the objects in the scene is known; therefore, the spatial location is determined for a large number of points on this surface. An example of a commercial 3D digitizer based on laser-light scanning is the Cyberware Color Digitizer [14].

Stripe Generator As an example of a structured-light range digitizer, a light striper with a camera and a stripe pattern generator can be used for face reconstruction with relatively cheap equipment compared to laser scanners. With the positions of the projector and camera and the stripe pattern known, a 3D shape can be calculated. Proesmans et al. [13] show a good dynamic 3D shape obtained with a slide projector, by frame-by-frame reconstruction of a video. However, it is a passive animation and new expressions cannot be generated.

Stereoscopy A distance measurement method such as stereoscopy can establish the correspondence at certain characteristic points. The method uses the geometric relation over stereo images to recover the surface depth. The C3D 2020 capture system [3] by the Turing Institute produces many VRML models using the stereoscopy method.

Most of the above methods concentrate on recovering a good shape, but their biggest drawback is that they provide only the shape, without structured information. To get a structured shape for animation, the most typical way is to modify an available generic model carrying structural information such as eyes, lips, nose, hair, and so on. Starting with a structured facial mesh, Lee et al. [10] developed algorithms that automatically construct functional models of the heads of human subjects from laser-scanned range and reflection data [14].


However, approaches based on 3D digitization to get range data often require special-purpose, high-cost hardware. Therefore, a common way of creating 3D objects is reconstruction from 2D photo information, which is accessible at a low price.

Modification with Feature Points on Pictures There are faster approaches that reconstruct a face shape from only a few pictures of a face. In these methods, a generic model with an animation structure in 3D is provided in advance, along with a limited number of feature points. These feature points are the most characteristic points for recognizing people; they are detected either automatically or interactively on two or more orthogonal pictures, and the other points on the generic model are modified by a special function. The 3D points are calculated by combining several 2D coordinates. Kurihara and Arai [9], Akimoto et al. [4], Ip and Yin [7], and Lee et al. [16] use interactive, semiautomatic, or automatic methods to detect feature points and modify a generic model. Some have drawbacks such as too few points to guarantee an appropriate shape from a very different generic head or accurate texture fitting, or automatic methods that are not robust enough for satisfactory results, such as simple filtering and texture image generation using simple linear-interpolation blending.

2.2. Talking Head

There are two approaches to the synthesis of talking heads. Pearce et al. [17] create an animation sequence with the input being a string of phonemes corresponding to the speech. In this case, the face is represented by a 3D model and animated by altering the position of various points in the 3D model. In the more recent work of Cohen and Massaro [18], English text is used as the input to generate the animation. The alternative is the image-based morphing approach. Ezzat and Poggio [19] have proposed a method of concatenating a collection of images, using a set of optical flow vectors to define the morphing transition paths, to create an animated talking head. In another recent work, Cosatto and Graf [20] proposed an automatic method of extracting samples from a video sequence of a talking person using image recognition techniques. In this case, the face image is partitioned into several facial parts that are later combined to generate a talking head. They focused on reducing the number of samples needed and on photorealistic lip movements.

2.3. Shared Virtual World

There have been numerous works on the topic of shared virtual environments. In most of these works [23][24][25], each user is represented by a fairly simple embodiment, ranging from cube-like appearances and non-articulated human-like or cartoon-like avatars to articulated body representations using rigid body segments. In the work of Pandzic et al. [26], a fully articulated body with skin deformations and facial animation is used. However, none of these works discussed the process of creating an individual face model in an efficient manner.


In other words, each individual face model usually takes hours of effort to complete. The MPEG-4 framework [27] under standardization can be used to achieve low-bandwidth face-to-face communication between two or more users, using individualized faces.

3. System Overview

In networked collaborative virtual environments, each user can be represented by a virtual human, with the ability to feel like “being together” by watching each other's face. Our idea for face-to-face communication through a network is to clone a face with texture as the starting process and to send the parameters for the reconstruction of the head model to the other users in the virtual environment. After this, only animation and audio parameters are sent to the others in real time to communicate. The actual facial animation can be done on each local host, and all the users can see every virtual human's animation. This provides a low-bandwidth solution for a teleconference between distant users, compared to traditional video conferencing, which sends a video stream in real time through the network. At the same time, it retains a high level of realism with an individualized, textured 3D head.

The general idea of a system for individualized face-to-face communication through a network is shown in Fig. 1. There are two hosts, host1 and host2. Each host has a generic model and a program to clone a person from a given input, such as two orthogonal pictures or range data. A person at host1 is reconstructed from pictures or from range data obtained with any range data equipment. The parameters for reconstruction are sent to host2 through the network: the texture image data, the texture coordinates, and either modification parameters or geometric position data for the points on the 3D head surface. The same procedure happens at host2. Finally, both host1 and host2 have two textured 3D heads. Although we send a texture image, which takes large bandwidth, it is a one-time process for reconstruction. After this process, only animation parameters and audio data are sent to the other host through the network. With the animation parameters given, each host animates the two heads on its own platform.

The part of this system showing the procedure for producing an individualized talking head is shown in Fig. 2. We reconstruct a real face from a given input. For reconstruction, only some points (so-called feature points) are extracted from the front and side views, or only from the front view if range data is available, and then a generic model is modified. After reconstructing a head, it is ready to animate with given animation parameters. We use this model and apply it to a talking head, which has animation abilities given text input. The input speech text is transformed into corresponding phonemes and animation parameters, so-called visemes. In addition, expression parameters are added to produce the final face with audio output.


[Fig. 1 diagram: two hosts, each holding a generic model. Parameters for shape modification and texture coordinates are exchanged only once; expression/phoneme parameters are exchanged continuously.]

Fig. 1. A system overview for individualized face-to-face communication through a network.
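The split between one-time reconstruction data and continuous animation parameters suggested by Fig. 1 can be sketched as two message types. The following Python sketch is illustrative only; the field names are our own, not those of the actual system or of MPEG-4:

from dataclasses import dataclass, field
from typing import List, Tuple

# One-time message: sent once per user when entering the virtual environment.
@dataclass
class ReconstructionMessage:
    texture_image: bytes                       # composed front/side texture (large, sent once)
    texture_coords: List[Tuple[float, float]]  # one (u, v) per vertex of the generic model
    shape_params: List[float]                  # feature displacements applied to the generic model

# Per-frame message: small, sent continuously during the conversation.
@dataclass
class AnimationMessage:
    visemes: List[Tuple[str, float]] = field(default_factory=list)      # (viseme id, intensity)
    expressions: List[Tuple[str, float]] = field(default_factory=list)  # emotional expressions
    audio_chunk: bytes = b""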

4. Face Cloning

We reconstruct a face from two kinds of input: orthogonal pictures, or range data in VRML format. The main idea is to modify a generic animation model with detected feature points and to apply automatic texture mapping. In this paper, we focus on orthogonal picture input rather than range data input, since the approach based on 3D digitization to get range data often requires special-purpose (and high-cost) hardware, and the process is quite similar to the orthogonal picture case.


We consider hair outline, face outline and some interior points such as eyes, nose, lips, eyebrows and ears as feature points.

[Fig. 2 diagram: input from a camera (orthogonal pictures) or from a laser scanner, light striper, or stereoscopic system (range data) feeds feature detection; the detected features drive modification of the generic model's shape and automatic texture fitting. Input speech text goes through text-to-speech synthesis, producing audio and phonemes; phonemes and expressions drive the facial animation system, yielding the talking head animation with audio output.]

Fig. 2. Overall structure for the individualized talking head.

4.1. Preparation and Normalization

First, we prepare two 2D wire frames composed of feature points with predefined relations for the front and side views. The frames are designed to be used as an initial position for the snake method later. Then we take pictures of the head from the front and side views. Each picture is taken at maximum resolution, with the face in a neutral expression and pose. To make the head heights of the side and front views the same, we measure them, and we choose one point in each view to match with the corresponding point in the prepared frame. Then we apply a transformation (scaling and translation) to bring the pictures into the wire-frame coordinate system, overlaying the frames on the pictures.
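A minimal sketch of this normalization step, assuming the head height has been measured in each picture and one reference point per view has been chosen (all names below are illustrative):

import numpy as np

def normalize_view(points, head_height_px, ref_point_px, ref_point_frame, frame_head_height):
    """Scale and translate 2D picture points into the wire-frame coordinate system.

    points: (N, 2) array of pixel coordinates in one view.
    head_height_px: measured head height in that picture, in pixels.
    ref_point_px: a chosen point in the picture (e.g. the chin tip), in pixels.
    ref_point_frame: the corresponding point in the prepared 2D frame.
    frame_head_height: head height in frame coordinates (same for both views).
    """
    s = frame_head_height / head_height_px                            # uniform scale
    t = np.asarray(ref_point_frame) - s * np.asarray(ref_point_px)    # translation
    return s * np.asarray(points, dtype=float) + t

# Using the same frame_head_height for the front and side pictures makes
# the two head heights equal, as required before feature detection.
front = normalize_view([[120, 40], [130, 300]], 260, [125, 300], [0.0, 0.0], 1.0)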

4.2. Feature Detection

We provide an automatic feature point extraction method with an interface for interactive correction when needed. There are methods that detect feature points using only special background information and a predefined threshold [4][7], then apply edge detection and threshold again; image segmentation by clustering has also been used [4]. However, this is not very reliable, since the boundary between hair and face and the chin lines are not easy to detect in many cases. Moreover, color thresholding is too sensitive and depends on each individual's facial image, and therefore requires many trials and experiments. We therefore use a structured snake, which has the functionality to keep the structure of the contours. It does not depend much on the background color and is more robust than a simple thresholding method.

Structured Snake First developed by Kass et al. [8], the active contour method, also called snakes, is widely used to fit a contour to a given image. On top of the conventional snake, we add three more functions. First, we interactively move a few points to their corresponding positions and anchor them to keep the structure of the points while the snake evolves, which is also useful for getting a more reliable result when the edge we would like to detect is not very strong. Second, we use color blending for special areas, so that the snake can be attracted by a special color [5]. Third, when the color is not very helpful and the Sobel operator is not enough for good edge detection, we use a multiresolution technique [6] to obtain strong edges. It has two main operators: REDUCE, with a Gaussian operator, and EXPAND. The subtraction produces an image resembling the result of the Laplacian operators commonly used in image processing. The more times the REDUCE operator is applied, the stronger the edges.
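A small sketch of this multiresolution edge enhancement, under the assumption that REDUCE is a Gaussian filter followed by subsampling and EXPAND is plain upsampling; the exact kernels of the original method [6] may differ:

import numpy as np
from scipy import ndimage

def reduce_(img):
    """REDUCE: Gaussian low-pass filtering followed by subsampling."""
    return ndimage.gaussian_filter(img, sigma=1.0)[::2, ::2]

def expand(img, shape):
    """EXPAND: upsample back to the given shape (linear interpolation)."""
    zy, zx = shape[0] / img.shape[0], shape[1] / img.shape[1]
    return ndimage.zoom(img, (zy, zx), order=1)

def band_pass_edges(img, levels=2):
    """Subtracting the expanded, reduced image from the original yields a
    Laplacian-like band-pass image; more REDUCE applications give stronger,
    coarser edges."""
    low = img
    for _ in range(levels):
        low = reduce_(low)
    return img - expand(low, img.shape)

edges = band_pass_edges(np.random.rand(128, 128).astype(float), levels=2)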

4.3. Modifying a Generic Model

3D Points from Two 2D Points We produce 3D points from two 2D points on the frames, with a predefined relation between points from the front view and from the side view. Some points have x, yf, ys, z, so we take ys, yf, or the average of ys and yf for the y coordinate (subscripts s and f mean side and front view). Some others have only x and yf, and others only x and ys. Using a predefined relation from a typical face, we get the 3D position (x, y, z).
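These combination rules can be sketched as follows, assuming the front view supplies (x, yf) and the side view (z, ys); the fallback values standing in for the "typical face" relation are placeholders:

def make_3d(front_pt=None, side_pt=None, default_x=0.0, default_z=0.0):
    """Combine 2D feature coordinates from two orthogonal views into one 3D point.

    front_pt: (x, yf) from the front view, or None if the feature is invisible there.
    side_pt:  (z, ys) from the side view, or None likewise.
    Missing coordinates fall back to values taken from a typical face (defaults here).
    """
    if front_pt and side_pt:
        x, yf = front_pt
        z, ys = side_pt
        return (x, (yf + ys) / 2.0, z)   # average of yf and ys for y
    if front_pt:                          # only (x, yf) known
        x, yf = front_pt
        return (x, yf, default_z)
    z, ys = side_pt                       # only (z, ys) known
    return (default_x, ys, z)

nose_tip = make_3d(front_pt=(0.0, 1.2), side_pt=(0.9, 1.25))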


Dirichlet Free-Form Deformations (DFFD) Distance-related functions have been employed by many researchers [4][7][9] to calculate the displacement of non-feature points relative to the detected feature points. We propose to use DFFD [11], since it has the capacity for non-linear deformations, as opposed to the generally applied linear interpolation, and gives a smooth result for the surface. We apply the DFFD to the points of the generic head; the displacement of the non-feature points depends on the distance to the control points. Since DFFD applies Voronoi and Delaunay triangulation, some points lying outside the triangles of control points are not modified; the out-box of 27 points can be adjusted locally to handle them. Then the original shapes of the eyes and teeth are recovered, since the modifications may create unexpected deformations for them. Our system also provides feedback modification of a head between feature detection and the resulting head.

[Fig. 3 panels: the generic model, the 3D feature lines, and the modified head.]

Fig. 3. A result of DFFD modification compared with the original head.

4.4. Automatic Texture Mapping

To increase realism, we utilize texture mapping. Texture mapping needs a 2D texture image and texture coordinates for each point on the 3D head. Since our input is two pictures, we must generate a texture image that combines them into one picture.

Texture Image Generation For smooth texture mapping, we assemble the two images from the front and side views into one. The boundaries in the two pictures are detected using boundary color information or the detected feature points for the face and hair boundaries. Since the hair shape is simple in a generic model, the boundary of the side view is modified automatically using the detected back-head profile feature points, to obtain a proper texture for the back of the neck. Then a cylindrical projection of each image is computed. The two projected images are cropped at a certain position (we use the eye extremes, because the eyes are important to keep at high resolution), so that the angular range of the final assembled image is 360°. Finally, a multiresolution spline assembling method [6] is used to produce one image for texture mapping, preventing visible boundaries in the image mosaic.


[Fig. 4 diagram: crop lines → cylindrical projection of every point on the head → texture coordinates of the feature points and six extra points, following the same projection and crop lines as the texture image; Voronoi triangulation of the feature points (six points added for a convex hull containing every point on the head); Barycentric coordinates of every point with respect to the Voronoi triangles of feature points; multiresolution technique → final texture coordinates of every point on the head (eye and teeth coordinates are decided earlier and shared by every individual head).]

Fig. 4. Texture mapping process.

Texture Fitting The main idea of texture fitting is to map the 2D image onto the 3D shape. The texture coordinates of the feature points are calculated from the detected position data and the function applied for texture image generation. The problem of texture fitting is then how to decide the texture coordinates of all the other points on the head surface. We first apply a cylindrical projection to every point on the 3D head surface. Extra points are added to make a convex hull containing all points, so that every point is located on the image. Then a Voronoi triangulation of the control (feature) points and extra points is processed, and the local Barycentric coordinates of every point within its surrounding Voronoi triangle are calculated. Finally, the texture coordinates of each point on the 2D texture image are obtained from the texture coordinates of the control and extra points and the corresponding Barycentric coordinates.
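A minimal sketch of the cylindrical projection step that maps head vertices to texture coordinates; the crop-line handling is simplified and the Voronoi/Barycentric interpolation of non-feature points is omitted:

import numpy as np

def cylindrical_uv(vertices, crop_left, crop_right, y_min, y_max):
    """Project 3D head vertices onto a cylinder and map them to texture space.

    vertices: (N, 3) array of points, head axis assumed along y.
    crop_left/crop_right: angles (radians) of the crop lines; the cropped
    range is rescaled so the combined image spans the full texture width.
    """
    v = np.asarray(vertices, dtype=float)
    theta = np.arctan2(v[:, 0], v[:, 2])                 # angle around the head
    u = (theta - crop_left) / (crop_right - crop_left)   # 0..1 across the image
    w = (v[:, 1] - y_min) / (y_max - y_min)              # vertical coordinate 0..1
    return np.stack([u, w], axis=1)

uv = cylindrical_uv([[0.1, 1.5, 0.9], [-0.2, 1.0, 0.8]],
                    crop_left=-np.pi, crop_right=np.pi, y_min=0.0, y_max=2.0)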

4.5. Result

A final textured head is shown in Fig. 5 together with the input images; the whole process from normalization to texture mapping takes a few minutes.


Fig. 5. A final reconstructed head, with the two input images on the left side. The back of the head has a proper texture too.

5. Talking Head and Animation

In this section, we describe the steps for animating a 3D face model, reconstructed as described in the previous section, according to an input speech text. First, the input text is provided to a text-to-speech synthesis system. In our case, we use the Festival Speech Synthesis System [21], developed at the University of Edinburgh. It produces the audio stream, which is subsequently played back in synchronization with the facial animation. The other output that is needed from this system is the temporized phoneme information, which is used for generating the facial animation.

In the facial animation system [22] used in our work, the basic motion parameter for animating the face is called a Minimum Perceptible Action (MPA). Each MPA describes a corresponding set of visible features, such as movement of the eyebrows, jaw, or mouth, occurring as a result of muscle contractions and pulls. In order to produce facial animation based on the input speech text, we have defined a visual correlate for each phoneme, commonly known as a viseme. In other words, each viseme simply corresponds to a set of MPAs. Fig. 6 shows a cloned head model with a few visemes corresponding to the pronunciation of the words indicated. With the temporized phoneme information, facial animation is then generated by concatenating the visemes. We have limited the set of phonemes to those used in the Oxford English Dictionary. However, it is easy to extend the set of phonemes in order to generate facial animation for languages other than English, as this can be done simply by adding the corresponding visemes.

The facial animation described so far is mainly concerned with the lip movements corresponding to the input speech text. However, this animation would appear artificial without eye blinks and movements of the eyes and head. Therefore, we have also included random eye blinks and movements of the eyes and head to add more realism to the generated animation. An analysis of the input speech text can also be done to infer the emotional state of the sentence. Then the corresponding emotional facial expression can be applied to the animation as well. For example, the eyebrows can be raised when the emotional state is surprise.
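The phoneme-to-viseme expansion can be sketched as follows. Both the phoneme symbols and the MPA names are invented for illustration; the real system uses 44 visemes defined over the MPAs of [22]:

# Hypothetical viseme table: phoneme -> set of MPA intensities (0..1).
VISEMES = {
    "p": {"close_lips": 1.0},
    "aa": {"open_jaw": 0.7, "open_lips": 0.6},
    "oo": {"pucker_lips": 0.9, "open_jaw": 0.3},
}

def build_track(timed_phonemes, fps=25):
    """Expand (phoneme, start_sec, end_sec) triples into per-frame MPA dicts."""
    total = max(end for _, _, end in timed_phonemes)
    frames = [{} for _ in range(int(total * fps) + 1)]
    for phoneme, start, end in timed_phonemes:
        mpas = VISEMES.get(phoneme, {})
        for f in range(int(start * fps), int(end * fps) + 1):
            frames[f].update(mpas)       # concatenate visemes over time
    return frames

track = build_track([("p", 0.00, 0.08), ("aa", 0.08, 0.30), ("oo", 0.30, 0.45)])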


Fig. 6. A cloned head model with some examples of visemes for the indicated words. Our talking head system utilizes 44 visemes and some facial expression parameters.

An example of cloned persons interacting in a shared virtual environment is shown in Fig. 7. Two kinds of generic bodies, female and male, are provided in advance. The generic bodies can be adjusted according to several ratios using Bodylib [28]. We connect the individualized heads onto the bodies by specifying a transformation matrix (see the sketch after Fig. 7). Basically, we need four types of data, namely the geometrical shape, the animation structure, the texture image, and the texture coordinates. In our case, every individualized head shares the same animation structure. The final rendering is then produced in real time. Our whole process from individualization to a final talking head with a body in a virtual world takes only a few minutes.

Fig. 7. Cloned persons interacting in shared virtual environments.
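Connecting a head to a body through a single transformation matrix might look like the following sketch; the scale/rotation/translation decomposition is our assumption, not the paper's actual code:

import numpy as np

def attach_head(head_vertices, scale, rotation, translation):
    """Place an individualized head onto a generic body with one 4x4 matrix.

    rotation: 3x3 matrix aligning the head with the body's neck frame.
    translation: 3-vector positioning the head on top of the neck joint.
    """
    m = np.eye(4)
    m[:3, :3] = scale * np.asarray(rotation)
    m[:3, 3] = translation
    v = np.asarray(head_vertices, dtype=float)
    homog = np.hstack([v, np.ones((v.shape[0], 1))])     # homogeneous coordinates
    return (homog @ m.T)[:, :3]

placed = attach_head([[0.0, 0.1, 0.0]], scale=1.05,
                     rotation=np.eye(3), translation=[0.0, 1.6, 0.0])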

6. Conclusion and Future Research

In this paper, we proposed a way of creating an individual face model that is efficient in terms of the time needed for its creation, transmitting the information to other users, and sending parameters for facial animation. The input of two orthogonal pictures can easily be obtained with any kind of conventional camera. The process of individualization consists of several steps, including feature detection on the 2D pictures,


Dirichlet Free-Form Deformation for the modification of a generic model, and automatic texture mapping. The reconstructed heads, connected to given bodies, are used in a virtual environment. The result is then used immediately to produce an individualized talking head, created from given text and a viseme database. This whole process can be combined with the MPEG-4 framework [27] to create low-bandwidth face-to-face communication between two or more users. In the television broadcasting industry, the reconstruction method that we have described in this paper will allow actors to be cloned rapidly. The television producer will then be able to direct the virtual actors to say their lines with the appropriate facial expressions in a virtual studio. This will obviously reduce the cost of production.

7. Acknowledgment

We are grateful to Chris Joslin for proofreading this document.

8. References

[1] Meet Geri: The New Face of Animation, Computer Graphics World, Volume 21, Number 2, February 1998.
[2] Sannier G., Magnenat Thalmann N., "A User-Friendly Texture-Fitting Methodology for Virtual Humans", Computer Graphics International '97, 1997.
[3] Exhibition on the 10th and 11th September 1996 at the Industrial Exhibition of the British Machine Vision Conference.
[4] Takaaki Akimoto, Yasuhito Suenaga, and Richard S. Wallace, Automatic Creation of 3D Facial Models, IEEE Computer Graphics & Applications, Sep. 1993.
[5] P. Beylot, P. Gingins, P. Kalra, N. Magnenat Thalmann, W. Maurel, D. Thalmann, and F. Fasel, 3D Interactive Topological Modeling using Visible Human Dataset, Computer Graphics Forum, 15(3):33-44, 1996.
[6] Peter J. Burt and Edward H. Adelson, A Multiresolution Spline With Application to Image Mosaics, ACM Transactions on Graphics, 2(4):217-236, Oct. 1983.
[7] Horace H.S. Ip, Lijin Yin, Constructing a 3D individual head model from two orthogonal views, The Visual Computer, Springer-Verlag, 12:254-266, 1996.
[8] M. Kass, A. Witkin, and D. Terzopoulos, Snakes: Active Contour Models, International Journal of Computer Vision, pp. 321-331, 1988.
[9] Tsuneya Kurihara and Kiyoshi Arai, A Transformation Method for Modeling and Animation of the Human Face from Photographs, Computer Animation, Springer-Verlag Tokyo, pp. 45-58, 1991.
[10] Yuencheng Lee, Demetri Terzopoulos, and Keith Waters, Realistic Modeling for Facial Animation, In Computer Graphics (Proc. SIGGRAPH), pp. 55-62, 1996.
[11] L. Moccozet, N. Magnenat-Thalmann, Dirichlet Free-Form Deformations and their Application to Hand Simulation, Proc. Computer Animation, IEEE Computer Society, pp. 93-102, 1997.


[12] N. Magnenat-Thalmann, D. Thalmann, The Direction of Synthetic Actors in the film Rendez-vous à Montréal, IEEE Computer Graphics and Applications, 7(12):9-19, 1987.
[13] Marc Proesmans, Luc Van Gool, Reading between the lines - a method for extracting dynamic 3D with texture, In Proceedings of VRST, pp. 95-102, 1997.
[14] http://www.viewpoint.com/freestuff/cyberscan
[15] LeBlanc A., Kalra P., Magnenat-Thalmann N. and Thalmann D., Sculpting with the 'Ball & Mouse' Metaphor, Proc. Graphics Interface '91, Calgary, Canada, pp. 152-159, 1991.
[16] Lee W. S., Kalra P., Magnenat Thalmann N., Model Based Face Reconstruction for Animation, Proc. Multimedia Modeling (MMM) '97, Singapore, pp. 323-338, 1997.
[17] A. Pearce, B. Wyvill, G. Wyvill, D. Hill (1986) Speech and Expression: A Computer Solution to Face Animation, Proc. Graphics Interface '86, Vision Interface '86, pp. 136-140.
[18] M. M. Cohen, D. W. Massaro (1993) Modeling Coarticulation in Synthetic Visual Speech, eds. N. M. Thalmann, D. Thalmann, Models and Techniques in Computer Animation, Springer-Verlag, Tokyo, 1993, pp. 139-156.
[19] T. Ezzat, T. Poggio (1998) MikeTalk: A Talking Facial Display Based on Morphing Visemes, Submitted to IEEE Computer Animation '98.
[20] E. Cosatto, H.P. Graf (1998) Sample-Based Synthesis of Photo-Realistic Talking Heads, Submitted to IEEE Computer Animation '98.
[21] A. Black, P. Taylor (1997) Festival Speech Synthesis System: system documentation (1.1.1), Technical Report HCRC/TR-83, Human Communication Research Centre, University of Edinburgh.
[22] P. Kalra (1993) An Interactive Multimodal Facial Animation System, Ph.D. Thesis, École Polytechnique Fédérale de Lausanne.
[23] Barrus J. W., Waters R. C., Anderson D. B., "Locales and Beacons: Efficient and Precise Support For Large Multi-User Virtual Environments", Proceedings of IEEE VRAIS, 1996.
[24] Carlsson C., Hagsand O., "DIVE - a Multi-User Virtual Reality System", Proceedings of IEEE VRAIS '93, Seattle, Washington, 1993.
[25] Macedonia M.R., Zyda M.J., Pratt D.R., Barham P.T., Zestwitz, "NPSNET: A Network Software Architecture for Large-Scale Virtual Environments", Presence: Teleoperators and Virtual Environments, Vol. 3, No. 4, 1994.
[26] I. Pandzic, T. Capin, E. Lee, N. Magnenat-Thalmann, D. Thalmann (1997) A Flexible Architecture for Virtual Humans in Networked Collaborative Virtual Environments, Proc. EuroGraphics '97, Computer Graphics Forum, Vol. 16, No. 3, pp. C177-C188.
[27] I. Pandzic, T. Capin, N. Magnenat-Thalmann, D. Thalmann (1997) MPEG-4 for Networked Collaborative Virtual Environments, Proc. VSMM '97, pp. 19-25.
[28] J. Shen, D. Thalmann, Interactive Shape Design Using Metaballs and Splines, Proc. Implicit Surfaces '95, Grenoble, pp. 187-196.

Animated Impostors for Real-Time Display of Numerous Virtual Humans

Amaury Aubel, Ronan Boulic and Daniel Thalmann
Computer Graphics Lab, Swiss Federal Institute of Technology
CH-1015 Lausanne, Switzerland
{aubel, boulic, thalmann}@lig.di.epfl.ch

Abstract. Rendering and animating a multitude of articulated characters in real time presents a real challenge, and few hardware systems are up to the task. Up to now, little research has been conducted to tackle the issue of real-time rendering of numerous virtual humans. However, due to the growing interest in collaborative virtual environments, the demand for numerous realistic avatars is becoming stronger. This paper presents a hardware-independent technique that improves the display rate of animated characters by acting solely on the geometric and rendering information. We first review the acceleration techniques traditionally in use in computer graphics and highlight their suitability to articulated characters. Then we show how impostors can be used to render virtual humans. Finally, we introduce a concrete case study that demonstrates the effectiveness of our approach.

1 Introduction

Even though our visual system is not yet deceived when confronted with virtual humans, our acceptance of virtual characters has greatly improved over the past few years. Today's virtual humans faithfully embody real participants in collaborative virtual environments, for example, and are even capable of conveying emotions through facial animation [1]. Therefore, the demand for realistic real-time virtual humans is becoming stronger. Yet, despite the ever-increasing power of graphics workstations, rendering and animating virtual humans remains a very expensive task. Even a very high-end graphics system can have trouble sustaining a sufficient frame rate when it has to render numerous moving human figures commonly made up of thousands of polygons. While there is little doubt that hardware systems will eventually be fast enough, a few simple yet powerful software techniques can be used to speed up rendering of virtual humans by an order of magnitude.

Because 3D chips were not affordable or did not even exist in the 80s, video game characters, human-like or not, were then represented with 2D sprites. A sprite can be thought of as a block of pixels and a mask. The pixels give the color information of the final 2D image, while the mask corresponds to a binary transparency channel. Using sprites, a human figure could easily be integrated into the decor. As more computing power became available in the 90s, the video game industry shifted towards 3D. However, the notion of sprites can also be used in the context of 3D rendering. This has been successfully demonstrated with billboards, which are basically 3D sprites, used for rendering very complex objects like trees or plants. In our opinion, image-based rendering can also be used in the case of virtual humans, by relying on the intrinsic temporal coherence of the animation.

Current graphics systems rarely take advantage of temporal coherence during animation. Yet changes from frame to frame in a static scene are typically very small, which can obviously be exploited [9]. This still holds true for moving objects such as virtual humans, provided that the motion remains slow in comparison with the graphics frame rate. We present in this paper a software approach to accelerated rendering of moving, articulated characters, which could easily be extended to any moving and/or self-deforming object. Our method is based on impostors, a combination of traditional level-of-detail techniques and image-based rendering, and relies on the principle of temporal coherence. It does not require special hardware (except texture mapping and Z-buffering capabilities, which are commonplace on high-end workstations nowadays), though fast texture paging and frame buffer texturing are desirable for optimal performance.

The next section gives an overview of the existing schemes that are used to yield significant rendering speedups. Section 3 briefly describes our virtual human model and discusses the limitations of geometry level-of-detail techniques. The notion of dynamically generated impostors is introduced in Section 4 with the presentation of an algorithm that generates virtual humans from previously rasterized images. In Section 5 the duration of the validity of the image cache is discussed. The following section then shows how this method can be embedded in a large-scale simulation of a human flow. Finally, Section 7 concludes the paper with a summary of the results we obtained and leads on to some possible future work.

2 Background

There exist various techniques to speed up the rendering of a geometrical scene. They roughly fall into three categories: culling, geometric level-of-detail, and image-based rendering, which encompasses the concept of image caching. They all have in common the idea of reducing the complexity of the scene while retaining its visual characteristics. Besides, they can often be combined to produce better results, as shown in Sections 3 and 4.

Culling algorithms basically discard objects or parts of objects that are not visible in the final rendered image: an object is not sent to the graphics pipeline if it lies outside the viewing frustum (visibility culling) or if it is occluded by other parts of the scene (occlusion culling). Luebke and Georges described an occlusion culling system well suited for highly occluded architectural models that determines potentially visible sets at render time [3]. One major drawback of occlusion culling algorithms is that they usually require a specific organization of the whole geometry database: the scene is typically divided into smaller units or cells to accelerate the culling process.


Therefore, they perform poorly on individual objects. Because of this restriction, it is no use trying to apply occlusion techniques to virtual humans if they are considered individually, as we do. Nevertheless, occlusion culling might be contemplated at a higher level if the virtual humans are placed in a densely occluded world, as could be the case when simulating a human crowd for instance.

Geometric level of detail (LOD) attempts to reduce the number of rendered polygons by using several representations of decreasing complexity of an object. At each frame the appropriate model or resolution is selected. Typically the selection criterion is the distance to the viewer, although the object motion is also taken into account (motion LOD) in some cases. The major hindrance to using LOD is related to the problem of multi-resolution modeling, that is to say, the automatic generation from a 3D object of simpler, coarser 3D representations that bear as strong a resemblance as possible to the original object. Heckbert and Garland [4] presented a survey of mesh simplification techniques. More recently, some more work has been done to provide visually better approximations [5,6]. Apart from the problem of multi-resolution modeling, it is usually quite straightforward to use LODs in virtual reality applications. Funkhouser and Séquin described a benefit heuristic [15] to select the best resolution for each rendered object in the scene. Several factors are taken into account in their heuristic: image-space size, so as to diminish the contribution of distant objects; distance to the line of sight, because our visual system perceives objects on the periphery with less accuracy; and motion, because fast moving objects tend to appear blurred in our eyes. In addition, objects can be assigned different priorities, since they may not all play an equal role in the simulation. Finally, hysteresis is also taken into consideration in order to avoid the visually distracting effect that occurs when an object keeps switching between two LODs. Recently Pratt et al. [14] have applied geometric LODs to virtual humans in large-scale, networked virtual environments. They used a heuristic to determine the viewing distances that should trigger a LOD switch. Each simulated human has four different resolutions, ranging from about 500 polygons down to only three polygons. However, their lowest resolutions are probably too coarse to be actually used unless the virtual human is only a few pixels high, in which case a technique other than polygon rendering may be considered.

Finally, image caching is based on the principle of temporal coherence during animation. The basic idea is to re-use parts of the frame buffer content over several frames, thus avoiding re-rendering the whole scene from scratch for each frame [2]. The hardware architecture Talisman [7] exploits this temporal coherence of the frame buffer: independent objects are rendered at different rates into separate image layers, which are composited together at video rate to create the displayed image. In addition, the software host is allowed to maintain a priority list for the image layers, which means that objects in the foreground can be updated more frequently than remote objects in the background. Maciel et al. [8] presented a navigation system for large environments, in which they introduced the concept of impostors. An impostor is some extremely simple geometry, usually texture-mapped planes, which retains the visual characteristics of the original 3D object.
In their navigation system they replaced clusters of objects with textured planes (i.e. impostors), one of the first attempts at software image-based rendering. But they do not generate the textures on the fly.


Hence their system requires a pre-processing stage during which the scene is parsed: N textures - where N corresponds to the number of potential viewing directions - are generated for every object and stored in memory. This approach is thereby likely to consume the texture memory rapidly. Schaufler and Stürzlinger first introduced the notion of dynamically generated impostors [9]. They managed to solve the problem of texture memory usage, since textures are generated on demand in their system. Since then, two other software solutions using image-based rendering have been proposed concurrently and independently to accelerate walkthroughs of complex environments [10,11]. Lastly, Sillion et al. used impostors augmented with three-dimensional information for the visualization of urban scenery [18]. Although impostors appear to be one of the most promising techniques, their use has so far mainly been restricted to walkthroughs of large environments. We believe however that this technique can bring a substantial improvement to virtual humans too.
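As a rough illustration of the kind of benefit heuristic described above, the toy scoring function below combines the factors mentioned by Funkhouser and Séquin [15]; the weights and the exact formula are invented for illustration:

def benefit(obj, lod):
    """Toy benefit score for rendering `obj` at resolution `lod` (0 = coarsest).

    obj is a dict with illustrative keys; a higher benefit means the extra
    polygons of a finer LOD are worth spending on this object.
    """
    base = obj["screen_size_px"] * obj["priority"]
    base /= 1.0 + obj["angle_to_gaze"]       # periphery is perceived less accurately
    base /= 1.0 + obj["speed"]               # fast motion appears blurred anyway
    return base * (1.0 - 0.5 ** (lod + 1))   # diminishing returns at finer LODs

b = benefit({"screen_size_px": 180, "priority": 1.0,
             "angle_to_gaze": 0.2, "speed": 0.1}, lod=2)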

3 A Multi-Resolution Virtual Human

In this section we show how a multi-resolution virtual human can successfully be constructed and animated. We have designed a human modeler that solves the traditional multi-resolution problem described above. By relying on implicit surfaces, we avoid resorting to complicated algorithms and automatically generate multiple look-alike resolutions of a model. Finally, we explain why a multi-resolution human alone does not provide a sufficient gain in rendering time.

3.1 Multi-Resolution Modeling

Our real-time virtual human model consists of an invisible skeleton and a skin. The underlying skeleton is a hierarchy of joints that correspond to the main joints of real humans. Each joint has a set of degrees of freedom, rotation and/or translation, which are constrained to authorized values based on real human mobility capabilities [12]. Unlike other attempts [14], we did not model a multi-resolution skeleton, because our purpose was not to demonstrate the effectiveness of animation level-of-detail. Hand joints can be replaced with a single joint though.

The skin of our virtual human is generated by means of an in-house modeler that relies on a multi-layer approach (Fig. 1): ellipsoidal metaballs and ellipsoids are employed to represent the gross shape of bone, muscle, and fat tissue [13]. Each primitive is attached to its proximal joint in the underlying human skeleton. The set of primitives then defines an implicit surface that accurately approximates the real human skin. Sampling this implicit surface results in a polygonal mesh, which can be used directly for rendering. This polygonal mesh constitutes the skin of our virtual human model.

Sampling the implicit surface is done as follows: we start by defining contours circling around each limb link of the underlying skeleton. We then cast rays in a star-shaped manner for each contour, with the ray origins sitting on the skeleton link.


For each ray, we compute the outermost intersection point with the implicit surface surrounding the link. The intersection is a sample point on the cross-section contour. Once all the sampling is done, it is quite straightforward to construct a mesh from the sample points. Thus, we can generate more or less detailed polygonal meshes by simply varying the sample density, i.e., the number of sampled contours as well as the number of sampled points per contour. And yet, since the implicit surface remains the same no matter the sampling frequency, the generated meshes look very similar (Fig. 2).

Fig. 1. Multi-layered model [13]

Fig. 2. Body meshes of decreasing complexity
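A minimal sketch of the contour-sampling procedure described above, with a single spherical metaball standing in for the full set of ellipsoidal primitives:

import numpy as np

def field(p, centers, radii):
    """Summed metaball field; the skin is the iso-surface field(p) == 1."""
    d2 = ((p - centers) ** 2).sum(axis=1)
    return (radii ** 2 / d2).sum()

def sample_contour(origin, axis, centers, radii, n_rays=16, r_max=1.0, steps=200):
    """Cast rays star-wise from a skeleton link and keep, per ray, the
    outermost point where the field crosses the iso-value 1."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    # Build two directions orthogonal to the link axis (assumes axis is not z).
    u = np.cross(axis, [0.0, 0.0, 1.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    contour = []
    for k in range(n_rays):
        a = 2 * np.pi * k / n_rays
        d = np.cos(a) * u + np.sin(a) * v
        # March from the outside in; the first crossing is the outermost one.
        for t in np.linspace(r_max, 1e-3, steps):
            if field(origin + t * d, centers, radii) >= 1.0:
                contour.append(origin + t * d)
                break
    return np.array(contour)

pts = sample_contour(np.zeros(3), [0, 1, 0],
                     centers=np.array([[0.0, 0.0, 0.0]]), radii=np.array([0.3]))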

As for the head, hands and feet, we still have to rely on a traditional decimation technique to simplify the original mesh. Manual intervention is still needed at the end of this process to smooth the transition between LODs. The body extremities can cleverly be replaced with simple textured geometry for the lowest resolution (Fig. 3), which dramatically cuts down the number of triangles.

Fig. 3. Levels of detail


3.2 Animation

The skeleton of our virtual human comprises a total of 74 DOFs corresponding to the real human joints, plus a few global mobility nodes, which are used to orient and position the virtual human in the world. In broad lines, animating a virtual human consists in updating this skeleton hierarchy, including the global mobility joints, at a fixed frame rate. There exist several techniques to feed the joints with new angle/position values. Motion capture, for instance, is used to record the body joint values for a given lapse of time. The motion can later be played back on demand for any virtual human. Key-frame animation is another popular technique, in which the animator explicitly specifies the kinematics by supplying key-frame values and lets the computer interpolate the values for the in-between frames.

During animation the appropriate resolution is selected for each individual according to the Euclidean distance to the viewpoint. Therefore, far objects and those on the periphery contribute less to the final image. Note that we could also take the general motion of the virtual human into account. Finally, at a higher level, typically the application layer, virtual humans could be assigned different rendering priorities too. For example, in the context of a human crowd simulation where individuals move and act in clusters, the application could decide to privilege some groups.

3.3 Limitations of Geometric LODs

LOD is a very popular technique, probably because of its simplicity. It is no wonder that LOD has been widely used in computer graphics, whether in virtual reality applications or in video games. There are some limitations to LOD though. First, shading artifacts may arise when the geometry is extremely simplified. This is conspicuous when Gouraud shading is used, because the shading function is evaluated for fewer points as the geometry is decimated. Popping is another recurrent problem: however look-alike two successive LODs may be, the switch can sometimes be obvious to the viewer. The most widespread solution to this problem relies on a transition zone to smooth the switch: in this special zone the images of both LODs are blended using a transparency channel. However, this method presents the disadvantage of displaying the geometry twice for a given lapse of time. Lastly, geometric LOD has physical limits, in the sense that it is not possible to represent a human being properly with fewer than a few hundred polygons. For example, if we consider textured bounding boxes as the lowest resolution, the number of triangles for our model still amounts to about two hundred in all. Such a representation allows a scene made up of dozens of virtual actors to be refreshed at a high frame rate - typically 20 Hz - but to the detriment of the visual quality, which drops to a level that is barely acceptable. For all that, LOD remains a very efficient technique that can be used without difficulty in conjunction with impostors.
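A sketch of distance-based LOD selection with a hysteresis band to damp the popping discussed above; the distances and band width are illustrative:

# Switch-in distances (meters) per LOD, finest first.
LOD_DISTANCES = [5.0, 15.0, 40.0]    # beyond the last value -> coarsest/impostor
HYSTERESIS = 0.1                     # 10% band around each threshold

def select_lod(distance, current_lod):
    """Return the LOD index for this frame, resisting switches near a threshold."""
    target = len(LOD_DISTANCES)
    for i, d in enumerate(LOD_DISTANCES):
        if distance < d:
            target = i
            break
    if target == current_lod:
        return current_lod
    # Only switch if we are clearly past the boundary between the two LODs.
    boundary = LOD_DISTANCES[min(current_lod, target)]
    if abs(distance - boundary) > HYSTERESIS * boundary:
        return target
    return current_lod

lod = select_lod(distance=15.5, current_lod=1)   # stays at LOD 1: too close to 15 m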


4 Animated Impostors for Articulated Characters

As first defined by Maciel et al. [8], an impostor is in essence some very simple geometry that manages to fool the viewer. As is traditionally the case in the existing literature, what we mean by impostor is a set of transparent polygons onto which we map a meaningful, opaque image. More specifically, our impostor corresponding to a virtual human is a simple textured plane that rotates to face the viewer continuously. The image or texture that is mapped onto this plane is merely a "snapshot" of the virtual human. Under these conditions, if we take for granted that the picture of the virtual human can be re-used over several frames, then we have virtually decreased the polygon complexity of a human character to a single plane (Fig. 4).

Fig. 4. A Brazilian football player and its impostor

4.1 Impostor Refreshment Approach

The texture that is mapped onto the transparent plane still needs to be refreshed from time to time because of the virtual human's mobility or camera motion. Whenever the texture needs to be updated, a snapshot of the virtual human is taken. This process is done in three steps:

1. Set up an off-screen buffer that will receive the snapshot
2. Place the virtual human in front of the camera in the right posture
3. Render the actor and copy the result into texture memory

The first stage typically comes down to clearing the buffer. The buffer should be a part of the frame buffer so as to benefit from hardware acceleration. The purpose of the second step is to set up the proper view to take a picture: first, we let the actor strike the right pose depending on the joint values; second, it is moved in front of the camera or, alternatively, the camera itself is moved. Note that the latter is preferable because the skeleton hierarchy remains unchanged in this case, which may save some


time. In the last stage, the virtual human is rendered and the resulting image is copied into texture memory. Once the texture has been generated, all that remains is to render the textured billboard to have a virtual human on screen. The whole process is hardly any slower than rendering the actual 3D geometry of the virtual human, for the following reasons. Setting up the off-screen buffer can be done once and for all in a pre-processing step; it chiefly consists of adjusting the viewing frustum to the virtual human. Furthermore, this buffer can be re-used to generate several textures of different actors, provided that they have approximately the same size. Clearing the buffer is definitely not a costly operation. Letting the virtual human assume the right posture and rendering it would also have to be carried out if the real geometry were used instead. Finally, if the hardware features frame-buffer texturing, the frame buffer to texture memory copy takes almost no time; if not, clearing the off-screen buffer and transferring its content remain inexpensive because the buffer size is small, typically 128 by 128 pixels or less. As a consequence, even in the worst cases (very high texture refreshment rates), impostors prove not to be slower than rendering the actual 3D geometry.

4.2 Impostor Viewing and Projection

For the sake of clarity, in the following we will use texture windows to mean the off-screen buffers where textures of virtual humans are generated, and scene window to mean the window where the final, global scene with impostors is rendered.


Drawing a represents a top view of a scene containing two articulated characters. Each character has its own coordinate system. The subjective camera coordinate system defines the user's viewpoint and corresponds to the camera in the scene window. All coordinate systems are expressed with respect to the world.


Drawing b shows a top view of the same scene rendered with impostors. Each character's geometry is replaced with a single textured plane. Each plane is oriented so that its normal vector points back to the eye, i.e., the origin of the subjective camera coordinate system.


Drawing c: impostors viewing and projection. Drawing d: building the viewing transformation, in which the orthographic camera is placed at To = Ta - D.Vo and oriented by Ro = (Vo x Us, Vo, Uo), the subjective camera orientation being Rs = (Hs, Vs, Us).

Drawing c explains how the textures mapped onto the planes in drawing b are generated, i.e., how snapshots of virtual humans are taken. In the drawing there are two orthographic camera coordinate systems, corresponding to the cameras in the texture windows associated with each articulated character. The viewing direction Vo of the orthographic camera is determined by the eye's position and a fixed point (later referred to as Ta) of the articulated character, as will be explained. Drawing d is a more detailed version of the previous drawing. It mainly shows how the orthographic camera coordinate system is constructed. Vo is the unit vector derived from Ta and the position of the subjective camera. The two other unit vectors that give the orthographic camera's orientation Ro are determined as follows: the first vector, Ho, is the cross product of Vo and Us, where Us is the normalized up vector of the subjective camera's frame; the cross product of Ho and Vo then yields Uo. Note that this method ensures we obtain a correct frame (Ho ≠ 0) as long as the field of view is smaller than a half sphere.
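The construction of the orthographic camera frame translates directly into a few cross products. The following Python/numpy sketch mirrors the text; the function names are ours:

import numpy as np

def ortho_camera_frame(eye, ta, us):
    # eye: position of the subjective camera; ta: fixed point of the body;
    # us: normalized up vector Us of the subjective camera's frame.
    vo = ta - eye
    vo = vo / np.linalg.norm(vo)      # viewing direction Vo
    ho = np.cross(vo, us)             # Ho = Vo x Us (degenerate if parallel)
    ho = ho / np.linalg.norm(ho)
    uo = np.cross(ho, vo)             # Uo = Ho x Vo
    return ho, vo, uo                 # orientation Ro = (Ho, Vo, Uo)

def ortho_camera_position(ta, vo, d):
    return ta - d * vo                # To = Ta - D.Vo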

Finally, the camera of the texture window (termed the orthographic camera so far) must be positioned so that the virtual human is always entirely visible and of a constant global size when we take a snapshot. We proceed in two steps to do so. First, an orthographic projection is used because it incurs no distortion, and actual sizes of objects are maintained when they are projected. Second, a fixed point named Ta is chosen in the body (it corresponds to the spine base) so that it is always projected onto the middle of the texture window. In practice, Ta is determined to be the "middle" point of the virtual human assuming a standing posture in which its arms and hands are fully stretched above its head; note that this posture is the one in which the virtual human's apparent size is maximal. The orthographic camera is placed at a distance D from Ta (D is a scalar). During the simulation, the camera is then moved to To = Ta - D.Vo, hence Ta's projection necessarily coincides with the center of the texture window. On account of the orthographic projection, D does not actually influence the virtual human's projection size. On the other hand, the viewing frustum does; consequently, it must be carefully chosen so that no parts of the virtual character are culled away. This is done in a pre-processing stage in which the viewing frustum's dimensions (in


the texture window) are set to the maximal size of the virtual human (i.e., with arms raised above its head).

4.3 Several Texture Resolutions

All of this work was carried out on Silicon Graphics workstations using the Performer graphics toolkit. Because of hardware/software restrictions concerning textures, texture windows have dimensions that are powers of two. Several texture windows of decreasing size can be associated with every virtual human. As the virtual human moves farther away, which leads to a smaller image-space size in the scene window, a smaller texture window can be used, thus reducing texture memory consumption. In practice, we allocated 256x256, 128x128, 64x64 and 32x32 pixel texture windows for a simulation running at a 1280x1024 resolution; the largest windows were only used for close-up views. As always, it can be a bit difficult to strike the right balance between texture memory consumption and visual realism. In addition to better managing texture memory, using several texture windows of decreasing size also helps to reduce visual artifacts. As a matter of fact, large textures of virtual humans that are mapped onto very small (far) planes might shimmer and flash as the impostors move. When several texture resolutions are employed, these artifacts tend to disappear because the appropriate texture resolution is selected according to the impostor's image-space size (in pixels). In a certain way this mimics the well-known effect obtained with mip-mapping. Finally, when a texture is generated, the appropriate LOD of the geometry can be chosen based on the texture window size.

4.4 Main Current Limitation

Replacing a polygonal model of a virtual human with a single textured plane may introduce visibility problems: depth values of the texels are unlikely to match those of the actual geometry, which may lead to incorrect visibility, as illustrated in figure 5. We did not address this issue in this paper.

Fig. 5. Incorrect visibility


5 Image Cache Duration

For static objects, camera motion is the one and only factor to take into consideration for cache invalidation [9,11]. The algorithm that decides whether a snapshot of a virtual human is stale or not is obviously a bit more complex. However, that algorithm has to execute very quickly because it must be performed for every virtual human at each frame. In our approach we distinguish two main factors: the self-deformation of the virtual actor, and its global motion with respect to the camera.

5.1 Virtual Humans as Self-Deforming Objects

Virtual humans can be considered as self-deforming objects in the sense that they do not keep a static shape, i.e., they can take different postures. The basic idea for a cache invalidation algorithm is that the viewer need not see every posture of the virtual humans to fully understand what action they are performing: a few key postures are often meaningful enough. We propose a simple algorithm that reflects the idea of subsampling the motion. The idea is to test distance variations between some pre-selected points in the skeleton; using this scheme, the virtual human is re-rendered if and only if the posture has changed significantly, as sketched below. Once we have updated the skeleton hierarchy, we have direct access to the joints' positions in world space. It is therefore quite straightforward to compute distances (or rather squared distances, to avoid unnecessary square roots) between some particular joints. In concrete terms, we compare four distances (Fig. 6) with those stored when the texture was last generated. As soon as the variation exceeds a certain threshold, the texture is to be re-generated. Of course, the thresholds depend on the precision the viewer demands. Furthermore, length variations are weighted with the distance to the viewer, to decrease the texture refreshment rate as the virtual human moves away from the viewpoint.
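In code, the posture test amounts to comparing a handful of squared distances against cached values. A minimal Python sketch; the joint pairs below are hypothetical, since the paper does not name the four tested distances:

import numpy as np

# Illustrative joint pairs; the actual four distances are shown in Fig. 6.
PAIRS = (("l_wrist", "r_wrist"), ("l_ankle", "r_ankle"),
         ("l_wrist", "spine_base"), ("r_wrist", "spine_base"))

def posture_changed(joints, cached_d2, threshold):
    # joints: joint name -> world-space position; cached_d2: squared
    # distances stored when the texture was last generated.
    for k, (a, b) in enumerate(PAIRS):
        d2 = float(np.sum((joints[a] - joints[b]) ** 2))
        if abs(d2 - cached_d2[k]) > threshold * cached_d2[k]:
            return True               # posture changed significantly
    return False

The weighting by viewer distance mentioned above would simply scale threshold before the comparison.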

Fig. 6. Posture variations to test

We found out that four tests suffice to reflect any significant change in the virtual human’s posture. Nevertheless, other tests should be performed if the simulation is meant to underscore some peculiar actions e.g. grasping of an object requires additional testing of the hand motion. Similarly, it might be necessary to increase or decrease the number of tests when using non-human characters. As a rule the number of limbs gives the number of tests to make.


5.2 Relative Motion of the Actor in the Camera Frame

Instead of testing camera motion and the actor's orientation independently, we have come up with a simple algorithm that checks both in a single test. The algorithm's main idea stems from the fact that every virtual human is always seen under a certain viewing angle, which varies during the simulation whether because of camera motion or actor motion. Yet, there is basically no need to know what factor actually caused the variation. In practice we test the variation of a "view" matrix, which corresponds to the transformation under which the viewer sees the virtual human. This matrix is plainly the product of the subjective camera matrix (camera in the scene window) and that of the articulated character. The cache invalidation algorithm then runs in pseudo-code as follows:

For every virtual human at frame 0          /* Initialization stage */
    Generate texture
    Store view matrix
End for

For every virtual human at frame N > 0      /* Simulation */
    Compute new view matrix
    Compute view variation matrix M (from the previously
        stored view matrix to the newly computed one)
    Compute the amplitude of the corresponding axis/angle
    If angle > threshold
        Generate texture
        Store current view matrix
    End if
End for

Building the axis-angle representation of a rotation matrix M consists in finding its equivalent rotation axis as well as the angle to rotate about it [16]. Finally, the threshold that indicates when the texture is to be regenerated can once again be weighted with the Euclidean distance from the impostor to the viewer.
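With rotation matrices, the axis/angle amplitude can be read off the trace, so the test in the pseudo-code above costs little. A Python sketch; the way the distance weighting is applied is our assumption:

import numpy as np

def view_variation_angle(r_old, r_new):
    # r_old, r_new: 3x3 rotation parts of the cached and current view
    # matrices; for a rotation matrix, the inverse is the transpose, and
    # trace(R) = 1 + 2 cos(angle).
    m = r_new @ r_old.T               # variation from old view to new view
    c = np.clip((np.trace(m) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))

def texture_stale(r_old, r_new, threshold_deg, dist, ref_dist=1.0):
    # Farther impostors tolerate larger variations before a refresh.
    return view_variation_angle(r_old, r_new) > threshold_deg * (dist / ref_dist)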

6 The Human Flow Test Bed

Our work on impostors originates from earlier research on human crowd simulation that was carried out in our laboratory. However, simulating a human crowd introduces many parameters that alter the frame rate results, so we preferred to use a simpler environment in order to assess reliably the gain of impostors over geometry. In our simulation, twenty walking virtual humans keep circling. They all move at different yet constant speeds, along circles of various sizes (Fig. 7). A fast walking engine handles the motion of the actors, collisions between characters are not detected, and every articulated character always lies in the field of vision, so that visibility culling does not play a role.


The first graph shows the influence of the posture variation thresholds used in the texture cache invalidation mechanism; the other factor, the viewing angle, was deactivated throughout this test. Along the horizontal axis are marked the posture variation thresholds beyond which a texture is re-generated, while the vertical axis shows the amount of time required for rendering the complete scene, normalized with respect to the amount of time needed to render the whole scene using the real geometry. When the threshold is set to zero, every change in posture triggers a re-computation of the texture, in which case the rendering time logically exceeds the reference rendering time. Note, though, that the difference is only a marginal 15%, which clearly shows that impostors are hardly slower than the actual geometry even in the worst cases. Rendering time plummets when the variation threshold is increased: thresholds of 40%, 60% and 80% cut the rendering time by factors of two, three and five respectively. In practice there is no noticeable difference in the animation of the actors as long as the threshold is smaller than 15%. However, it makes perfect sense to set the threshold to a much higher limit (between 40% and 80%) because the motion remains absolutely understandable. On the other hand, beyond 90% it becomes hard to grasp that the actors are walking.

Fig. 7. Simulating two football teams (left: impostors, right: real geometry)

Graph 1: Rendering speedup as a function of the posture variation threshold; the vertical axis is the normalized rendering time (geometry = 1), the horizontal axis the posture variation threshold (0% to 100%).

Graph 2: Rendering speedup as a function of the viewing angle variation threshold; the vertical axis is the normalized rendering time (geometry = 1), the horizontal axis the viewing angle variation threshold (0 to 40 degrees).


The second test focuses on the impact of the viewing angle on the rendering time. As previously, the vertical axis is normalized with respect to a reference rendering time (that needed to render the whole scene with the actual geometry), and on the horizontal axis are marked the viewing angle thresholds in degrees. As in the first test, we disabled the other factor in the cache invalidation mechanism. The critical threshold is reached for a viewing angle variation of only two degrees. Viewing angle thresholds of 5, 8 and 17 degrees cut the rendering time by factors of two, three and five respectively. We consider that there is no real degradation of the animation up to 20 degrees; refreshment of the actors' texture becomes too obvious beyond 30 degrees.

7 Conclusion

This paper shows that image-based rendering can be applied successfully to virtual humans. In particular, we have explained how to generate a texture representing a virtual human, and we have proposed an image cache invalidation algorithm that works for any articulated character and executes reasonably fast. Two major issues, which were beyond the scope of this paper, could be addressed in future work:

1. Our texture refreshment algorithm performs on an individual basis. From the application point of view, the image cache invalidation mechanism should also consider the virtual characters as a whole. Above all, this would help achieve a quasi-constant frame rate, which is often regarded in virtual reality applications as more important than a high peak frame rate.

2. Because depth information is lost when complex 3D geometry is replaced with an impostor, especially with a plane, visibility may not be correctly resolved. Schaufler recently showed how to correct the visibility by directly modifying the depth value for every texel [17]. However, faster techniques could be investigated in the specific case of virtual humans. For example, the concrete problem depicted in figure 5 could be solved by decomposing the impostor's plane into several planes, e.g. one for each major body part.

Acknowledgements We wish to thank Soraia Raupp Musse for preliminary tests on the use of billboards for the representation of virtual humans. Part of this work has been sponsored by the European ACTS project COVEN.


References
1. Capin, T.K., Pandzic, I., Noser, H., Magnenat-Thalmann, N., Thalmann, D.: Virtual Human Representation and Communication in VLNET. In: IEEE Computer Graphics and Applications, Vol. 17, No. 2 (March-April 1997) 42-53
2. Regan, M., Post, R.: Priority Rendering with a Virtual Reality Address Recalculation Pipeline. In: Computer Graphics (SIGGRAPH '94 Proceedings) 155-162
3. Luebke, D., Georges, C.: Portals and Mirrors: Simple, Fast Evaluation of Potentially Visible Sets. In: 1995 Symposium on Interactive 3D Graphics 105-106
4. Heckbert, P., Garland, M.: Multiresolution Modelling for Fast Rendering. In: Proceedings of Graphics Interface '94, 43-50
5. Popovic, J., Hoppe, H.: Progressive Simplicial Complexes. In: Computer Graphics (SIGGRAPH '97 Proceedings), 217-224
6. Garland, M., Heckbert, P.S.: Surface Simplification Using Quadric Error Metrics. In: Computer Graphics (SIGGRAPH '97 Proceedings), 209-216
7. Torborg, J., Kajiya, J.T.: Talisman: Commodity Real-time 3D Graphics for the PC. In: Computer Graphics (SIGGRAPH '96 Proceedings), 353-363
8. Maciel, P.W.C., Shirley, P.: Visual Navigation of Large Environments Using Textured Clusters. In: 1995 Symposium on Interactive 3D Graphics, 95-102
9. Schaufler, G., Stürzlinger, W.: A Three Dimensional Image Cache for Virtual Reality. In: Proceedings of Eurographics '96, C-227-C-234
10. Shade, J., Lischinski, D., Salesin, D.H., DeRose, T., Snyder, J.: Hierarchical Image Caching for Accelerated Walkthroughs of Complex Environments. In: Computer Graphics (SIGGRAPH '96 Proceedings), 75-82
11. Schaufler, G.: Exploiting Frame to Frame Coherence in a Virtual Reality System. In: Proceedings of VRAIS '96, Santa Cruz, California (April 1996), 95-102
12. Boulic, R., Capin, T.K., Huang, Z., Kalra, P., Lintermann, B., Magnenat-Thalmann, N., Moccozet, L., Molet, T., Pandzic, I., Saar, K., Schmitt, A., Shen, J., Thalmann, D.: The HUMANOID Environment for Interactive Animation of Multiple Deformable Human Characters. In: Proceedings of Eurographics '95, Maastricht (August 1995) 337-348
13. Shen, J., Chauvineau, E., Thalmann, D.: Fast Realistic Human Body Deformations for Animation and VR Applications. In: Computer Graphics International '96, Pohang, Korea (June 1996) 166-173
14. Pratt, D.R., Pratt, S.M., Barham, P.T., Barker, R.E., Waldrop, M.S., Ehlert, J.F., Chrislip, J.F.: Humans in Large-scale, Networked Virtual Environments. In: Presence, Vol. 6, No. 5 (October 1997) 547-564
15. Funkhouser, T.A., Séquin, C.H.: Adaptive Display Algorithm for Interactive Frame Rates During Visualization of Complex Virtual Environments. In: Computer Graphics (SIGGRAPH '93 Proceedings), 247-254
16. Pique, M.E.: Converting Between Matrix and Axis-Amount Representations. In: Graphics Gems (Vol. 1), Academic Press, 466-467
17. Schaufler, G.: Nailboards: A Rendering Primitive for Image Caching in Dynamic Scenes. In: Proceedings of the 8th Eurographics Workshop '97, St. Etienne, France (June 16-18, 1997), 151-162
18. Sillion, F., Drettakis, G., Bodelet, B.: Efficient Impostor Manipulation for Real-time Visualization of Urban Scenery. In: Proceedings of Eurographics '97 (1997) C-207-C-217

Can We Define Virtual Reality? The MRIC Model

Didier Verna and Alain Grumbach

ENST, Département Informatique, 46, rue Barrault, 75634 Paris cedex 13, France {verna, grumbach}@inf.enst.fr This work benefits from the financial support of DRET/DGA

Abstract. In this paper, we propose a reasoning model aimed at helping to decide on the virtual status of a given situation, from a human point of view rather than from a technological one. We first describe how a human and his environment interact; the notion of "reality" will be seen through this description. Then, we propose a set of possible "cognitive deviations" of reality leading to situations of virtual reality. This model provides three major benefits to the field of Virtual Reality: first, a global definition and a systematic means of categorizing related situations; secondly, the ability to discuss the virtual status of real situations, and not only synthetic, computer-generated ones; thirdly, a demonstration of how the field of Tele-Operation is heavily related to virtual reality concepts, and some perspectives on future tele-operation intelligent user interfaces.

1 Introduction

Regarding the constant growth of activity in the field of virtual reality, it is important to notice that 1/ although virtual reality systems are designed for use by humans, the technological side is much more advanced than the studies on the cognitive aspects, and 2/ the field of virtual reality lacks a general agreement on the terms to use and the meaning to give to them. From a precise description of how a human being and his environment interact, we propose a model called MRIC¹, which describes "reality" as a set of interactions involving a human being and his environment, and a set of possible "cognitive deviations" allowing real situations to become virtual. Provided with this model, we are able to decide whether a given situation (even a real one) can be labeled virtual reality. Furthermore, this model shows that the field of tele-operation is actually heavily connected to virtual reality, and it gives us perspectives on what could be the future intelligent user interfaces for tele-operation.

¹ MRIC is a French acronym for "Model Representing Cognitive Interactions"



2 Reality as a Set of Cognitive Interactions

2.1 The MRIC-r Descriptive Model

The MRIC-r model describes the interactions between a human being, called the operator in the following, and his environment, in terms of agents.

Model Agents. Two top-level agents are distinguished: a Mental Agent and a Physical Agent. The Physical Agent is subdivided into an Operating Agent, the means of action and perception (typically the human body), and an External Agent, the environment itself. The Mental Agent is subdivided into a Control Agent, which holds knowledge, awareness, intelligence and intentions, and a Transitional Agent, which mediates between the mental and the physical levels. Actions flow from the Control Agent towards the External Agent along the paths A_CT, A_TP and A_PE, while perceptions flow back along the paths P_EP, P_PT and P_TC (Fig. 1).

Fig. 1. The MRIC-r model: information processing modules (the A and P boxes) sit on the information paths A_CT, A_TP, A_PE and P_EP, P_PT, P_TC connecting the agents; information storage areas hold knowledge, awareness, intelligence and intentions (I) on the mental side, and the environment state (E) on the external side

Information Processing. Each information path traverses an information processing module, whose task is mainly devoted to translating the output of the previous agent into input for the next one. For example, an intention of walking is translated into a sequence of A_CT → A_TP → A_PE transformations and nerve impulses, which in turn produce body motion. Light emitted by an object produces optical nerve activity, and the resulting P_EP → P_PT → P_TC activity is in turn interpreted as the "presence" of the object.

Notion of Interaction. In addition to these straight connections, figure 1 shows four kinds of loops in the information paths. The first represents the operator's decision process: for instance, P_TC → A_CT, where the operator sees an object and decides to grab it, maybe to use it. The second represents the action of the environment: for instance, A_PE → P_EP, where the image of our own body colliding with an object prevents us from moving the way we want. The third represents the proprioceptive feedback: for instance, A_TP → P_PT, where independently from the environment itself we know our body orientation in space. The fourth represents reflex actions, that is, actions undertaken P_PT → A_TP without any control of the will. An MRIC interaction means in particular that two agents in interaction must be connected back to each other through information processing modules, the information sent back depending partly on the information received. This is why we say that the Operating Agent interacts with the External Agent; better, we should say that they are in mutual action.

2.2 Notion of Reality

Trying to define "reality" is not our purpose here; it would be more of a philosophical matter. Instead, we will state that reality can be seen through the model we have just described: in other words, we consider that reality is what the MRIC-r model describes. This point of view brings out two important aspects of the notion. Agent Classes. Each agent in the model is specialized by a type, an internal architecture and a particular set of connections with the other agents; such features define the agent classes. Agent Instances. For each agent class, several instances can be provided without altering the model's integrity. For instance, a human body and a robot could be two different instances of an Operating Agent.

3 What is Virtual Reality?

3.1 Sources of Virtuality

We propose a dual interpretation of the expression "virtual reality", leading us to the following definitions. First, we consider the terms separately. The term "reality" is to be understood as described in the previous section, that is, through the MRIC-r model. A virtual reality is then a reality which looks like the real one to the operator, but which is not actually the real one. Virtualizing reality hence means replacing or modifying one or several components of the real situation, but still preserving some common behavior. Such virtualization processes are called Type 1 Virtuality Sources. In a dual approach, we state that the expression "virtual reality" cannot be split. From this point of view, the virtualization process no longer aims at imitating the real components of the situation, but rather at proposing totally new ones, behaving differently and in particular not the usual way. Such virtualization processes are called Type 2 Virtuality Sources.

3.2 Type 1 Virtuality Sources

Augmented Reality. One common interpretation of the term augmented reality means mixing real and virtual objects in the same scene; for instance, [4] is a system designed to include virtual objects onto live video in real time. The purpose is also to allow participants to interact with these objects as well as with the real environment. This kind of virtualization process complies with our definition of type 1 virtuality sources: the resulting situation is not exactly the real one, but is an imitation of a real environment (the virtual objects could perfectly well be real). Moreover, the aim being to allow interaction with the virtual objects too, the resulting environment is expected to behave the same way a real environment would. The situation is thus obtained by adding new objects that behave the same way real objects would, hence preserving some common behavior.

Artificial Reality. Artificial reality is often referred to as the ability to create worlds that do not exist, such as statistical worlds, and to display them like 3D objects. Following this terminology, the two categories of Virtual Reality applications would be 1/ simulating the real world, and 2/ creating such abstract worlds. This classification is not relevant according to our cognitive vision of the concept. Representing abstract data is something older than virtual reality technologies; what is now permitted with these technologies is to navigate in such worlds. However, this kind of virtualization process also falls into the category of type 1 virtuality sources: such a virtual statistical world could exist as a real world, for instance if we built a physical set of cubes representing the statistical values and put them together in a room. From a cognitive point of view, artificial reality actually consists of imitating a special case of real environment. We are therefore just in front of another case of type 1 virtualization process.

3.3 Type 2 Virtuality Sources

Augmented Reality. Previously, we spoke of augmented reality in terms of adding virtual objects to a real scene. We now consider a second aspect: adding virtual behavior to the real scene. In a recent paper from Nagao [7], an augmented reality system gives the operator "... the feeling of talking to the object itself ...". When the operator looks at a book, he is not only given written information about the book, but also the ability to talk directly to the book and ask it a few more questions. This example of augmented reality corresponds exactly to what we call a type 2 virtualization process: the environment is not just imitated, its behavior is modified. More and more similar processes are found in current systems, e.g. in computer-aided surgery, where for instance the operator is given the ability to see through the patient's skull and skin thanks to a virtual image of the zone being operated on.

Released Reality. The general purpose of augmented reality is, namely, to augment objects, abilities, perception, information etc. Similarly, the counterpart of augmenting abilities is diminishing disabilities. This means that another possibility for type 2 virtualization processes lies in the ability to "release" real-world constraints, for instance the ability to traverse time or to be freed from the laws of physics. Several such releases can already be found in systems known as augmented reality systems; we prefer the expression "released reality". Some examples: in VRML (Virtual Reality Modeling Language) viewers, the user can enable/disable collision detection and toggle walk/fly navigation modes. Such options, like going through walls or flying, correspond to releasing several physical constraints. Similarly, in several virtual reality video games or virtual museum visiting systems, the user can jump directly from one room to another, releasing this way the temporal constraint of movement. For such type 2 virtualization processes, the real behavior is not imitated but modified.

3.4 Definition of Virtual Reality

To be able to speak of virtual reality, having some virtuality sources in a given situation is necessary but not sufficient. Our model of reality features a set of agents, each one belonging to its own class. In our point of view, the term "reality" is essentially associated with the notion of agent class; this means that all worlds, virtual or not, should follow the model structure and its pattern of agent classes. Similarly, the term "virtual" has more to do with the agent instances. A virtual reality would then be a reality that conforms with the MRIC model, but with virtual agents instead of real ones. These considerations lead us to propose the following definition. Take a situation V to be examined with the MRIC model:

V will be called a situation of virtual reality if and only if one or more of its agents differ from the corresponding real ones, and if all such agents (then called "virtual agents") can be obtained from real ones thanks to virtuality sources of type 1 and/or 2.

Since in any case the agent classes are required to stay the same, the topography of the connections between the agents must be preserved as well. As a consequence, the four types of interaction described earlier are necessary components of any virtual reality system. This point of view, although our acception of the term "interaction" is specialized, is commonly agreed on.
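Read as a decision procedure, this definition can be sketched compactly; the encoding below is hypothetical, since the paper states the rule only in prose:

from dataclasses import dataclass

@dataclass
class Agent:
    cls: str          # "control", "transitional", "operating" or "external"
    is_real: bool     # is the instance the corresponding real one?
    source_type: int  # 0 = none, 1 or 2 = virtuality source used

def is_virtual_reality(agents, topography_preserved):
    # At least one agent must differ from the real one, every differing agent
    # must come from a type 1 and/or 2 virtuality source, and the topography
    # of the connections (hence the four interaction loops) must be preserved.
    if not topography_preserved:
        return False
    virtual = [a for a in agents if not a.is_real]
    return bool(virtual) and all(a.source_type in (1, 2) for a in virtual)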

4 Immersion and Virtual Reality

Among all the concepts and notions involved in current virtual reality systems, that of "immersion" is the most widely used. In this section, we show how immersion accords with our definition of virtual reality, and how our model can lead us to decide on the virtual status of real worlds too.

4.1 Immersion and MRIC

Overview. The most well-known case of immersion is the one in which the operator evolves in a computer-generated world thanks to data suits, head-mounted displays (HMDs), data gloves etc., and feels physically present in this virtual environment. In such a situation, the human body, an Operating Agent, is in contact with a new External Agent imitating the real environment. In order to keep a coherent physical feeling of presence, ideal immersive systems should consequently disconnect the Mental Agent from its real environment, but still handle a perfect model of the human body and its "mutual action" with the simulated environment (physical constraints, collisions, visual body feedback ...). Seen from the MRIC point of view, the Mental Agent should be completely disconnected from its own Physical Agent and linked to a completely virtual one instead. This ideal situation can be modeled as shown in figure 2: the virtual situation is obtained by "deviating" the information paths A_TP and P_PT towards an imitation of the real Physical Agent. This modelization complies with our definition of virtual reality, since the whole Physical Agent is a virtual one (it differs from the real one) and this agent is obtained thanks to type 1 virtuality sources. Let us stress that we are not suggesting (and we do not think) that it could be possible to separate the mind from the body. But when a system provides an efficient immersive feeling to the operator, and for instance produces realistic synthetic images of the body, then the operator becomes likely to "believe" the images he sees and to accept the feedback he gets as the real ones. This is what our model represents.

Fig. 2. MRIC-i: model of immersion. The Mental Agent (Control and Transitional Agents) is connected through the A_TP and P_PT paths to a virtual world containing an imitated Physical Agent (Operating and External Agents, with their A_PE and P_EP paths)

Limitations. At this time, current immersive systems are far from being ideal, because they break two important information paths of the MRIC-i model and do not completely disconnect the operator from his own environment. In a virtual reality game, the player is in contact with the real ground even if he believes it is the one he sees; this situation is acceptable only as long as the virtual ground corresponds to the real one. In this situation, the information paths comprising A_PE and P_EP are not completely deviated, as required by our definition of virtual reality. Situations in which the proprioceptive feedback is important, as in flight simulation, where rendering the state of gravity or acceleration is difficult to achieve, could prevent a realistic rendering: the A_TP, P_PT information loop is currently broken. This states that any virtual reality system should handle properly all interaction components, including proprioception, which complies with the point of view of Quéau [8]. Yet, the MRIC reasoning model is a theoretical one, and we cannot oppose technological arguments to a theoretical model. Therefore, we think that current simulation systems evolve towards virtual reality systems, but are kept imperfect due to technological constraints.

4.2 Can Real Worlds be Virtual?

We have only spoken of synthetic environments until now. An important feature of our model is to give us the ability to extend the reflection to real worlds too. In the next paragraphs, we will explain why and how real worlds can be virtual under certain circumstances, and again we will position the traditionally ambiguous case of augmented reality.


Real Immersion. In the previous section, we dealt with immersion in synthetic environments ("virtual immersion"). Now, we would like to show that the synthetic aspect of the simulation is not necessary to have an immersive virtual reality system. Consider a situation of immersion in a synthetic environment which aims at reproducing the exact reality, say a guided tour in a virtual museum: the operator can navigate in the rooms (but still collides with the walls) and look at the paintings. Now consider a situation in which an operator is given exactly the same interface (head-mounted display, joystick ...) but is interacting with a real (distant) museum. The point is that, from the operator's point of view, the situation remains the same. This real situation can be called "virtual reality" because, like in the simulated case, we provide a completely new Physical Agent (a real one instead of a synthetic one this time) with its own Operating and External Agents. This new agent is indeed virtual, because none of the actual Physical Agent components is supposed to perceive it. Moreover, since this virtual Physical Agent is actually a real one, it falls in the category of type 1 virtuality sources. Since both the A_TP and P_PT information paths are deviated the same way, the model presented in figure 2 is still a valid representation of this situation. This is why we think that the distinction synthetic / real is not relevant to label a situation "virtual reality": cognitively speaking, virtual reality must not be restricted to computer simulations. This point of view differs from the one of Burdea [1] and Quéau [8], for whom a virtual reality system must be based on computer graphics before anything else.

Augmented Reality. Augmented reality situations are interesting because they often mix real and synthetic components. Since we just saw that both real and synthetic scenes can, under certain circumstances, give virtual reality situations, the question of the virtual status of augmented reality arises. We think that this status depends on the situation. If the operator is not in direct contact with the real scene (such as tele-operating a distant robot, or visiting a distant museum), the situation may actually be a mix of real and synthetic immersion, both obtained thanks to type 1 virtuality sources; then it is acceptable to label the situation virtual reality. It would actually be more appropriate to speak of Augmented Virtual Reality, since real immersion alone is already a situation of virtual reality, as we just saw. If the operator is in direct contact with the real scene, and the goal is not to change this (like evolving with translucent glasses which superimpose objects onto the real scene), we can no longer talk of immersion, since the operator is explicitly required to stay in contact with his real environment. This fact hence does not comply with our definition of virtual reality: we can then speak of augmented reality, but not of virtual reality.

5 Examples

To illustrate further how we can use the MRIC model to decide on the virtual reality status of a given situation, let us give the outlines of the "virtual reasoning" conducted on two contradictory examples.

5.1 Example 1

In this example [2], an operator is immersed in a virtual nuclear plant and is given the ability to navigate in the plant and perform several operations. The corresponding real situation is the one in which the operator acts in a similar real nuclear plant. With respect to the MRIC-r model, the new agents are as follows. The Control Agent remains the same as in the real situation: the operator still decides what to do, and controls the virtual body. With a little training (less training meaning a better system), the operator will learn to use the joystick by reflex to move around. This implies that the Transitional Agent also remains the same as in the real situation, once the system interface is perfectly assimilated by the operator; moving around in the virtual environment should indeed become as obvious as walking in a real room. The External Agent becomes the nuclear plant computer model; this is an imitation of a real environment, so we are in front of a type 1 virtualization process. The Operating Agent becomes the human body computer model; although this model is not perfect (e.g. only the operator's hand is visually represented), the ultimate goal is clearly to imitate the operator's body, which is again a type 1 virtualization process. We have now described the agents that remain the corresponding real ones, along with the new agents involved in the situation. Let us check the validity of these agents against the topological specification imposed by the MRIC model. The operator is given both the ability to move his virtual body and to perceive it (at least the hand), respecting the A_TP and P_PT information paths. The new Operating Agent has the ability to grab objects in the virtual plant, and collision detection is implemented, correctly handling the A_PE and P_EP information paths. We thus have a situation in which a new (virtual) Physical Agent is provided; this agent and its interactions with the Mental Agent conform with the MRIC model requirements. Moreover, this agent is obtained thanks to type 1 virtuality sources. We are therefore in front of a situation of virtual reality, and more precisely a situation of "virtual immersion".

5.2 Example 2

Let us take again the example proposed by Nagao [7]: a user within a library, equipped with translucent glasses, looks at a book through them and can at any time verbally interact with it directly.


Note that there is no other corresponding real situation: the system does not actually provide any new agents, since the operator is explicitly required to stay in his own environment. We can already conclude that this situation would not be called virtual reality, as our definition specifies that at least one of the model agents should be replaced, which is not the case here. However, this example illustrates the fact that we can have virtuality sources in a situation without it being a virtual reality situation properly speaking. Indeed, giving the "feeling of talking to the book itself" constitutes a type 2 virtuality source: from the operator's point of view, an object (the book) is virtualized by modifying its normal behavior. This situation belongs to the category of augmented reality (type 2) but, according to our definition, not to virtual reality systems.

6 Virtual Reality and Tele-Operation

In this section, we demonstrate that the fields of tele-operation and virtual reality are strongly related, and we show how our model can bring out several interesting perspectives on what could be future interfaces for tele-operation systems. We call "tele-operation" a situation in which an operator is remotely working on a distant² world, for instance using artificial tools to act on it, and a monitor to watch the manipulation device (e.g. a robot) and receive feedback from it.

6.1 Sources of Virtuality in Tele-Operation

Immersion. It is important to notice that more and more tele-operation systems use virtual reality interfaces at the perception level. Instead of using screens to display the distant scene, tele-operators are provided with HMDs allowing them to get a partial view of the scene, and even to move their point of view. In such systems, the idea is actually to provide an immersive environment to the operator. Giving the operator the feeling of being present in the distant scene makes such systems fall into the category of virtual reality systems, because they actually implement cases of real immersion, which we proved to be cases of virtual reality. This constitutes the first relation between the fields of virtual reality and tele-operation.

Augmented/Released Reality. The process of virtualizing tele-operation situations does not stop at the immersion level. Many recent applications have shown the utility of augmented reality concepts (such as providing symbolic information superimposed on the visual field) for tele-operators. However, the concepts of augmented reality extend farther than just the perception level. The main difference between real operation and simulated operation is that in real operation errors are usually irreversible; this constraint has led to developing training systems as well as autonomous (and hopefully robust) robots.

² What we call "distance" here means spatial distance as well as temporal distance.

When using training systems, the tele-operator is actually given a way to reverse time in case of a mistake. This idea falls into the category of released reality and type 2 virtualization processes.

6.2 Virtualizing the Mental Agent

Until now, we have demonstrated how virtualization processes can affect the External Agent, the Operating Agent, and at most the whole Physical Agent. Logically, the question arises of whether virtualizing the other agents of the model makes sense. This question does make sense if considered relatively to the field of tele-operation.

Virtual Control Agent. By adding autonomous decision processes in tele-operation systems, we enter a new level of virtuality: we actually provide a new (and virtual) Control Agent in the system, collaborating with the operator. From a MRIC point of view, this is the first time that a virtuality source deals with the Mental Agent itself. The operator actually gets the feeling of controlling not only an Operating Agent, but an Operating Agent augmented with a minimum of intelligence, thus modifying the common behavior of real Operating Agents. This is a case of (type 2) augmented virtual reality: not only by perception, not only by action, but by reflex.

Virtual Transmutation. Until now, we considered two main situations: immersion, in which the operator is manipulating the environment with "his own hands", and tele-operation, in which the environment is modified indirectly through a robot's directives. In this latter case, an important problem is to provide systems that are easy to learn, with training periods as short as possible. As we previously stated, as long as the operator does not act directly with his own body, or with an isomorphic synthetic avatar as in an immersive situation, a period of adaptation is necessary to learn to use the interface (joystick, data gloves ...) without having to think about it. With such constraints, the ultimate goal should be to provide devices for which there would be no more training period, where the operator could give orders to the tele-operation system only by moving his own body. Consider a remote robot capable of drilling and screwing operations. The operator is given a synthetic vision of his arm superimposed on the image of the real scene; this way, (s)he does not actually notice the presence of the robot, but instead gets the feeling of working with his own hands. When a drilling operation is needed, the operator just points a finger to the drilling zone, and the robot immediately takes a real drill and operates as requested (grabbing the drill may not be displayed, since it is not the essential part of the operation). Similarly, virtually grabbing a screw and pulling it up to the hole would make the robot leave the drill, grab a screw and perform the operation directly. What is interesting in this zero-training interface is that the operator only needs to know the natural movements he has been doing all his life. Everything occurs as if the robot became the operator's new Operating Agent.


This virtualization process is a true "virtual transmutation" for the operator: since substituting a robot for the human body is a type 2 virtualization process, the behavior of the Operating Agent is completely changed.

Fig. 3. MRIC-d: intentions detection (Mental Agent: Control and Transitional Agents; Physical Agent: Operating and External Agents; information paths P_TC, P_PT, P_EP and A_CT, A_TP, A_PE)

Intentions Detection. In an even more ultimate view, the ideal situation is the one in which the operator does not have to work any more! This means that, if we want the operator to keep control over the events, the aim of the system would be to detect as fast as possible the operator's intentions, and proceed automatically. While the very futuristic view of such a situation is mental communication between the operator and the system, this goal can still be approached with an "intentions detection" system [9] minimizing the physical work of the operator. For instance, given the presence of an object in the environment, and given that the operator is currently moving his arm towards the object, an intention of grabbing the object becomes more and more likely, and could in turn be detected. In terms of deviations from the MRIC model, the system communicates with the operator at the highest possible semantic level, that is, directly with the Control Agent. This situation is modeled in figure 3, and represents the highest level of virtuality attainable while keeping an operator in the situation.

7 Conclusion

In this paper, we built a cognitive model of interaction through which the concept of reality is defined. We proposed the notion of "virtuality sources", aimed at modifying real components of a situation in order to make them become virtual.


The notion of virtual reality was then defined as the virtualization of one or more agents of the model. We think that our vision of virtual reality has three major benefits. First, the model provides a systematic way of deciding on the virtual status of any particular situation; this will hopefully help clarify the notion and its related concepts. Second, this categorization process is not limited to synthetic worlds, but also extends to real worlds. Third, our model demonstrated to which extent the fields of tele-operation and virtual reality are connected, and provided several interesting perspectives on future intelligent tele-operation interfaces. In our opinion, the main distinctions between different virtual reality situations do not lie in the synthetic or real aspect of the situation, but rather in the virtuality sources involved. While type 1 virtuality sources are mainly related to the notion of immersion, it seems that type 2 virtuality sources would be far more helpful in tele-operation systems. However, the illustrations cited along this paper tend to show that type 1 virtuality sources are currently much more developed than type 2. An in-depth reflection on the notion of assistance in tele-operation [10] may be of some help to correct this imbalance.

References
1. Burdea, G., Coiffet, P.: La Réalité Virtuelle. Hermès, Paris (1993)
2. Fertey, G., Delpy, T., Lapierre, M., Thibault, G.: An Industrial Application of Virtual Reality: an Aid for Designing Maintenance Operations in Nuclear Plants. In: Proceedings of L'interface des mondes Réels et Virtuels, Montpellier, France (1995) 151-162
3. Grumbach, A., Verna, D.: Assistance cognitive à la téléopération en monde virtuel. In: Proceedings of Journées Nationales Réalité Virtuelle, GT-RV, GDR-PRC, Toulouse, France (1996) 38-46
4. Kansy, K., Berlage, T., Schmitgen, G., Wikirchen, P.: Real-Time Integration of Synthetic Computer Graphics into Live Video Scenes. In: Proceedings of L'interface des mondes Réels et Virtuels, Montpellier, France (1995) 151-162
5. Mazeau, J.-P., Bryche, X.: Un espace interactif dispersé destiné à supporter des rencontres sportives. In: Proceedings of L'interface des mondes Réels et Virtuels, Montpellier, France (1995) 189-198
6. Mellor, J.-P.: Enhanced Reality Visualization in a Surgical Environment. M.I.T. Technical Report No. 1544
7. Nagao, K., Rekimoto, J.: Ubiquitous Talker: Spoken Language Interaction with Real World Objects (1995)
8. Quéau, P.: Le Virtuel. Champ Vallon, Seyssel (1993)
9. Verna, D.: Téléopération et Réalité Virtuelle: Assistance à l'opérateur par modélisation cognitive de ses intentions. In: Proceedings of IHM'97, Poitiers, France (1997)
10. Verna, D.: Sémantique et Catégorisation de l'Assistance en Réalité Virtuelle. In: Proceedings of Journées Nationales Réalité Virtuelle, GT-RV, GDR-PRC, Issy-les-Moulineaux, France (1998)

Distortion in Distributed Virtual Environments Matthew D. Ryan, Paul M. Sharkey Interactive Systems Research Group Department of Cybernetics, The University of Reading, Whiteknights, Reading RG6 6AY, UK, tel: +44 (0) 118 9875 123, fax: +44 (0) 118 9318 220 {M.D.Ryan, P.M.Sharkey}@reading.ac.uk http://www.cyber.reading.ac.uk/ISRG/ Abstract. This paper proposes a solution to the problems associated with network latency within distributed virtual environments. It begins by discussing the advantages and disadvantages of synchronous and asynchronous distributed models, in the areas of user and object representation and user-to-user interaction. By introducing a hybrid solution, which utilises the concept of a causal surface, the advantages of both synchronous and asynchronous models are combined. Object distortion is a characteristic feature of the hybrid system, and this is proposed as a solution which facilitates dynamic real-time user collaboration. The final section covers implementation details, with reference to a prototype system available from the Internet. Keywords: distributed virtual environments, network latency, collaboration, interaction, multi-user, virtual reality.

1 Introduction

Within a distributed virtual environment, users interact with objects and state change events are generated. These events are normally transmitted to the other users within the virtual environment across a network. The timing of events can be managed using either synchronised or asynchronous clocks.

1.1 No Clock Synchronisation (Asynchronous)

In this model, the environment contains no uniform, global time. Each host has a local clock, and the evolution of each event is described with respect to this local time. Because there is no global time, all event messages are sent without reference to when they occurred, and so events are replayed as soon as a user receives them. In this way, users always perceive remote users as they were some time in the past.

1.2 Global Time (Synchronous)

When using synchronous time, as events are transmitted they are time-stamped according to a global time that is synchronised between all users [1]. When events are received by a user, they are replayed at the correct global time. This usually requires



the model to be rolled back to the correct time of the event – time must then be advanced quickly to the current time, to allow real-time local interaction.
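A minimal sketch of synchronous replay in Python, with rollback simplified away; the names, event handling and structure are our assumptions, not the prototype's API:

import heapq, itertools

class SynchronousReplayer:
    def __init__(self, state):
        self.state = state      # the locally simulated world state
        self.pending = []       # min-heap of (global timestamp, seq, event)
        self._seq = itertools.count()   # tie-breaker for equal timestamps

    def receive(self, timestamp, event):
        heapq.heappush(self.pending, (timestamp, next(self._seq), event))

    def advance(self, now):
        # Replay every event whose global time has been reached; an event
        # stamped earlier than already-applied ones would require rolling
        # the model back and re-advancing it, which is elided here.
        while self.pending and self.pending[0][0] <= now:
            t, _, event = heapq.heappop(self.pending)
            event(self.state, t)
        return self.state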

1.3 Network Latency The network connecting users within the virtual environment induces latency of information interchange, which causes problems in the areas of user and object representation. The following two sections and Fig. 1 describe the problems, and suggest that different time management techniques are best for the two areas. Time t=1

View seen by remote user

0

1

2

3

x

X is moving along a path {z=0,x=t}. Xa (modelled using asynchronous time) is seen delayed by 1 second, Xs (modelled using synchronous time) is consistent with X.

z

t=2 0

1

2

3

x

z

t=3

0 z

1

2

3

x

Local user input causes X to follow a new path (shown by dotted line), this message begins transit to remote user. Xa is delayed by 1 second, Xs begins to become inconsistent with X. Message about path change is received by remote user. X still follows new path. Xa still delayed by 1 second, so now the new path can be started without any discontinuity. For the time t=[2,3] Xs is still following linear path, so is inconsistent with X. Now Xs has to discontinuously jump to X.

Fig. 1. User positions as seen by a remote user. X is the local user, Xa is the user modeled asynchronous time, Xs is the user modeled using synchronous time. The network delay,  , is 1 second

1.4 Asynchronous Time for User Representation

As Fig. 1 shows, user representations will appear discontinuous if a synchronous model is used. The asynchronous model, however, does not lead to discontinuities and is therefore advantageous for user representation.

1.5 Synchronous Time for Object Representation

If asynchronous clocks are used, a local user will always see the past state of remote users and objects, as Fig. 1 shows. The model of objects at the local site will therefore be different from the model at the remote site. If objects appear in different places to different users, the users cannot interact with those objects. The only


solution to this problem is to use synchronised time to model objects, so that the positions of objects and the interaction responses are consistent for all users.

1.6 Proposed Solution

The technique described in this paper proposes a hybrid synchronous/asynchronous model, combining the advantages of each method. A global time must be maintained between all participants to ensure consistency and to allow interaction between users and objects. As in an asynchronous model, remote users are displayed at a previous time to ensure that no discontinuities arise. The technique employs the causal surface, which facilitates collaborative interaction through the use of object distortion.

2 The Causal Surface

In the 3½-D perception filter [2][3], objects are delayed by different amounts as they move around space.

Asynchronous model for remote users. Remote users are shown delayed by τ to remove discontinuities, as in an asynchronous model. Objects are delayed by τ in the local user's view when coincident with the remote user; therefore, objects that are being interacted with by a remote user are displayed at the same time as that user, and the interaction is viewed without discontinuity.

Synchronous model for local user/object interaction. Objects are displayed in real time close to the local user to allow interaction in real-time.

The variable delay is achieved by generating a causal surface function, τ = S(x,y,z), that relates delay to spatial position [4]. This surface is smooth and continuous to avoid object position/velocity discontinuities. Fig. 2 shows an example surface, generated using radial basis functions [2].

Fig. 2. The causal surface divides space-time into the various causal regions. This figure represents a local user, A, at the position (0,0), and three remote users (the peaks on the surface). The vertical heights at positions coincident with each user represent the time it takes interactions from the local user to propagate to that remote user. For example, the surface has a value of τ = 1.5 at user B, showing that it takes 1.5 seconds for information to propagate between A and B


In space-time it is impossible for information to reach a point under the causal surface, so the surface represents the bound on the causality of events occurring at the local user. This causality restriction applies to all information in the environment, thereby placing limits on the speeds of both objects and users.

3 Distortion of Objects

3.1 Distortion in the Real World

In the real world, every event takes time to propagate and affect remote objects. Einstein proposed that all physical phenomena are limited by the speed of light. This critical speed limits the propagation of all causal influences, such as changes in gravitational and magnetic forces [5]. Although the speed of light limits all interactions, the nature of some interactions results in an effective critical speed less than the speed of light; examples of apparently slower critical speeds are the speed of sound, or the speed of wave propagation along a spring. Generally, it can be said that all interactions are limited by a critical rate of propagation, although the critical rate may not be constant and depends on the type of signal and on the material being propagated through. Fig. 3 shows how a flexible object results in a critical speed.


Fig. 3. Distortion of a flexible object. Two people, A and B, are holding either end of the object, and A is moving the end back and forth at frequency f. When A moves to the left (a), B is initially unaware of that movement. Some time later (b), the wave reaches B and B is forced to move left. If A now moves to the right (c), it will be some time later (d) before B feels this motion

The critical rate can be defined as a propagation time τ - the time it takes information to propagate through the object. From standard control theory, the propagation time will result in a maximum frequency of interaction, fR, above which instability occurs, where

fR = 1/(4τ) .                                   (1)

For example, a propagation time of τ = 0.25 s gives fR = 1 Hz. As the frequency of interaction approaches this critical frequency (in the absence of any visual feedback), each user's motions will be out of phase, leading to resonance and making co-operative collaboration impossible. A very good example of resonance can be seen if two people try to use a skipping rope without viewing each other (this object has very low stiffness and hence a discernible propagation delay): by looking only at their hands, it is possible to initiate a low frequency of rotation, but it is impossible for higher frequencies. Brief


experimentation showed that standing waves occurred at multiples of the resonant frequency (an expected result from control theory [6]).

3.2 Distortion in the Virtual World

The delay associated with the propagation of force along objects in the real world is derived from the length and stiffness of the material. In the virtual world, however, the delay is derived from the network. The propagation of force as waves of distortion through objects in the real world is therefore analogous to the propagation of event messages through the network. By showing the force propagation through objects in the form of distortion, users are made aware of the delay that is present, and can adapt to the maximum achievable rate of interaction.

3.3 Collaborative Interaction

As a user changes the state of objects within their vicinity, those state changes will be reflected immediately on the parts of the object close to that user. As the state change messages are transmitted across the network, the state changes initiated by the user will be seen to propagate along the object. When the state changes reach the remote user, the remote user's interaction is taken into consideration, and the resultant actions are determined [7]. These actions are continually sent as state changes back along the object (and also the network) to reach the local user again. This interaction occurs as a dynamic closed-loop feedback system.

4 Implementation 4.1 Object States Each object in the view seen by a local user will be displayed at a time t−τ, obtained from the causal surface function τ = S(x,y,z). The causal surface is continuous and dynamically changes as users move and as network delays vary, such that the value of the causal surface is always greater than the actual network delay at each user. Given that τ is bounded within the range [0, max{S}], all the states of each virtual object must be defined over the range [t−max{S}, t], where t is the current time. The trajectory of each object state can be implemented as a function or set of n functions:

X(t) = fi(t),   ti ≤ t < ti+1   (2)

for i = 1, 2, ... n, where ti is the start time of the i-th function, and t1 ≤ (t−max{S}). The functions fi(t) are stored in a temporally ordered list of functions, the most recent function being held at the head of the list. A function fi(t) can be removed from the list if a function fj(t) exists in the list such that tj < (t−max{S}). Every defined parameter of each object (the states of the objects) must be specified over the interval [t−max{S}, t]. The states of an object may include, for


example, position, orientation, colour, temperature, but all types of interaction available to the users must be defined parametrically. A separate list of functions is used for each object state. 4.2 Virtual States The previous sections have concentrated on distortion of shape. Object states define all dynamic parameters of the object, including all the states that can be influenced by interaction. Therefore, as each point on the object is delayed by a different amount, all dynamic states of the object will change across its surface. For example, if the temperature of an object is increasing over time, a delayed part of the object will be at a lower temperature than a part closer to the (local) heat source. 4.3 Demonstration Program A simple demonstration program, showing the use of the causal surface, is available from the ISRG website. The program allows two users to interact with objects within the same world. A simulated communication channel between the users allows variable network latencies.
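A minimal sketch of the temporally ordered function list of Section 4.1, assuming linear pieces and a single scalar state for brevity; the class and method names (StateTrajectory, addPiece, sample) are ours, not the authors':

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of one object-state trajectory: a temporally ordered
 * list of functions f_i(t), each valid from its start time t_i up to the
 * start of the next function (equation (2)).
 */
public class StateTrajectory {
    /** One piece: value = a + b*(t - t0), valid from time t0 onwards. */
    static final class Piece {
        final double t0, a, b;
        Piece(double t0, double a, double b) { this.t0 = t0; this.a = a; this.b = b; }
        double eval(double t) { return a + b * (t - t0); }
    }

    private final List<Piece> pieces = new ArrayList<>(); // index 0 = most recent
    private final double maxS;                            // max{S}, the largest delay

    StateTrajectory(double maxS) { this.maxS = maxS; }

    void addPiece(double now, double a, double b) {
        pieces.add(0, new Piece(now, a, b));
        // f_j may be removed once a newer piece starts before now - max{S}:
        // no causal surface can request a state that old.
        while (pieces.size() >= 2
               && pieces.get(pieces.size() - 2).t0 < now - maxS) {
            pieces.remove(pieces.size() - 1);
        }
    }

    /** Sample the state at the delayed time t - tau, with tau = S(x,y,z). */
    double sample(double t, double tau) {
        double query = t - tau;
        for (Piece p : pieces)               // head of the list = most recent
            if (p.t0 <= query) return p.eval(query);
        return pieces.get(pieces.size() - 1).eval(query);
    }

    public static void main(String[] args) {
        StateTrajectory x = new StateTrajectory(0.5); // max delay 0.5 s
        x.addPiece(0.0, 0.0, 1.0);                    // moving at 1 unit/s
        x.addPiece(2.0, 2.0, -1.0);                   // reverses at t = 2
        System.out.println(x.sample(2.2, 0.4));       // state 0.4 s in the past
    }
}
```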

5 Conclusion This paper discusses an alternative to prediction as a solution for many of the problems associated with network latency within distributed virtual environments. Remote users, and objects close to them, are viewed by local users as they were some time in the past, thereby removing state discontinuities and allowing visualisation of remote interaction. Objects close to local users are displayed in real time, such that interaction is possible. Objects are delayed by a smoothly varying amount between the local and remote users. Distortion occurs due to the varying delay, and this has strong parallels in the real world – state changes are propagated through objects at a finite rate. In the virtual world, this rate is derived from the network delay, and the distortion allows visualisation of information transfer to allow real-time dynamic interaction. Acknowledgement. This work is supported by a Postgraduate Studentship Award from the UK's Engineering and Physical Sciences Research Council. References 1. Roberts, D. J., Sharkey, P. M.: Maximising Concurrency and Scalability in a Consistent, Causal, Distributed Virtual Reality System, Whilst Minimising the Effect of Network Delays. Proc. IEEE 6th Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE '97), Boston (Jun. 1997)


2. Sharkey, P. M., Ryan, M. D., Roberts, D. J.: A Local Perception Filter for Distributed Virtual Environments. Proc. IEEE VRAIS '98, Atlanta, GA (Mar. 1998) 242-249
3. Sharkey, P. M., Roberts, D. J., Sandoz, P. D., Cruse, R.: A Single-user Perception Model for Multi-user Distributed Virtual Environments. Proc. Int. Workshop on Collaborative Virtual Environments, Nottingham, UK (Jun. 1996)
4. Ryan, M. D., Sharkey, P. M.: Causal Volumes in Distributed Virtual Reality. Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, Orlando, FL (Oct. 1997) 1064-1072
5. Ellis, G. F. R., Williams, R. M.: Flat and Curved Space-Times. Clarendon Press, Oxford (1988)
6. Franklin, G. F., Powell, J. D., Emami-Naeini, A.: Feedback Control of Dynamic Systems. Addison-Wesley (1986)
7. Broll, W.: Interacting in Distributed Collaborative Virtual Environments. Proc. IEEE VRAIS '95, North Carolina (Mar. 1995)

VRML Based Behaviour Database Editor John F. Richardson SPAWAR SYSTEMS CENTER San Diego, Code D44202, 53560 Hull Street, San Diego, California 92152, USA [email protected]

Abstract. This paper describes a set of models and related data elements that represent physical properties needed by the simulation algorithms in a virtual environment. This paper also describes a VRML based editor called the SimVRML editor. The data elements represent physical properties used by common models that perform general simulation within virtual environments. The data element class definitions described in this paper can be used by model algorithms and model classes related to the following simulation categories: Vehicle operations, ship and submarine operations, aircraft operations, sensor operations (electronic and visual), communications, role playing, combat operations, industrial, chemical and biological. The SimVRML editor is a virtual environment for editing simulation properties used within general virtual environments that support physical behaviors.

Introduction This paper describes research into data element definitions and model related objects useful for Virtual Reality simulations using the VRML (Virtual Reality Modeling Language) specification. This paper also describes a VRML based user interface for editing static and dynamic state information for VRML based simulation via the Internet, or any network in general. The concept of VRML simulation is commonly referred to as VRML behaviors. The VRML 2 event model, routes, and script nodes, in combination with high level programming languages, allows for the use of arbitrary simulation algorithms and models inside a VRML virtual environment. The high level languages predominantly used for VRML simulation are Java, Javascript, C++, and C. The VR system for editing and updating simulation static and dynamic state information is the SimVRML editor. The SimVRML editor provides a virtual environment for assigning values to physical properties that are used in VR simulation models and algorithms. The definitions of the data elements operated on by the SimVRML editor are designed from an object oriented perspective and based upon Java as the VRML


scripting language. The data element definitions and model objects can be directly mapped to Javascript or C++ data structures. Previous research on physical properties for distributed network simulation required by general model algorithms has included the DIS (Distributed Interactive Simulation) protocol [1]. VRML research related to DIS is being carried out by the DIS-VRML-JAVA working group associated with the VRML Architecture Group. An alternative to DIS based VRML is the virtual reality transfer protocol [2]. The research presented in this paper is about VRML simulations that are completely generic. It provides an extensible framework for the editing of physically based properties used by multifunctional sets of model algorithms.

Data Definitions for VR Simulation This research is based upon the following metaphor. Physically based simulations require models and algorithms to implement the simulation. Large numbers of models exist whose algorithms involve physical properties that are well understood, standardized, and can be approximated by a Level Of Detail (LOD) paradigm. Such a "simulation LOD" paradigm allows VR simulation models to use approximate algorithms to manage the computational complexity of simulations within a virtual environment. Managing simulation complexity is critical when immersed inside a VRML-based virtual environment that includes multi-user behaviors. Consider the following example of the simulation LOD paradigm applied to a VR/VE or VRML sensor simulation. In the case of a radar sensor simulation, the coarsest simulation LOD involves the solution of the generic radar equation. The next finer LOD for radar simulation involves a more physically based model of ground clutter, weather, and multipath effects. Finer radar model LODs can be generated by considering more realistic models of system losses, electromagnetic interference, aspect-dependent cross sections, antenna directivity, statistical techniques, and even hostile actions among role playing fanatics. Descriptions of the equations for various radar model LODs and their associated model properties can be found in various sources [3, 4, 5]. These various radar model LODs require well defined properties that need to be initialized to realistic values and updated as the VE simulation progresses. The SimVRML editor provides a VRML based interface that addresses the need to initialize and update VR/VE physical properties. These properties involve sensor models along with a set of models that provide for the simulation of common behaviors in a VR/VE/VRML universe. The models that have properties defined for the SimVRML editor described in this paper are models for vehicles, ships, subs and aircraft motion, along with models for communications, sensors, role playing, combat operations, industrial, chemical and


biological. The data element definitions for the properties simulated in the above models are defined as a series of Java classes. The base class for the SimVRML editor deals with the variables and methods needed to manage data extraction, insertion, descriptive names for the JDBC objects, and pointers to default geometries and model algorithms for a set of user defined LODs. The SimVRML editor has a class that deals with the variables and methods needed to manage data extraction and insertion of dynamic data such as simulation time, pointers to the methods and models used in the simulation, along with status, position, motion parameters, and other common simulation housekeeping data. The SimVRML editor operates on many common objects that are hierarchical in structure. Such hierarchical objects consist of data elements describing physical properties and also data elements that describe equipment that in turn are described by data elements within other subclasses. Thus, a Vehicle object may have a radio object and an engine object as components of its hierarchical structure. The SimVRML base class and its subclasses have a public variable that is an enumerated type. The enumerations are simply ASCII descriptions (less than 33 characters) of the various properties that are shared by the common objects that are simulated via the above models. There are also methods that are used to extract and insert valuations for the properties in the SimVRML base class and its subclasses. Using derived classes and virtual functions, and providing an API for the JDBC fields, the SimVRML editor can be made extensible. Thus, a bigger and better model algorithm that needs more sophisticated properties can be "plugged" into the SimVRML editor. Other designers could then just publish their enumerations and "get and put" virtual functions. Subclasses of the SimVRML base class consist of classes related to physical properties for the following object categories: hierarchical high level objects like aircraft, ships and submarines, vehicles, and people. Low level object categories of the SimVRML editor are sensors, weapons, facilities (architectural objects), machinery, structural elements of objects, computer hardware and software, chemical, and biological. There is also an interaction class that is used by the engagement model and also to keep track of static properties that can be applied to current behavior interactions between objects. The SimVRML base class has common physical properties that are used by all the model algorithms.
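The class structure just described might look roughly as follows. This is a hypothetical sketch, not the paper's code: the property descriptions, accessor names, and use of a map are our illustrative choices.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of the SimVRML base-class idea: properties are
 * identified by short ASCII descriptions (under 33 characters) and read
 * or written through generic get/put accessors, so that user-defined
 * subclasses can publish their own enumerations and accessors.
 */
public class SimObject {
    /** ASCII property descriptions playing the role of the enumerated type. */
    public static final String AVERAGE_POWER = "average power (W)";
    public static final String CROSS_SECTION = "radar cross section (m2)";
    public static final String ANTENNA_GAIN  = "antenna gain (dB)";

    private final Map<String, Double> properties = new HashMap<>();

    /** "put" accessor: insert or update a property valuation. */
    public void put(String description, double value) {
        if (description.length() > 32)
            throw new IllegalArgumentException("description too long");
        properties.put(description, value);
    }

    /** "get" accessor: extract a property valuation (0 if undefined). */
    public double get(String description) {
        return properties.getOrDefault(description, 0.0);
    }
}

/** A subclass adds its own enumerations without changing the editor. */
class RadarObject extends SimObject {
    public static final String PULSE_WIDTH = "pulse width (us)";
}
```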

Model Related Data / Implementation for VR Simulations In the SimVRML editor system, the number and sophistication of the properties in the enumeration public variable in the various classes are driven by the sophistication of the model algorithms. The simulation LOD for the SimVRML system is different for the various models.
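As a concrete illustration of the coarsest radar LOD described in the previous section, the sketch below evaluates a textbook form of the generic radar range equation (as found in standard references such as Skolnik [3]); the parameter set and the example values are illustrative, not taken from the SimVRML database.

```java
/**
 * Coarsest radar LOD: the generic radar range equation,
 * SNR = Pt G^2 lambda^2 sigma / ((4 pi)^3 R^4 k T0 B Fn L).
 */
public class RadarLod0 {
    static final double K_BOLTZMANN = 1.380649e-23; // J/K
    static final double T0 = 290.0;                 // standard temperature, K

    /** Signal-to-noise ratio for a single pulse (linear, not dB). */
    static double snr(double peakPowerW, double gain, double wavelengthM,
                      double crossSectionM2, double rangeM,
                      double bandwidthHz, double noiseFigure, double losses) {
        double numerator = peakPowerW * gain * gain
                * wavelengthM * wavelengthM * crossSectionM2;
        double denominator = Math.pow(4.0 * Math.PI, 3) * Math.pow(rangeM, 4)
                * K_BOLTZMANN * T0 * bandwidthHz * noiseFigure * losses;
        return numerator / denominator;
    }

    public static void main(String[] args) {
        // 1 MW peak power, 35 dB gain, 10 cm wavelength, 1 m2 target, 50 km.
        double s = snr(1e6, Math.pow(10, 3.5), 0.10, 1.0, 50e3, 1e6, 3.0, 2.0);
        System.out.printf("single-pulse SNR = %.1f dB%n", 10 * Math.log10(s));
    }
}
```

Finer LODs would deaggregate the lumped loss term into clutter, interference, and multipath contributions, as discussed below.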


The sensor models and the SimVRML sensor classes representing the various sensor properties have a highly realistic LOD. The sensor model requires properties related to the advanced radar, active and passive sonar, jamming, electro-optical, and lidar equations. The radar model requires properties for the basic radar equation, plus properties related to advanced features including electromagnetic interference, ground clutter, antenna configuration, variable cross section, countermeasures, and multipath effects. The radar model produces a signal to noise ratio that is mapped, via a discretization of probability of detection curves indexed by false alarm rate and number of pulses illuminating the detected object, to a detection probability. Physical properties are needed for all the input parameters of the various LODs of the radar equation. Some radar equation parameters for low level radar calculations lump multiple physical functions into one parameter and approximate the parameter's physical value. Realistic radar calculations for high level LOD radar models deaggregate the lumped physical properties for advanced radar model features. These high level radar models require appropriate properties for input into the radar algorithms. The sonar model requires properties for input into the active and passive sonar equations, sonobuoys, towed arrays, and variable depth sonars [6]. Sonar calculations depend upon the marine environment as much as they depend upon the sonar equipment's physical properties. For sonar calculations, the environment is divided into 3 sound propagation zones. Environment effects due to transmission / reverberation loss properties are arrays of values for various ocean conditions and target position relative to the 3 sound propagation zones. The simplest sonar model LOD has table lookup properties for good, fair, and bad ocean environments. Sonar classification models require individual target noise properties for subcomponents of an object. Likely subcomponents are machinery (primarily engines and propellers), underwater sensors (especially active and towed), and architectural structures (hull). Target noise properties are required for various speed values. Probability of detection is calculated for individual noise components by dividing the calculated signal to noise ratio by a standard deviation of a normal distribution related to the noise component and then comparing to a normal distribution with mean 1. The primary problem in sonar modeling is generating reasonable physical values for sonar environment and target noise. The basic visibility model operates on object detectability properties. Sufficient weather effects are built into the VRML specification for very simple visual calculations. An advanced visibility model based upon electro-optical algorithms [7] requires calculation of the number of sensor pixels encompassing the detected object. Detection probability is generated via an exponential probability distribution that can be discretized into a table lookup. The table lookup function of the model is based upon optical signal to noise thresholds for the various categories of objects in the virtual environment. If an infrared or FLIR model is needed, then the thresholds require temperature dependence factors. The communications model has a moderately realistic LOD. The communications model requires properties related to the basic communications and jamming equations, which are similar to the radar equations without a cross section


factor. Properties related to advanced communications features that depend upon clutter, antenna configuration, interference, multipath, and terrain are similar to those properties needed by the radar equations to calculate similar radar effects. The communications network model has a low LOD to calculate path delays. Path delays are calculated by assuming that each delay point behaves according to an M/M/1 single server queue with exponentially distributed service rate and interarrival time [8]. Properties needed by the communications network models are mostly dynamic properties related to compatibility of network nodes and communications equipment operational status. Such properties reside in the classes used for simulation housekeeping. The motion model is a low to medium LOD model that requires properties related to basic ballistics, navigation, and navigation error, along with maximum and minimum properties (weight, speed, altitude…). The flight, ship, and submarine operations models are similar to the queuing and probabilistic elements of the communications network model but require interaction with the motion model, fuel consumption properties, and specific motion constraints. Fuel consumption algorithms require speed and fuel factors for minimum fuel use and also fuel factors to interpolate fuel use at speeds other than the minimum fuel usage velocity. Ship and submarine operations are similar to aircraft flight operations but with different constraints. Motion constraints can be thought of as a set of rules depending on the type of simulation object and the model applied to the object. Navigation errors are drawn from a normal distribution with mean zero and standard deviation specific to the navigational sensor object. The motion model and all the other models require system failure calculations. For failure calculations, we have equation (1), where A is the mean time before failure and B is the probability of failure. If B = 0 then the failure time is the current time plus some appropriately large constant.

Failure time = A [Log(1 − rand(x)) / Log(1 − B)] + current time   (1)
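Equation (1) translates directly into code; the sketch below treats rand(x) as a uniform draw in [0,1), which is our assumption, and the "appropriately large constant" is an arbitrary illustrative value.

```java
import java.util.Random;

/** Sketch of the failure-time calculation of equation (1). */
public class FailureModel {
    private static final Random RNG = new Random();
    private static final double LARGE_CONSTANT = 1e9; // "appropriately large"

    /**
     * @param a mean time before failure
     * @param b probability of failure (0 disables failure)
     * @param currentTime simulation time now
     */
    static double failureTime(double a, double b, double currentTime) {
        if (b == 0.0) return currentTime + LARGE_CONSTANT;
        return a * (Math.log(1.0 - RNG.nextDouble()) / Math.log(1.0 - b))
               + currentTime;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++)
            System.out.println(failureTime(100.0, 0.05, 0.0));
    }
}
```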

The engagement model is a combat operations model. The engagement model requires properties related to engagement probabilities (failure, detection, acquisition, hit) and equipment characteristics. Air related engagements are based upon simple probabilistic equations. Other properties related to air engagements and air to ground engagements are sensor activation time, sensor scanning arc, lists of target sensors, lists of targets, and weapon lifetime. Ground based direct fire engagements are related to a discretization of the Lanchester equations [9] based upon lookup tables. The lookup tables are indexed based upon the characteristics of the objects engaging. Ground engagements between hierarchical objects, such as groups of vehicles, are indexed based upon group characteristics. The results of the lookup table searches are passed to a simple probabilistic model. Ground based indirect fire, which is engagement between objects separated farther than a user specified threshold, is modeled via a probabilistic model. Indirect fire assets such as artillery are also included in the direct fire model.
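For reference, the Lanchester direct-fire dynamics that such lookup tables discretize can be sketched as a simple Euler step; the attrition coefficients and time step here are invented for illustration.

```java
/**
 * Sketch of the Lanchester square-law dynamics underlying direct-fire
 * lookup tables; coefficients are illustrative, not SimVRML data.
 */
public class LanchesterStep {
    static double[] step(double x, double y, double a, double b, double dt) {
        // dx/dt = -b * y : force X is attrited at a rate set by Y's strength
        // dy/dt = -a * x
        double nx = Math.max(0, x - b * y * dt);
        double ny = Math.max(0, y - a * x * dt);
        return new double[] { nx, ny };
    }

    public static void main(String[] args) {
        double[] s = { 100, 80 };                 // initial strengths
        for (int t = 0; t < 10; t++)
            s = step(s[0], s[1], 0.05, 0.04, 1.0);
        System.out.printf("after 10 steps: x=%.1f y=%.1f%n", s[0], s[1]);
    }
}
```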


Engagement damage is related to a set of probability factors for failure, deception, and repair, along with defensive and offensive characteristics. Damage factors are calculated for the following categories of an object: structural, fuel, equipment, and operational status. Damage factors are simple multipliers that are compared to thresholds that enable/disable or repair/destroy components of the object. It is very hard to generate standard properties for people. This is due to the complexity of people and the diversity of opinion on how to approximate human behavior. In the SimVRML system, basic person physical properties would be compared to environmental constraints that would be managed by a basic simulation housekeeping process. Beyond constraint checking, such social models would be relegated in the SimVRML system to managing verbal communication within the VR scene and input / output of commands within the VR simulation. In the SimVRML system, models for structural, chemical, and biological processing and any other remaining behavior functions are queuing models augmented by very low level algorithms. These low level algorithms use the current set of properties as thresholds that either disable / enable the object or switch from one geometry LOD to another. As an example, if the melting point is exceeded then the object consisting of a solid geometry would be replaced by an indexed face set of the melted object. Another example would be the failure of an object that would be used by a damage model to affect the state of other objects in the virtual environment.

Model Interaction and Object Interrelationships In order for the extendable features of the SimVRML editor to function optimally, the interaction of the model algorithms and the object interrelationships need to be defined. The level of complexity of the interrelationships of the object physical properties in an extendable and hierarchical VE generated by SimVRML class schemas is just the local complexity of the data / model interaction. If the virtual environment is distributed, or if control is shared in a local virtual environment, then several factors influence the complexity of the object / model interactions. The primary factors are time, transfer / acquisition of control, transfer / acquisition of editing rights to the properties, and control / acquisition of model algorithms and model LODs. Time synchronization depends upon agreement on how to broadcast time, when to declare a time stamp valid, and how to interpolate time stamps. Much of the preliminary agreement on time is present in the VRML 97 specification [10] as it applies to nodes and simple behaviors. In a complex VRML simulation, one method of agreement about the overall virtual environment within a distributed simulation is to adopt the rules inherent in the DIS specification. The SimVRML editor adopts a hybrid approach consisting of the


DIS rules and methods to override the rules, or disallow overriding, based upon model / object "rights". The SimVRML base class has methods to extract data sets from multiple object schemas and to set the structure of the data requirements based upon the model algorithms and model interactions. The results of the model and data interactions require that model execution, data storage, and data extraction be synchronized. One way to effect global control of various model and physical property interactions is to treat the system as though it were a parallel simulation environment. The housekeeping management scheme chosen for the virtual environment generates a set of simulation semaphores. These simulation semaphores are used to make sure that data is extracted or stored at the correct times and that models for the physical behaviors execute in a realistic sequence that produces common sense results. The SimVRML system is designed to be an extendable system with user defined models and physical properties added to a "base system" of models and physical properties. In this case there can be multiple simulation semaphore sets that can be activated or deactivated based upon virtual environment rights possessed by controlling avatars or simulation umpires. Suppose you have a VR simulation that has user defined models appended to a base collection of models and their associated physical properties, along with their defined interrelationships. If you start with a realistic simulation semaphore set that produces realistic simulation results, how do you produce a realistic simulation semaphore set for the extended VE containing the user defined models? One way to satisfy the requirement that the extended system produce realistic results is to require a sanity check on the results. The simplest form of sanity check would require the simulation state to be outside of obvious error states. Further sanity checks would compare the simulation state for conformance to test parameter limits. Such sanity checks require well defined test simulations within test virtual environments. Consider the model and physical property interactions for simple linear motion. First, consider scale. Aircraft and ship positions are best stored in geodetic coordinates, while ground motion is best served by Cartesian or Universal Transverse Mercator (UTM) coordinates. Motion inside a building is most likely to be Cartesian. Motion over several kilometers is probably best represented by UTM coordinates. Next, consider weather and environmental conditions. Object states are needed for daylight calculations. Navigation properties have to be combined with object motion characteristics. Motion requires awareness of the surrounding objects that exceeds simple collision detection. Motion requires collection of data regarding the visibility status of objects and the properties of the collecting sensors. Thus, data must be extracted from multiple object classes. Motion requires execution synchronization between the motion, navigation, and detection algorithms at the minimum. If objects are interacting in a manner that alters the properties and geometry of the interacting objects, then interaction algorithms must execute. Results of physical object interactions need to be propagated to the other motion algorithms under the control of motion semaphores.
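One minimal reading of the simulation-semaphore idea, using java.util.concurrent: the extract / execute / store phase names and the strict ordering below are our assumptions about one possible semaphore set, not the SimVRML design itself.

```java
import java.util.concurrent.Semaphore;

/**
 * Minimal sketch of a simulation-semaphore set: data extraction, model
 * execution, and data storage are forced into a fixed, realistic order
 * within each simulation cycle.
 */
public class CycleSemaphores {
    private final Semaphore extractDone = new Semaphore(0);
    private final Semaphore executeDone = new Semaphore(0);

    void extractPhase() { /* read object states */ extractDone.release(); }

    void executePhase() throws InterruptedException {
        extractDone.acquire();   // models may not run before extraction
        /* run motion, sensor, engagement models ... */
        executeDone.release();
    }

    void storePhase() throws InterruptedException {
        executeDone.acquire();   // results stored only after execution
        /* write back object states */
    }

    public static void main(String[] args) throws InterruptedException {
        CycleSemaphores cycle = new CycleSemaphores();
        new Thread(() -> cycle.extractPhase()).start();
        new Thread(() -> {
            try { cycle.executePhase(); } catch (InterruptedException ignored) {}
        }).start();
        cycle.storePhase();      // blocks until both earlier phases finish
        System.out.println("cycle complete");
    }
}
```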


One solution for the simplification of synchronization of model algorithms is to separate behaviors into those that must be event driven and those that are simulation cycle based. Cycle based algorithms can be applied to all objects in a simulation subject to predefined conditions. Processing VR behaviors via a simulation cycle and then broadcasting the results to the VR objects affected by the behaviors solves some of the problems introduced by object and model interactions. Processing via simulation cycles introduces other problems. One problem is related to a requirement that user actions should be able to trigger model execution asynchronously. Event based algorithm execution requires awareness of all other active and possibly planned events, and a set of rules and rights to umpire simulation logjams. Very simple model LODs can be implemented so that events can be reliably precomputed and queued. Complex model LODs with significant numbers of property and model interrelationships require constant updating of the event queue. Consider the detection phase of the motion calculation in the event based portion of the VR simulation. The object in motion would have an instantiation in the VR world along with an instantiation of the detector, such as eyes or a night vision goggle. There would also be an instantiation of an object that represents the model that is being executed to process the detection. The methods of the "detection model" object would extract the relevant physical properties from various object classes and spawn events to manage and localize the effects of the interactions between the models and physical properties. Users can make their own constructors / destructors and methods for model objects. Thus the user can attempt to manage the problems posed by model and property interrelationships by incorporating management of the interrelationships in the objects representing the models. One paradigm for managing the relationships between physical properties and also the simulation model algorithms is to use standard object oriented design (OOD) methodologies such as OMT [11] and Shlaer-Mellor [12]. OMT and other OOD methodologies attempt to generate correct relationships between physical properties, events, and simulation model algorithms. By applying OMT or other such OOD methodologies to the SimVRML base classes and simulation model algorithms, a reasonable management scheme for the property / model interrelationships of a baseline SimVRML system can be produced. By using OMT and other equivalent OOD methodologies, users planning to extend the baseline SimVRML system can minimize the effects of their user defined physical properties and models upon the SimVRML baseline system. Similarly, baseline rules for simulation cycle based execution within the SimVRML system can be produced by standard methods of semaphore design. The Shlaer-Mellor methodology has been analyzed mathematically to examine the question of correctness of OOD for event driven simulations [13]. The SimVRML editor can be used to initialize the state of a virtual environment containing realistic general behaviors. This initialization procedure is performed by editing the physical properties that are dynamic. The default VRML description of a virtual environment contains a wealth of geometry, position, and VR sensor information. Beyond geometry and standard VRML sensors and interpolators,


what do you put in the housekeeping classes to attempt to produce a simulation semaphore set that provides accepted results under test conditions? The housekeeping properties should include enough simulation state data to keep track of object states used by the models, simulation rights, and the defined semaphore sets. In addition to the geometry inherent in VRML nodes, these housekeeping properties keep track of simulation commands and data that can be used to map the model algorithms to the physical properties. In addition to managing data and model relationships, housekeeping properties are used to store descriptions of multi-user interactions and pointers to static physical properties for interactions. These interactions are different from the model and physical property interactions described above. Multi-user interactions describe behavior effects between objects that are actually engaging in simulated virtual behavior. One of the fundamental multi-user interactions between objects in a virtual environment is information communication. Housekeeping properties are used to track communications text and dynamic data communications within the virtual environment. One function of the housekeeping properties is to track the simulation state of speech simulations and to track spoken simulation command input. By keeping track of simulation LODs in the housekeeping properties, the correct set of static physical properties can be extracted by a generic virtual getModel method. By combining low level state information with high level interaction and model LOD state information, a virtual environment simulation can truly become more than the sum of its parts.

SimVRML Editor Interface The SimVRML editor was created using 3D Website Builder [14] and Cosmo Worlds [15]. Although there is still no integrated development environment for VRML behaviors that is affordable, acceptable components are now becoming available. 3D Website Builder is used to create the geometry of the SimVRML editor interface. Cosmo Worlds is used for polygon reduction, VR/VE/VRML sensor creation, VRML routing, and scripting within the script node. VRML scripting was done in Javascript [16, 17] and Java [18, 19]. The SimVRML editor interface is a compromise between the goals of a 3D anthropomorphic VR/VE/VRML interface and the cardinal goals of webcentric virtual reality: speed, speed, and more speed. A fully anthropomorphic virtual reality behavior editor interface could be realistically composed of 3D subcomponents constructed using thousands of polygons. For example, the average power property used by the sensor model algorithms could have a realistic model of a generator unit with clickable parts in an anthropomorphic version of the editor interface.


The design paradigm applied to the SimVRML editor interface reduces the anthropomorphic aspect of the interface to texture maps. Photo images or rendered images of 3D models representing the various simulation model properties are applied to 3D palettes constructed using simple solid primitives. For properties representing high level hierarchical subcomponents of an object, such as equipment, this texture map strategy works very well and is illustrated in Figure 1.

Fig. 1. Equipment Palette SimVRML Interface for Hierarchical Object

For more esoteric and abstract properties with no isomorphisms between anthropomorphism and algorithmic functionality, a different reduction strategy was developed. In the case of abstract properties, an anthropomorphic texture map or maps representing an aggregate of related properties was generated. Hot spots representing the individual properties of the aggregate texture map are used by the editor to edit property values. Consider the basic radar equation. Sets of texture maps of generic radar properties are placed near a texture map of a text version of the generic radar equation. The equation texture map is subdivided and overlain upon 3D primitives. Subdivision is necessary to work around browser environments that do not support the transparency features of the VRML image texture or material nodes. Since the Virtual Reality Modeling Language is extensible, extending the SimVRML editor interface by users who have added extra model properties is a simple task. Extending


the SimVRML editor interface is primarily a case of image processing and 3D geometry building. User input is through a series of knobs and sliders. This input strategy is made necessary by the lack of a keyboard sensor in the VRML specification. The 3D input interface for both sliders and knobs is similar. The interface consists of a VRML model of a slider or knob constructed from primitives and indexed face sets, along with a 3D indicator and a 3D control. 3D indicators are used to display the current value of the property. The user modifies the property values using the movable portions of the slider or knob by clicking or dragging. The user can also use the 3D control to edit property values by clicking on subcomponents of the control, such as arrows. The radar equation texture map palette is illustrated in Figure 2.
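A slider of this kind would typically be driven by routing a VRML sensor's output into a Script node. The sketch below is written against the VRML 97 Java scripting interface (JSAI) and is purely illustrative: the field names (value, minValue, maxValue, trackLength) and the eventIn name set_position are invented, not the SimVRML editor's actual interface.

```java
import vrml.Event;
import vrml.field.ConstSFVec3f;
import vrml.field.SFFloat;
import vrml.node.Script;

/**
 * Hypothetical JSAI script for a 1-D slider: a sensor's translation
 * output is mapped onto a scalar property value, clamped between the
 * editor's minValue/maxValue interface properties.
 */
public class SliderScript extends Script {
    private SFFloat value;        // exposedField SFFloat value
    private float min, max, length;

    public void initialize() {
        value  = (SFFloat) getField("value");
        min    = ((SFFloat) getField("minValue")).getValue();
        max    = ((SFFloat) getField("maxValue")).getValue();
        length = ((SFFloat) getField("trackLength")).getValue();
    }

    public void processEvent(Event e) {
        if (e.getName().equals("set_position")) {
            float x = ((ConstSFVec3f) e.getValue()).getX();
            float fraction = Math.max(0f, Math.min(1f, x / length));
            value.setValue(min + fraction * (max - min));
        }
    }
}
```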

Fig. 2. Aggregate Texture Maps for Radar Parameters in SimVRML Editor

Sliders, knobs, controls, and indicators have their own standard set of "SimVRML editor properties". These interface properties are properties for minimum / maximum input values, data type of values, current value, type / size / color of 3D text for display of the current values, scaling factors, and position / orientation relative to the slider / knob interface components. Just as the simulation models require a set of standard properties, there is a need for some sort of standard set of input properties


for virtual environments. The geometry of the input widgets is up to the input interface designer. The SimVRML editor interface implements what I consider a minimal set of input properties. Figure 3 illustrates scalar data types. Arrays and other structures could be displayed by duplicating and offsetting VRML group nodes representing the various types of scalar data types displayed via geometry primitives and VRML text. The SimVRML editor has an option for 2D operation. This option is enabled from within the 3D virtual environment and is based upon Java applets. The 2D option has a SimVRML applet for each of the various object classes. The 3D user input virtual environment is also illustrated in Figure 3.

Fig. 3. Slider and Knob Input Interface With Control (arrows) and Indicator

One of the most important considerations related to editing properties for VR simulations is the realism of the data values. The majority of properties in the subclasses of the SimVRML base class have well known published reference sources. Commercial instances of the various abstract objects are particularly well documented. Military and adventure instances of simulation objects are not well documented. Potential sources for such adventure data are user guides for various game simulators, documents published by Jane's Information Group, and manufacturers' advertising brochures, repair manuals, and user manuals. Many physical properties can be obtained through standard scientific reference works such


as those published by the Chemical Rubber Company. Fascinating and generally correct sources of physical properties are newsletters and publications of hobby organizations such as the American Radio Relay League.

6 Future Work The SimVRML editor is used to edit static characteristics for physical properties used by VRML behavior models. The next obvious extension is to create a SimVRML layout editor to operate on the variables and methods of the dynamic properties. As an example, consider a yacht with no engine, antennas, communications, or navigation equipment. Using such a layout editor, the user could drag and drop engines, radio, and other equipment onto the VRML yacht. The equipment would be from palettes containing thumbnails created by querying the VRML simulation behavior database. The model algorithms, and the methods for extracting the data needed by the algorithms, have to be implemented through a SimVRML behavior editor. There is also a need for a SimVRML command editor to dynamically input parameters into the VR behaviors. One obvious enhancement to the various model algorithms is to use the fact that a VR/VE/VRML simulation is intrinsically in 3 dimensions. The current damage model simply decides on the fate of a particular VR object. An enhanced damage model could also change the geometry of the object using the createVrmlFromURL method or by inserting and deleting nodes. Physically relevant 3-D dials and indicators would replace two-dimensional textual status boards. In addition, a set of OMT diagrams for the baseline SimVRML editor and related models has to be generated.
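Under JSAI, the proposed geometry-swapping damage enhancement could look roughly like the following hypothetical sketch; the URL, eventIn names, and damage threshold are invented for illustration.

```java
import vrml.Browser;
import vrml.Event;
import vrml.field.ConstSFFloat;
import vrml.node.Script;

/**
 * Hypothetical damage-model script: when the damage factor crosses a
 * threshold, ask the browser to build replacement geometry (e.g. the
 * melted object) and route it back into this Script's eventIn.
 */
public class DamageScript extends Script {
    private Browser browser;
    private boolean replaced = false;

    public void initialize() { browser = getBrowser(); }

    public void processEvent(Event e) {
        if (e.getName().equals("set_damage") && !replaced) {
            float damage = ((ConstSFFloat) e.getValue()).getValue();
            if (damage > 0.8f) {                     // illustrative threshold
                browser.createVrmlFromURL(
                    new String[] { "melted.wrl" },   // invented URL
                    this, "geometryLoaded");         // eventIn on this Script
                replaced = true;
            }
        }
    }
}
```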

References
1. Wicks, J., Burgess, P.: Standard for Distributed Interactive Simulation – Application Protocols. IST-CR-95-06, Institute for Simulation and Training (1995)
2. Dodsworth, C.: Digital Illusions. Addison-Wesley (1997)
3. Skolnik, M.: Radar Handbook (2nd Ed.). McGraw-Hill Publishing Company (1990)
4. Interference Notebook. RADC-TR-66-1, Rome Air Development Center (1966)
5. Wehner, D.: High-Resolution Radar (2nd Ed.). Artech House (1995)
6. Horton, J.W.: Fundamentals of Sonar (2nd Ed.). United States Naval Institute (1959)
7. Holst, G.: Electro-Optical Imaging System Performance. JCD Publishing (1995)
8. Kleinrock, L.: Queuing Systems (Volume 1). John Wiley & Sons (1975)
9. Przemieniecki, J.S.: Introduction to Mathematical Methods in Defense Analyses. American Institute of Aeronautics and Astronautics, Inc. (1990)


10. VRML 97. ISO/IEC 14772-1:1997
11. Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F., Lorensen, W.: Object-Oriented Modeling and Design. Prentice-Hall (1991)
12. Bowman, V.: Paradigm Plus Methods Manual. Platinum Technology (1996)
13. Shlaer, S., Lang, N.: Shlaer-Mellor Method: The OOA96 Report. Project Technology, Inc. (1996)
14. Scott, A.: Virtus 3-D Website Builder Users Guide. Virtus Corporation (1996)
15. Hartman, J., Vogt, W.: Cosmo Worlds 2.0 Users Guide. SGI Inc. (1998)
16. Bell, G., Carey, R.: The Annotated VRML 2.0 Reference Manual. Addison-Wesley (1997)
17. Ames, A., Nadeau, D., Moreland, J.: VRML 2.0 Sourcebook (2nd Ed.). John Wiley & Sons, Inc. (1997)
18. Lea, R., Matsuda, K., Miyashita, K.: Java for 3D and VRML Worlds. New Riders Publishing (1996)
19. Brown, G., Couch, J., Reed-Ballreich, C., Roehl, B., Rohaly, T.: Late Night VRML 2.0 with Java. Ziff-Davis Press (1997)

The Scan&Track Virtual Environment

Sudhanshu Kumar Semwal (1) and Jun Ohya (2)

(1) Department of Computer Science, University of Colorado at Colorado Springs, Colorado Springs, CO 80933-7150, USA. [email protected]
(2) Head, Department 1, ATR MIC Laboratories, ATR Media Integration and Communication Research Laboratories, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan. [email protected]

Abstract. We are developing the Scan&Track virtual environment (VE) using multiple cameras. The Scan&Track VE is based upon a new method for 3D position estimation called the active-space indexing method. In addition, we have also developed the geometric-imprints algorithm for significant points extraction from multiple camera-images. Together, the geometric-imprints algorithm and the active-space indexing method provide a promising and elegant solution to several inherent challenges facing camera-based virtual environments. Some of these are: (a) the correspondence problem across multiple camera images, (b) discriminating multiple participants in virtual environments, (c) avatars (synthetic actors) representing participants, and (d) occlusion. We also address a fundamental issue: can virtual environments be powerful enough to understand human participants?

1 Motivation

Virtual environments (VEs) pose severe restrictions on tracking algorithms due to the foremost requirement of real-time interaction. There have been several attempts to track participants in virtual environments. These can be broadly divided into two main categories: encumbering and non-encumbering [1,2,3]. A variety of encumbering tracking devices have been developed, for example acoustic [1], optical [5,6], mechanical [7,8,9,10], bio-controlled, and magnetic trackers [2,3,11]. Most of the camera-image based systems are examples of un-encumbering technology [12]. The degree of encumbrance also matters; outside-looking-in systems for optical tracking are less encumbering than the inside-looking-out systems because the user wears two receiving beacons instead of four cameras placed on the user's head. The two beacons are lighter in comparison to wearing four cameras [1,5]. In addition, optical and camera based systems have problems of occlusion [1,2]. As indicated by [13], mostly magnetic trackers have been used in virtual environments, as they are robust, relatively inexpensive, and have a reasonable working volume or active-space. However, magnetic trackers can exhibit tracking errors of up to 10 cm if not calibrated properly [13]. Magnetic trackers are


also affected by magnetic objects present in the surroundings. Magnetic sensors placed on several places on the body of the human participant can be used for detecting the motion of the participant. The motion is then used to control a synthetic human-form or an avatar of the participant. Some solutions drive an avatar using minimal sensor readings [11,22,23]. The resolution and accuracy of camera based systems is dependent upon the size of the pixel. It is difficult to differentiate between any two points if the size of the pixel is large and the projections of both points fall on the same pixel. Sometimes applications determine the mode of tracking. Medical applications require trackers to be more precise and accurate. For human-centered applications [18], non-encumbering tracking methods are particularly suitable as no wires or tags are placed on participants.

1.1 Unencumbered Virtual Environments

Camera based techniques are well suited for developing unencumbering applications. Inexpensive video-cameras are also readily available. This area is well studied, and Krueger's work [12,20,21] is well known for virtual environment interaction using 2D contours of silhouettes of human participants and interpreting them in a variety of ways. When 2D camera-images are used, there is no depth information available with the images. Much research has been done in the areas of computer vision [1,15,16,17], stereo-vision [1,19], object-recognition [15], and motion analysis [16,25] to recover the depth information. The problem of occlusion is inherent in camera based systems [1]. Since 2D camera images are used to determine the participant's position, it is difficult to infer whether the participant's hand is hidden behind their body or is in their pocket. The occlusion problem is more severe when multiple participants are present in a camera based virtual environment. To date, to our knowledge, there are no camera based virtual environments where the occlusion problem has been successfully solved. Color discrimination techniques, which track the changes in color, are not robust, as shadows and lighting changes make empirical algorithms fail. Another method to solve the occlusion problem is to consider previous frames to infer that the hand is now in the pocket. The well known correspondence problem is ever-present when purely vision-based systems are used. As mentioned in [13], there are several compromises made to resolve the issues of correspondence and recognition. Even then, it is difficult to find topological correspondence between the multiple contours extracted from the multiple camera-images of the same scene. In the fields of computer vision and photogrammetry there is a wealth of research on motion tracking [1,15,16,17]. There has been much research effort in the areas of understanding and recognition of facial expressions, body movements, gestures, and gaze of the participant [20,21]. The Virtual Kabuki system [26] uses thermal images from one camera to estimate the participant's planar motion in a 3D environment. Non-planar 3D motion is not tracked. An estimate of the joint positions is obtained in real-time. Other points, such as


the knees and the elbows, are estimated by applying a 2D-distance transformation to the thermal image and using a genetic algorithm [26]. Once extracted, the motion is mapped onto a kabuki-actor. The system tracks planar motion of a participant in real time. Blob models [2] are used in the Pfinder system developed at the Media Lab. The system uses one camera and works with one participant in the environment, assuming a static scene. Interestingly, depth perception is limited to 2D; for example, when the participant jumps, it is considered as a move backwards. Essentially, the system estimates only the 2D information. Most of the camera based systems base their judgment upon the color information and estimate the 3D position of the participant by using the information available from multiple cameras. The process involves solving the contouring problem, the correspondence problem, and significant point extraction [13]. Camera based systems also face the inherent ambiguity of 2D-camera projections. This can be illustrated by arranging fingers so that the shadow projected on the wall can, for example, look like a goat. When multiple cameras are used, the ambiguity problem is somewhat reduced as we have multiple views available. However, the correspondence problem across multiple images must be resolved. Extraction of only a few key-features or significant points from an image has been investigated in great detail for this purpose. The idea behind significant point extraction is that a small collection of points may provide enough information about the intent or pose of the participant. Tracking the trajectory of the head is researched in [25]. In Multiple Light display experiments, multiple beacons of light are used to analyze the motion of human participants [6]. Simple light displays placed on the participant are tracked. It is argued that twelve points are sufficient to give an impression of actual motion. This is interesting as it may also provide an answer to the minimal significant points on the human-body that are sufficient to understand the motion of the participant. A variety of contour-extraction methods have been developed for gesture understanding [27]. The immersive-video [29] application uses color averaging of multiple camera-images to determine the color of a 3D grid-point. Snakes [28] use minimization and spline contouring to lock onto an object of interest in the image. Recently, there has also been growing interest in using useful 2D and 3D data structures for recovering the depth information. Jain et al. [29] combine vision and graphics techniques for processing the video-streams offline, non-interactively, so that multiple participants can view the processed scene from different angles after preprocessing has been completed. In immersive video, color discrimination is used between frames to identify pixels of interest, which are then projected to find the voxels in a 3D-grid based upon the color of the region (pixel) of interest. The marching cubes algorithm is then used to construct the video-surface of interest [29]. This method does not track the participants interactively in a virtual environment. Instead, it processes the video sequences from multiple cameras and allows participants to view the processed data.

2 The Scan&Track System

We are developing the Scan&Track virtual environment using the geometric-imprints algorithm and the active-space indexing method. The geometric-imprints algorithm is based upon the approach that end-points of the cylindrical body parts are most likely to be visible across multiple cameras, and therefore are more likely to be identified using the contours of the images. The main differences between the proposed Scan&Track system and other existing camera based virtual environments are that (a) color is not the main focus for identifying significant points on the participant's body in the Scan&Track, and (b) significant points change with the pose in the geometric-imprints algorithm, and therefore may be different depending upon the pose. In each frame we ask where the tips of the cylindrical 2D-shapes in the camera-image are. So the geometric-imprints approach is fundamentally different from vision based systems where the same significant points are sought in every frame. A major advantage of the active-space indexing method is that position estimations are provided without any knowledge of camera-orientation. Exact camera orientation is critical for other existing camera based systems to date [29]. The active-space indexing method is a novel method for estimating the 3D position using multiple camera-images. During preprocessing, the system uses three cameras to record planar-slices containing simple patterns arranged on a regular grid. The planar slice is then moved at regular intervals, and the corresponding camera-images are stored. In this manner, the active-space, or the 3D-space between the parallel slices, is scanned. During preprocessing, the scanned camera images are processed to create an active-space indexing mechanism which maps 3D points in the active-space to the corresponding projections on the camera-images. Once the active-space indexing mechanism has been created, the location of a 3D point can be estimated by using the point's projections on multiple camera-images. The active-space indexing method is scalable, as the same 3D space could be scanned for both low and high-end systems. Since the size of the 3D active-space can also vary, the 3D active-space working volume itself is scalable. In comparison to other tracking methods, the active-space indexing method avoids complicated camera calibration operations. As the active-space indexing method provides 3D space-linearity for position estimation, the distortions due to camera projections are automatically avoided. In the following sections, we explain both the geometric-imprints algorithm and the active-space indexing method.

3 The Geometric-Imprints Algorithm and Associated Results

Body postures express emotions, and the contour-drawings in Figure 1 could

reveal our inner-self. For example, Figure 1(a) could be interpreted as a dance pose. Similarly,


Figure 1(b) could mean that the person has just been victorious. The characteristics of both the poses can be captured by some key-points on the contour, as shown in Figure 2. Our geometric-imprints method is based on the observation that human body parts are mostly cylindrical in nature, especially those which are the basis of articulated motion. When we look at a participant from multiple camera-images, the tips of the cylindrical body-shapes are expected to be visible from many cameras. The tip-points define the geometric-imprint of a contour in the Scan&Track system. The geometric-imprints method extracts information about the cylindrical shapes from a contour. In (a) of Figure 1, finger tips are the logical endings of the cylindrical human arms, and the bend at the elbow is quite obvious and should be detected. Similarly, we have shown some points in Figure 1 to indicate the geometric curve bendings and the logical-ends of cylindrical body parts. We wish to capture the points of the cylindrical endings of 2D contours from the human-silhouettes shown in Figure 3. Some other points may be included depending upon the curvature of a given contour. Consider this set of points as the geometric-imprint of the image. This is because the points and their topology facilitate determination of the pose of the participant. Our motivation for developing the geometric-imprints method was also to reduce the complexity of the correspondence problem. Consider 2D-contours extracted from multiple camera-images looking at a cylindrical object (Figure 2). The tips of the cylindrical object would be projected as tips of 2D-cylindrical curves in most of the 2D-contours. It is much easier to correspond extremities of curves across multiple camera images [38]. Geometric-imprints are dependent primarily upon the analysis of the contours of an image; thus our approach is different from other existing methods where the same key-features of the image must be estimated in every frame. For example, in many approaches, as well as our earlier effort [26], the same joints are estimated in every frame. This is certainly useful when the display program is skeleton-based and the joint information is needed for placing a synthetic human-actor in a desired pose. In the geometric-imprints algorithm, when the participant assumes a different position, the shape of the participant's silhouette on the camera-images also changes. Therefore we expect the geometric-imprint set to vary from one frame to another, depending upon the shape of the curves in the camera-images from one moment to another. We have implemented an automatic recursive splitting algorithm which can successfully deal with arbitrary (multiply folding) curves. Figure 3 shows the expected correspondence between geometric-imprints of one pose to another. Figure 4 shows three camera-images (a-c) of the same pose and their geometric imprints. More details of this algorithm are in [39].


Figure 1: Postures express emotions. (a) Dancing. (b) Elation.

Figure 2: The correspondence problem.

Figure 3: (a) Spread out pose: five geometric imprints. (b) Dancing pose: seven geometric imprints.

Figure 4: Geometric-imprint points of three camera images for the same pose. (a-c) Three camera-images for the same pose; (d) 4, (e) 4, and (f) 5 geometric-imprint points.

Figure 5: (a) Scan-line algorithm for automatic extraction of the contour. (b) Geometric-imprints algorithm using the automatically extracted contour.

Figure 6: The relationship of points P and Q changes depending upon the view as the cameras move around them.

Figure 7: The relationship of the points on a slice remains the same for a variety of planar views in the same hemisphere w.r.t. the slice.


Only one shoulder point in Figure 4 has been identified as a geometric point. The implementation identifies five cylindrical end-points on this complex curve in Figure 4. In Figure 4 we have used the same pose for the images captured from three cameras. Notice that all the significant points marked one to five in Figure 4 are common in the three images. One extra elbow-point is also a geometric-imprint point in Figure 4(f). In our implementation we have mainly shown the tips of cylindrical shapes; however, the places where foldings occur can also be geometric-imprint points, as explained in [39]. As shown in Figure 4, similar topological curves generate similar geometric-imprint sets. Once the geometric imprints have been obtained, it is much easier to find correspondence. Note that the geometric-imprint points marked 1 in Figure 4(d)-(f) correspond to the same 3D-point (right-hip). Similarly, points marked 2, 3, and 4 correspond to the tip of the right hand, the head position, and the tip of the left hand respectively. Note that the sixth point can be discarded by using the active-space indexing method explained below. The image-imprint of the sixth point in Figure 4(f), matched with any image-point in the other two camera-images, will result in no 3D point correspondence using the active-space indexing method. Thus the correspondence problem in the Scan&Track system is simplified. Figure 5 shows geometric-imprints obtained by using an automatic scan-line algorithm [40]. Color images are at http://www.cs.uccs.edu/~semwal/VW98.
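As an illustration of recursive splitting in this spirit (the published algorithm, including its handling of foldings, is in [39] and is not reproduced here), the sketch below splits a contour segment at the point farthest from the chord between its end-points and keeps the end-points of segments that cannot be split further as candidate imprint points:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative recursive splitting over a 2D contour. */
public class RecursiveSplit {
    static void split(double[][] c, int lo, int hi, double eps, List<Integer> out) {
        int far = -1;
        double best = eps;
        for (int i = lo + 1; i < hi; i++) {
            double d = distToChord(c[lo], c[hi], c[i]);
            if (d > best) { best = d; far = i; }
        }
        if (far < 0) { out.add(hi); return; }   // flat: keep the end-point
        split(c, lo, far, eps, out);            // recurse on both halves
        split(c, far, hi, eps, out);
    }

    /** Perpendicular distance from p to the chord a-b. */
    static double distToChord(double[] a, double[] b, double[] p) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double len = Math.hypot(dx, dy);
        if (len == 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
        return Math.abs(dx * (a[1] - p[1]) - (a[0] - p[0]) * dy) / len;
    }

    public static void main(String[] args) {
        double[][] contour = { {0,0}, {1,0.05}, {2,0}, {2.5,2}, {3,0}, {4,0} };
        List<Integer> tips = new ArrayList<>();
        tips.add(0);
        split(contour, 0, contour.length - 1, 0.2, tips);
        System.out.println(tips);   // indices of candidate imprint points
    }
}
```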

4 The Active-Space Indexing Method and Associated Results

4.1 Camera Calibration, Space-Linearity and Over-Constrained Systems

There have been many attempts to estimate the 3D motion of the participant with a minimal number of points used for camera-calibration; for example, three points are sufficient, as suggested by Ullman [6]. Recently there have been many studies where five or more points are used for stereo-matching and generating novel views [20,21]. Most of the systems which use a minimal number of points are usually under-constrained, in the sense that any error in camera-calibration or camera orientation estimates may result in severe registration and accuracy errors [13]. In implementing the active-space indexing method, we use several points during preprocessing and create a spatial 3D-data structure using these points. Our motivation for using several points is to subdivide the active-space so that only a few points are used locally to estimate the position during tracking, similar to the piece-wise design of surfaces and contours. In systems using a minimal number of points, global calibration is used, and therefore it creates an under-constrained system [13]. The non-linearities and tracking errors are more profound due to the use of an under-constrained system. In our approach, by subdividing the 3D active-space into small disjoint voxels or 3D-cells, we use only a small number of points, which are the vertices


of the 3D-voxel, for estimating 3D positions inside that voxel. In this way, only a few points estimate the 3D position. The major advantage of using a large number of points is that the non-linearity due to camera calibration is reduced. In particular, we can assume that a linear motion inside the small 3D-voxel space will also project linearly on the camera-images. Thus the effects of camera distortions are avoided. More accurate estimations within the 3D-cell or voxel are possible, as this assumption also allows the use of linear interpolation within the 3D-cell or voxel.

4.2 Estimating Depth Using Multiple Cameras and Slices

Consider the projections of two points P and Q in Figure 6. Depending upon the viewpoint and the multiple cameras seeing the two points, the projections of the two points change; in particular, their relationship changes as we move from left to right, from image plane to image plane, via the planes L (left), C (center), and R (right) shown in Figure 6. The spatial information between the two points is actually lost. How can we infer that points QA and QB are the same point in 3D space? Next, consider two points P and Q on a planar slice, and the same points as seen from multiple cameras, as shown in Figure 7. Note that the spatial relationship of the projections of points P and Q in all the planes in the same hemisphere w.r.t. the planar slice remains the same. Thus it is much easier to deal with points on a planar-slice. In addition, zoom-in or zoom-out also does not change the relationship of the visible points. In particular, w.r.t. two lines L1 and L2 on a plane, arbitrary zoom-in of the cameras or changing the orientation of the cameras in the same hemisphere, as shown in Figure 8, does not have an effect on this relationship. In particular, point P remains to the right of the projected line L1 and point Q to the left in both of the projected images. Both points are above the projected line L2, as expected. We can now extend this to many lines. A grid of lines partitions the slice into several disjoint grid-cells, as in Figure 9. Figure 9 also shows a camera image which is seeing the planar slice at an angle, pointing slightly upward. There is perspective deformation of the lines, yet the relationship of points P and Q is consistent with the planar slice, and the 2D cell-index of both points is the same. All three cameras are placed in such a way that they are in the same hemisphere w.r.t. the active-space slices. Let S1, S2, and S3 be the projections of a 3D point S on the left, center, and right camera images respectively. This is shown in Figure 9. Call the triplet (S1, S2, S3) an imprint-set for point S. Here S1, S2, and S3 are the 2D pixel coordinates of the projections of point S respectively. If a 3D point S is visible on the left, center, and right camera-images, then the locations of the imprints of the 3D point in the multiple camera-images can be used to estimate the 3D position of S. Given the image-imprint (S1, S2, S3), the active-space indexing method finds the 3D-cell or voxel containing the point S. In our present implementation, the triplets are provided by the user.

The Scan&Track Virtual Environment

4.3

71

Preprocessing to Create Active-Space Indexing

ultipl pl n r li n now t k or tim tin th pth in orm tion. h tiv - p li o upy p ll n tiv - p volum or im ply n active-space. h li ivi th tiv - p into t o i joint 3 ll or vox l . m r -im or v ry li r pro on y on . urin pr pro in on m r i pl u h th tth vi w-norm li p rp n i ul r to th wh it - o r u or our xp rim nt. h oth r two m r r to th l t n ri h to th fi r t m r h own in i ur 9. h v h o n 12 y 12 ri p tt rn wh r ri int r tion r h i h li h t y m ll l k ir l . h ri -p tt rn o upi 55 m y 55 m p on wh it - o r . h qu r o th ri i 5 m y 5 m . h wh it - o r i ph y i lly m ov n r or y th r m r tth m tim . i h t u h r or in r ultin i h t li or v ry m r . h int r- li p in i 10 m . h wh it - o r n th ri -p tt rn on it r vi i l rom llth th r m r . h tiv - p volum i u o 5 5 m y 5 5 m y 70 m . h tiv - p i lr nou h to tr k th n upp r o y o th p rti ip nt or two t o xp rim nt w h v on u t o r. l r r tiv - p n ily on tru t y u in lr rp n l in t o th m ll r wh it - o r u or our xp rim nt. ll th 12 y 12 ot on i h to th li r h own in i ur 11 - or ll th th r m r . u o th p lim it tion w o not xpl in th pr pro in l orith m h r . or t il pl r r to 38 . 4.4

Position Estimation

o tim t th lo tion o 3 point iv n it im print- t( 1 2 3) w h v im pl m nt th ollowin l orith m . h tiv - p in xin m h ni m fi n th 3 lo tion on th im print ttripl t. h tripl tin our im pl m nt tion i provi y th u r y u in m ou -pi k on th r p tiv m r m r im . or x m pl i th tip o th no i vi i l in ll th th r im or our xp rim nt th n w n li k on th tip o th no in ll th th r im to o t in tripl t( 1 2 3) or th no . h i i h own in i ur 10 - wh r 1 2 n 3 r th proj tion on th l t nt r n ri h t m r or th point . iv n im -im print( 1 2 3) orr pon in to 3 -point w rh llth i h t li n fi n th vox l ont inin point . u to p lim it tion w h v not xpl in th t il o th i l orith m h r . t il r in 38 . i ur 10 - h ow th r ulto our im pl m nt tion u in ix im print- t. orr pon in 3 vox l r l o h own y vi u lizin th m onn t k l ton fi ur . h v h i h li h t th orr pon in point in th im in i ur 10 n 10 . noth r li h tly i r nt to point i h own in i ur 10 . i ur 10 h ow th p rti ip nt po y u in n in-h ou y nth ti tor v lop in our l or tory. h v l o t t our tiv - p in xin m th o y u in two i r nt m r ttin th u r tin i r nt to li n in xin m h ni m n h v o t in im il r r ult.

72

Sudhanshu Kumar Semwal and Jun Ohya

P

Q

Planar slice

S

L2

Slice and its dot pattern

L1 QB

PA

PB

QA PL2 PL1

S1

PL2 PL1

Left

S2

Middle

S3

Right

Active space creation

Figure 8: Projection of two points on two planes.

(a)

Figure 9: Imprint−set (S1,S2,S3) for point S. S1, S2, and S3 are 2D points on the respective camera−images.

(b)

(a) Center camera

(c)

(b) Left camera

(d) Nose−tip Neck left−bicep

Chest

Solar− plexus right−bicep (e)

(f)

Figure 10: a−c: Selecting six image−imprints on the middle, left and right camera images. (d) associated 3D−cells connected by a simple skeleton. (e) Another skeleton representing a different set of six points. (f) A simple synthetic actor mimics the pose of the participant.

(c) Right camera Figure 11: Active−space points for the three images.

The Scan&Track Virtual Environment

Correspondence

R C1

Im1 L

C Im2

C3

Im3

Significant Points Correspondence & Matching

Scene, color, and Previous Poses

Im1 Im2

Significant Points (SP) Determination

C2

Contour Extraction

Active Space Scanning and Capturing System

User

Im3

S1,S2,S3 Input

S1,S2,S3

Synthetic Actor/Avatar Display and interaction with participant

Active Space−Tracking Figure 12: Block Diagram for the Scan&Track System

ASpace6 D

ASpace7

ASpace5

E

F

ASpace4 G C

ASpace8

A ASpace1

B ASpace2 ASpace3

Figure 13: Multiple participants Figure 14: World of active−spaces

73

74

Sudhanshu Kumar Semwal and Jun Ohya

h t h niqu work u th r i nou h h i tin th proj tion o th point u to th m r po itionin th tth pth i rim in tion i po i l . nl th th r m r n l r i nti lor th ri i v ry wi iti h i h ly unlik ly th tm or th n two li h v th m r orm y 1 2 n 3. n irn to our m th o th i woul m n th tth r olution o th li tiv p n to fi n r or m r n l n to h n . t n on lu th t iv n th r im print-point uniqu 3 ll-in x n oun . in it or olor -im r thttp://www.cs.uccs.edu/~semwal/VW98.

5

Block Diagram of the Scan&Track System h

lo k i r m o th y t m i h own in i ur 12. h n r k y tm n outi look in -in y t m 1 . t on i t o (a) the Correspondence y t m n (b) the Active Space Tracking y t m . h r r our u - y t m to th orr pon n y tm (i) Contour extraction m r to n ly z n n outlin o th ilh ou tt i xtr t rom th im .L t ontour 1 2 n 3r pr ntth h um n- ilh ou tt rom th r m r -im m 1 m 2 n m 3 r p tiv ly. h i i w ll r rh r o om put r vi ion. h v v lop n-lin l orith m or utom ti xtr tion o ontour 03031 . (ii) Significant points determination xpl in rli r th om tri im printi xtr t y th om tri -im print l orith m . (iii) Significant points correspondence and matching h om tri im print o on ontour r orr pon to th om tri -im print o ontour rom oth r m r -im pturin th m po . h out om o th m t h in l orith m i to provi n im print- t tripl t( 1 2 3) or v ry point in th om tri -im print xtr t rom th m r -im . (iv) Scene, color, and previous poses database h n in orm tion olor o om tri -im print point n pr viou po r to u or r fi nin th orr pon n n m t h in o th om tri -im print ro m ultipl m r -im . h tiv p r k in y t m i u to tim t th po ition o point iv n th tripl t( 1 2 3). h tiv p r k in y t m h th ollowin two m in u - y t m (i) Active-space scanning and tracking system xpl in rli r w u th tiv - p in xin m h ni m wh i h llow u to t rm in th lo tion o 3 point iv n it om tri -im print( 1 2 3). tth i tim w p i y th orr pon in point on m ultipl - m r im u in m ou -pi k . n utur th point woul automatically n r t y th orr pon n y tm h own in i ur 1. r inv ti tin nov lp tt rn ( . . h k r o r p tt rn ) to utom ti lly fi n th ri -point on th pl n r li urin pr pro in . (ii) Synthetic actor and/or avatar display in th tiv - p inxin m th o w h v n u ul in po itionin h um n-m o l or our i ility xp rim nt. n utur n v t r or y nth ti h um n tor i xp t i

The Scan&Track Virtual Environment

75

to r pli t th m otion o th p rti ip nt. n utur w pl n to om pl t ly utom t th n r k y t m o th t p r on in 3 nvironm nt n tr k n r pr nt y n v t r. h i i th o l o our propo rrh. Multiple participants: l o xp t th n r k y t m to work wh n m ultipl p rti ip nt r in i th tiv - p h own in i ur 13. r n o m ultipl p rti ip nt n th ir im r r n h ll n in pur ly vi iony t m . or x m pl iti h i h ly lik ly th tth om tri -im print or h m r im will noth v n i nti l num r o point v n th ou h m r r vi win th m n . h i oul u to o lu ion y oth r p rti ip nt in th . xpl in rli r orr pon n i m u h i r or th n r k y tm th tiv - p in xin provi v lu l 3 in orm tion. h own in i ur 1 m h - plittin m y n ry wh n m ultipl p rti ip nt r pr nt 32. Ratio-theory: o r olv r m -to- r m orr pon n w ll orr pon n ro m ultipl im o th m n w pl n to u th on pt o r tio th ory r l t to o y p rt. h i h in th ratio theory n tun r too y im inin t ilor m ur m nt. h xtr m iti o ontour n r lt ily i w t rt with th im n ion o th p rti ip nt h own in i ur 1 n 1 . rom on r m to noth r th l n th o th o y - xtr m iti o th p rti ip nt n not h n . L t u um th t th om tri -im print o two po h v n o t in h own in i ur 1. now n to orr pon point with point 1 point with point 2 n o on. h n th point r onn t i. . 1 to 2 to 3 n o on. n i ur 1 th urv l n th L1 tw n xtr m ity n orr pon to th urv l n th L2 tw n n n o on. n p rti ul r it n i th tth r tio L1/L2 woul pproxim t ly qu l L3/L . o th orr pon n pro l m r u to m t h in n fi n in t fi t in m ll rh- p p n nt upon th num r o om tri -im printpoint on urv rom m r -im . Double verification: h n r k y t m llow double verification or v li ity o orr pon n . or x m pl i iti oun th t tripl t( 1 2 3) r m th w n l o v ri y it y m t h in th olor o 1 2 n 3 rom th im . th y r i r ntth n w o noth v m th . n ition w n h v v ri ty o oth r m ur . or x m pl i th tim t po o n v t r i r ti lly i r ntth n th po in th pr viou r m th n itm i h t in i t orr pon n rror unl th p rti ip nti m ovin xtr m ly tin th . Avatars and Human Actors: in virtu l nvironm nt r xp t to popul t y oth v t r or h um n orm wh i h r pli t th m ov m nt o p rti ip nt n virtu l y nth ti tor wh o utonom ou m ov m nti ir t y om put r 2319 . n th n r k y t m th tion o th p rti ip nt r to tr k n m pp to th y nth ti h um n orm 333 35 3637.

76

6

Sudhanshu Kumar Semwal and Jun Ohya

Expected Significance of the Scan&Track System

i tru o ny vi iony t m th i rim in tion p ility o our y t m i lim it y th ph y i l- iz o th pix l in th m r -im . ow v r th y t m llow th u o m ultipl m r n th u m ultipl in i n m or ur t pr i tion o pth . On w h v t rm in 3 - ll or vox lv lu n n pproxim t lo tion th n w oul utom ti lly u noth r t o m r or m u h lo r n pr i look tth 3 - p n r th 3 -point. n oth r wor th tiv in xin m th o n ppli twi or m or . h fi r ttim itwoul u to fi n h i h - r in in x n l t r w woul fi n m or pr i in x y u in th o m r wh i h h v th m o t t il vi w o th li n n r y r . h in orm tion out m or t il vi w woul r or urin pr pro in th m r rr n m nt o not h n urin tr k in . h u w r u tin th u o lo l n lo l m r . N ot th ton m r oul lo l or on point n oul li t lo l or om oth r pointin 3 p . tiv in xin i xt n i l n m n l to h r w r im pl m nt tion; p i lpurpo h ip n i n or po itiontim tion l ul tion . n ition i r nt li -p tt rn oul v il l or low n h i h - n u r or utom ti pr pro in . h active-space indexing mechanism m p 3 point in i th tiv - p to orr pon in proj tion on th m r -im . m u t n w r on o viou qu tion h ow woul w lwith th itu tion wh n th m r i pl with u h n ut n l th t llth li proj t lin . n th i itu tion ri -lin m y r ly vi i l or v ry ut m r n l n th li will proj t lin wh n th m r -im pl n i p rp n i ul r to th li . not th titi h i h ly unlik ly itu tion m r n n ur th tnon o th li r p r ll l to th r placed by us n w vi w-norm lo ny m r . h on r l t qu tion i n w tilltr k llo th tiv p with th r tri tion th t m r n not pl t ut n l w.r.t. li . h n w r i th t ll th t il o th tr ion n o t in y u in m r li h tly w y n lo ly zoom in on th r o int r t. n ition i or om r on th m r n not m ov th n th ori nt tion o th li n lo h n urin pr pro in to r t n w tiv - p in xin m h ni m . ti th i r om to h n th ori nt tion o m r n /or li wh i h m k th tiv - p in xin m th o v r til . or om m nt r l t to r olution ur y r pon iv n ro u tn r i tr tion n o i ility r in 38 . h n r k i xp t to accurate n h v xtr m ly h i h resolution u vr l m r n pot nti lly zoom -in n provi h i h r olution tiv - p ir . h tiv - p in xin i on t nt tim wh n th num r o li n lin r on t nt. h responsiveness o th n r k y t m woul l o p n upon th tim itt k to l ul t im print-point rom th m r -im . hi n to urth r inv ti t ut w not th t th orr pon n pro l m i m u h im plifi u to th tiv - p in xin n th om tri -im print l orith m . ti li v th t th orr pon n u - y t m ( i ur 1) will th k y to r l-tim int r tion

The Scan&Track Virtual Environment

77

n h oul im pl m nt in h r w r muh po i l 39 . h tiv in xin y t m i xtr m ly robust it n lw y fi n 3 - ll or iv n im print-point with in th tiv - p . i u in 38 th n r k y tm u m r -im wh i h r tt r in r olvin th swimming pro l m n ount r in virtu l nvironm nt. irtu lo j t t n to wim u o th li h tv ri tion in tim tin th 3 po ition o th tr k -po ition. u o th tiv - p in xin th m orr pon in point on th im will lw y pro u th m r ult n th r or th wim m in pro l m i xp t to on i r ly r u . urin pr pro in m r n zoom in n out n pl p n in upon th u r wi h . ultipl u r n tr k th m 3 p upon th ir own h oi o li n otp tt rn .N w m r n u to m or tiv - p ir . h tiv - p in xin m th o i uniqu n n w n iti n tt m ptto olv th orr pon in pro l m ro m ultipl m n i l o n w m th o to tim t 3 po ition. ontour xtr tion m th o in th propo lfi n th o point on th h um n o y wh i h r m o t vi i l n th r or r m or lik ly to pr nt n -point in ontour xtr t rom m r -im . Occlusion: n th n r k y t m w pl n to look into p t vi or m to r olv or tl tk now th tth om th in wh i h w vi i l or i notvi i l now. n th i th propo n r k y t m will ontinu to tr k th o n -point wh i h r vi i l ( . . h n t). h pointi th i th tth n r k y t m i ro u t th om tri -im print h n rom r m -to- r m .

7

Can VEs Really Understand Humans

h n r k i xp t to h i h ly i tri ut y t m with vr l n r k tiv - p ph y i lly i tri ut . i ur 1 h ow vr l tiv - p . lth ou h tiv p n r itr rily l r nou h priv y to th h um n-p rti ip nt i provi no tr k in n on outi th tiv - p . in w xp t v r l h um n-p rti ip nt to int r tin in th i h i h ly i tri ut iti ppropri t to k ( ) n m or powr ul th t urin h in ( ) ( ) n r lly un r t n th h um n p rti ip nt h v th ollowin o rv tion (i) l r nti lly m or pow r ul th n th urin h in m o l u o th non-lin rity introu y h um n-p rti ip nt. y llowin m ultipl h um n to int r tin th ph y i lly i tri ut m nn r r xp t to h v in non-lin r w y u h um n h vior i nti lly non-lin r. (ii) lth ou h r nonlin r n th r or xp t to m or u ul th n in lin with h um n-int r tion it i our opinion th t m y not l to completely un r t n th human participants. lth ou h w n not un r tim t th h um n- r in w h v im pl y t om p llin r um nt h um n r tin non linear abstraction. oo x m pl o non-lin r tr tion n oun

78

Sudhanshu Kumar Semwal and Jun Ohya

wh n w tr v l. ow m ny o u r lly inv t ny tim th ink in outth pl n fly in oon w o r th pl n w r lr y m k in pl n to wh t w will o wh n w rriv tour tin tion. n oth r wor w h v lr y tr t th tu l tr v l (w um th tw will rriv tth tin tion ly ). o l rly t t our o rv tion m y llow u to p r orm nonlin r t k h ow v r th y m y notun r t n th h um n-p rti ip nt n l to pr i twith rt inty wh tth h um n-p rti ip ntw nt to o tth v ry n xtm om nt.

8

Conclusions and Final Comments

h n r k y t m voi th om pli t m r li r tion op r tion n th i tortion u to m r proj tion r utom ti lly voi . h y t m wh n im pl m nt woul l l tiv -in xin or th m 3 p oul v lop or oth th low n th h i h - n y t m . n ition th tiv - p volum i l o l l. h qu liti m k th n r k y tm i l or utur h um n- nt r ppli tion . h tiv - p in xin m th o provi r m work or un n um r 3 tr k in upon m ultipl vi o qu n . h tiv - p in xin m th o o notr quir ny m r li r tion or m r ori nt tion p r m t r or tim tin th po ition. iv n th im print- to point th tiv - p in xin m th o t rm in th 3 - llor vox lo th tiv - p ont inin th tpoint in on t nttim . h m th o provi p -lin rity with in th 3 ll n llow lin r int rpol tion to u or tt r po ition tim t . h pr pro in l orith m i im pl n n ily utom t . l n r li n r itr rily l r or m ll n n h v fi n r or o r ri p tt rn . n utur w pl n to u muh lr r 8 t y 10 tpl n r li (w ll) with fi n r otp tt rn. h propo n r k y t m i u ul or i tri ut . ultipl u r n tr k th m 3 p upon th ir own h oi o li n ot p tt rn . N w m r n u to m or tiv - p ir . h n r k y tm n u or p r on l tiv - p in ronto th u r r n y pl in th r m r tth top o th m onitor. h pr pro in l orith m i im pl n n ily utom t . h n r k y tm n lo u lon with oth r xi tin tr k in t h nolo i . m r r xp t to m ount on th w ll n outo th w y o th p rti ip nt th tiv - p in xin m th o i uit l or h um n- nt r ppli tion .

References 1. Meyer K., Applewhite H.L., and Biocca F.A.: A Survey of Position Trackers. PRESENCE, MIT Press 1(2) (1992) 173-200 2. Sturman D.J., and Zeltzer D.: A Survey of Glove-based Input. IEEE CG&A (1994) 30-39.

The Scan&Track Virtual Environment

79

3. Speeter T.H.: Transforming Human Hand Motion for Tele-manipulation. PRESENCE, MIT Press 1(1) (1992) 63-79 4. Newby G.B.: Gesture Recognition Based upon Statistical Similarity, PRESENCE, MIT Press, 3(3) (1994) 236-244 5. Gottschalk S., and Hughes J.F.: Autocalibration for Virtual Environments Tracking Hardware. Proceedings of SIGGRAPH (1993) 65-71 6. Rashid R.F.: Towards a system for the Interpretation of Moving Light Displays. IEEE Transaction on PAMI, 2(6) (1980) 574-581 7. Sutherland I.E.: A head-mounted three dimensional display. ACM Joint Computer Conference, 33(1) (1968) 757-764 8. Brooks F.P., Ouh-Young M.J., Batter J.J., and Kilpatrick P.J.: Project GROPE – Haptic Displays for Scientific Visualization. Proceedings of SIGGRAPH (1990) 24(4) 177-185 9. Burdea G., Zhuang J., Roskos E., Silver D., and Langrana N.: A Portable Dextrous Master with Force Feedback. PRESENCE, MIT Press, 1(1) (1992) 18-28 10. Iwata H.: Artificial Reality with Force-Feedback: Development of Desktop Virtual Space with Compact Master Manipulator. Proceedings of SIGGRAPH (1990) 24(4) 165-170 11. Badler N.I., Hollick M.J., and Granieri J.P.: Real-Time Control of a Virtual Human using Minimal Sensors. PRESENCE, MIT Press, 2(1) (1993) 82-86 12. Krueger M.W.: Artificial Reality II. Addison Wesley Publishing Company, Reading, MA, 1-277 (1991) 13. State A., Hirota G., Chen D.T., Garrett W.F., Livingston M.A.: Superior Augmented Reality Registration by Integrating Landmark Tracking and Magnetic Tracking. Proceedings of SIGGRAPH (1996) 429-438 14. Faugeras O.: Three-Dimensional Computer Vision: A geometric Viewpoint. MIT Press, Cambridge, Massachussets (1996) 15. Grimson W.E.L.: Object Recognition by Computer: The Role of Geometric Constraints. MIT Press, Cambridge, MA (1990) 16. Maybank S.: Theory of Reconstruction from Image Motion. Springer-Verlag (1993) 17. Serra J.: Image Analysis and Mathematical Morphology. Academic Press, vols 1 and 2 (1988) 18. Talbert N.: Toward Human-Centered Systems. IEEECG&A, 17(4) (1997) 21-28 19. Cruz-Neira C., Sandlin D.J., andDeFanti T.A.: Surround-Screen Projection-based Virtual Reality: The Design and Implementation of the CAVE. Proceedings of SIGGRAPH, (1993) 135-142 20. Proceedings of the Second International Conference on Automatic Face and Gesture Recognition (1996) 1-384, Killington, Vermont, USA, IEEE Computer Society Press 21. Proceedings of the Second International Workshop on Automatic Face and Gesture Recognition (1995) 1-384, Zurich, Switzerland, IEEE Computer Society Press 22. Capin T.K., Noser H., Thalmann D., Pandzic I.S., Thalmann N.M..: Virtual Human Representation and Communication in VLNet. 17(2) (1997) 42-53 23. Semwal S.K., Hightower R., and Stansfield S.: Closed form and Geometric Algorithms for Real-Time Control of an Avatar. Proceedings of IEEE VRAIS (1996) 177-184 24. Wren C., Azarbayejani A., Darrell T., and Pentland A.: Pfinder: Real-Time Tracking of the Human Body, Proceedings of International Conference on Automatic Face-and-gesture Recognition, Killington, Vermount (1996) 51-56

80

Sudhanshu Kumar Semwal and Jun Ohya

25. Segen J. and Pingali S.G.: A Camera-Based System for Tracking People in Real Time, Proceedings of ICPR96, IEEE Computer Society, 63-68 (1996) 26. Ohya J. and Kishino F.: Human Posture Estimation from Multiple Images using Genetic Algorithms. Proceedings of 12th ICPR (1994) 750-753 27. Huang T.S. and Pavlovic V.I.: Hand Gesture Modeling, Analysis, and Synthesis. Proceedings of International Workshop on Automatic Face-and-gesture Recognition, Zurich, Switzerland (1995) 73-79 28. Kass M., Witkin A., and Terzopoulos D.: Snakes: Active Contour Models. International Journal of Computer Vision (1987) 1(4) 321-331 29. Moezzi S., Katkere A., Kuramura D.Y., and Jain R.: Immersive Video. Proceedings IEEE VRAIS, IEEE Computer Society Press (1996) 17-24 30. Semwal S.K.: Ray Casting and the Enclosing Net Algorithm for Extracting Shapes from Volume Data. Special issue on Virtual Reality and Medicine for the journal of Computers in Biology and Medicine, 25(2) (1994) 261-276 31. Pachter N.S.: The Enclosing Net Algorithm. MS Thesis, Department of Computer Science, University of Colorado, Colorado Springs, CO (1997) 1-47 32. Semwal S.K. and Freiheit M.: Mesh Splitting for the Enclosing Net Algorithm, accepted for publication in the Proc. of the Int. Conf. on Imaging Science, Systems, and Technology (1998) 1-10 33. Carignan M., Yang Y., Magnenat-Thalmann N., and Thalmann D.: Dressing Animated Synthetic Actors with Complex Deformable Clothes. Proceedings of SIGGRAPH (1992) 26(2) 99-104 34. Magnenat-Thalmann N., and Thalmann D.: Complex Models for Animating Synthetic Actors. IEEE CG&A, 11(5) (1991) 32-44 35. Hodgins J.K., Wooten W.L., Brogan D.C., and O’Brien J.F. Animating Human Athletes, Proceedings of SIGGRAPH (1995) 71-78 36. Semwal S.K., Armstrong J.K., Dow D.E., Maehara F.E.: MultiMouth Surfaces for Synthetic Actor Animation. The Visual Computer, 10(7) (1994) 399-406 37. Semwal S.K., Hightower R., and Stansfield S.: Mapping Algorithms for Real-Time Control of an Avatar using Eight Sensors. PRESENCE, MIT Press 6(1) (1998) 1-21 38. Semwal S.K., and Ohya J.: The Scan&Track Virtual Environment and the ActiveSpace Indexing Method. Technical Report ATR-MIC-0024, ATR Media Integration and Communication Research Laboratories, Kyoto, Japan (1997) 1-14. 39. Semwal S.K., and Ohya J.: Geometric-Imprints: An Optimal significant Points Extraction Method for the Scan&Track Virtual Environment, accepted for publication at the Third IEEE International Conference on Automatic Face and Gesture Recognition , Nara, Japan. IEEE Computer Society (1998) 1-8 40. Semwal S.K., and Biggs K.: Automatic Extraction of Contours for the Geometric Imprints Algorithm, Department of Computer Science, University of Colorado, Colorado Springs (Fall 1997) 1-10.

CyberGlass: Vision-Based VRML2 Navigator Chisato Numaoka Sony Computer Science Laboratory – Paris 6, Rue Amyot, 75005 Paris, France [email protected]

Abstract. In this paper, I will show an example of a VRML 2.0 interface for a portable device. CyberGlass uses a “video recording metaphor” to examine a 3D virtual world. The critical problem is to change a viewpoint in the virtual world based on the movement of a portable device. We have solved this problem using data obtained from a CCD camera. Using a CCD camera to detect motion enables us to develop a compact device. We developed a prototype of CyberGlass to provide an intuitive access to 3D world.

1 Introduction Given the explosive growth of the Internet and progress of Internet technology, it will be necessary to develop technologies that promote various types of access to content. Existing access styles are based on HTML and JAVA. VRML 2.0 is a comparatively new format for content publishing, which enables us to represent information as a collection of 3D mobile graphical objects. Unlike text-based schemes, this type of representation can present information which has not been explicitly coded by the producer. Even the relative locations of objects can be meaningful. If objects move, information will change over time. Thus, we need to discover an application domain where VRML 2.0 is an attractive alternative. We are attempting to identify a plausible context in which we might naturally access a 3D virtual world. This will help suggest a style of content publishing appropriate for VRML 2.0 moving worlds. One plausible scenario is to use VRML 2.0 information in a place for which a 3D model is available. Suppose we are visiting the Louvre museum and its corresponding 3D model is available on a portable device. We may access appropriate information for each work of art at the appropriate moment. It would be useful if the view displayed on a screen of the device matched an image being observed. This is because the view allows us to find relevant information quickly. To make this scenario realistic, we need to solve two problems: one is to determine the location we are looking at and the other is to detect motion of a device. For the first problem, we can assume that infrastructure support will provide us with a global position. For example, NaviCam uses bar-codes pasted on target objects for which we

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 81-87, 1998 © Springer-Verlag Berlin Heidelberg 1998

82

Chisato Numaoka

can retrieve information (Rekimoto, 1995). We are currently focusing on the second problem, namely, detecting a device’s ego motion. As described above, our final goal is to record our trajectory in a visiting place and to replay it using a VRML 2.0 browser with an appropriate 3D model of the visiting place. Through our work towards this goal, we have recognized that ego-motion detection itself provides a good user interface for VRMK 2.0 browser. In general, a gyroscope is better hardware for identifying an ego motion of devices such as headmounted displays (HMD) in virtual reality environments. We chose a CCD camera, however, mainly because we do not want to add more hardware to a small computing device for simplicity. Again, our final goal is to use a portable device in a real world to record the changes of our viewpoint in a visiting place. A CCD camera could be used to detect a global landmark. Thus, our challenge here is to see how well we can solve our problem using a CCD camera only.

2 CyberGlass: Design Issues for Portable Interfaces We designed CyberGlass for users of portable computers, which are, in general, smaller, slower, and have less storage than desktop systems. In order take full advantage of these computers, we identified the following design issues: intuitive operation for examining the world: CyberGlass is a user interface  designed to help us examine the world. When we look around the world, we normally turn our head or move our eyes. In a desktop environment, however, because of this restriction in the position of the display, that type of motion is impossible. Instead, we change our viewpoint by using pointing devices such as mice or touch sensors. When we carry a device with a display, the restriction disappears. We can move it to right or left while looking at its display. This is typical in video recording. We move a video camera so as to keep the things we want to record in the center of the display. Since we are already familiar with the action of filming a video, we can use this action as a metaphor for the user interface. We shall call this the “video recording metaphor.” In fact, the main idea of CyberGlass is to capture images in a 3D virtual space as if through a video camera or a water glass in the real world. Note that this is not unusual in VR environment, but we are trying to apply the same type of idea to portable computers using computer vision techniques, rather than gyroscope. inexpensive movement detection: One of characteristics of portable computers  is their ease of use – we can use them anywhere and anytime. Nevertheless, they have a serious deficiency. In general, to save battery consumption, portables have less CPU power and less memory. For this type of computer, simpler algorithms are preferable in order to reduce computation time. With respect to ego-motion detection, several computer vision algorithms are available. One is the optical flow algorithm. Several studies are based on optical flow (e.g. (Horn and Schunck, 1981) (Sundareswaran, 1991) (Barron, Fleet, and Beauchemin, 1994)). One particular example of the optical flow algorithm is shown in (Ancona and Poggio, 1993). This algorithm is said to be well-suited to VLSI implementation. Therefore,

CyberGlass: Vision-Based VRML2 Navigator

83

compared with other optical flow algorithms, this algorithm is worth considering for portable computers. Optical flow computation, despite its complex calculation, can never reduce errors to a negligible level. If we can limit the working environment, we can expect that simpler methods such as frame difference could work well. Simple frame difference is much less expensive than optical flow algorithms. Fortunately and strikingly, in our experiment, the simple frame difference algorithm works very well when target images have a clear contrast in color depth a condition that we can generally expect around picture frames in museums. Therefore, we decided to use simple frame difference for a prototype system while we continue to investigate the other algorithm for further developments. approximation of rotation angle: Movement of a device is perceived as linear  movement, even if we rotate the device. We are interested in rotation here. Therefore, we need to project linear movement to rotation.

Fig. 1 A Prototype of CyberGlass.

3 Implementation A prototype system of CyberGlass was developed based on the SONY CCD camera MC-1 and TOSHIBA portable computer Libretto50 (Pentium 75 MHz, 32MB). The CCD camera is arranged so as to match its direction with user’s gaze. A picture of this system is shown in Fig. 1. This system consists of two software modules: a motion detection module and a 3D virtual world’s viewpoint control module. A block diagram is shown in Fig. 2. Hereafter, I will explain how this system works.

84

Chisato Numaoka

Virtual Motor

VRML “View” Control Module

Motion Detection Module CCD Camera

VRML “View” LCD VRML World

A Portable Handy Device

Fig. 2 A block diagram of CyberGlass

3.1 Motion Detection Module The image captured by the CCD camera is stored in a frame buffer. After grabbing two successive frames, we perform an optical flow calculation. As I mentioned above, the method used is a frame difference algorithm. We can calculate a center of gravity for changes of all the pixels at time t by comparing two serial frames of images from a CCD camera. Let us call the center of gravity (x, y). After the next frame is received, we perform the same calculation to obtain the center of gravity at t +1. We call this (x’, y’). From these centers of gravity, we can obtain ( x, y), where x = x’- x, y = y’- y.

the number of pixels on a bitmap image moved

P

L



d P

LMAX

the total number of pixels in a image in 1 dimension

CCD Camera

 = tan-1(L/d) L = (P/P)*LMAX then  = tan-1((LMAX*P) /(P*d))

Fig. 3 Approximation of rotation angle of a device

CyberGlass: Vision-Based VRML2 Navigator

85

3.2 3D Virtual World’s Viewpoint Control Module Once we have obtained the movement of the center of gravity (x, y), the 3D virtual world’s viewpoint control module can rotate the viewpoint in a 3D virtual world by the following angle :

 = tan-1((P * LMAX) / (P * d)).

(1)

where P is either x or y in pixels and P is either the bitmap width or height in pixels. LMAX is a distance in meters corresponding to P and d is a distance in meters to a target object in front of the device (See Fig. 3 in detail). The result is then represented on the display. As described in the introduction, our final goal is to use a portable device in a visiting place. Therefore, d must be a correct distance from a CCD camera to a target object. In a real environment, by adding a special sensor hardware (e.g. an ultrasonic sensor) to the CyberGlass, it might be possible to obtain an exact distance d. In the case of a VRML 2.0 browser, however, we do not have any target. D, in this case, is the distance from our eyes to a virtual referential object in a virtual space. Thus, we need to set d to a constant.

4 Characteristics of an Experimental System We conducted experiments with the CyberGlass equipped with both a CCD and a gyroscope for comparison. Fig. 4 shows traces of horizontal and vertical movement of the viewpoint according to the movement of a CyberGlass. The gyroscope provides almost the exact angle from an initialized position, whereas the CCD uses approximation calculated by equation 1. Even so, it should be noted that the approximated moving angles have nearly the same tendency as of the gyroscope. In particular, the approximated moving angles successfully return to 0 degrees whenever the gyroscope says that it is 0 degrees. Unfortunately, when we consider both the horizontal and vertical axes, and we move a CyberGlass only in horizontal direction, a problem arises as shown in Fig. 4 (b). Although we did not move it in vertical direction, the angle approximation module automatically accumulates erroneous movement caused by noise. As the result, the angles went down towards the ground. We can avoid this by a forced solution. Namely, a CyberGlass can suppress updates of angles of either direction by detecting which direction has a major movement. If we need more degrees to be detected at one time, we will need to consider a more complex algorithm presented in (Roy and Cox, 1996).

86

Chisato Numaoka

5 Conclusion We designed a VRML 2.0 interface for a portable device and developed a prototype system, called CyberGlass. This system is designed based on a “video recording metaphor”. Namely, we can look around a 3D virtual world as if we were looking around the real world using a video camera. The current implementation has been tested in an environment where lighting is relatively moderate and objects are evenly distributed all around the environment so that any movement can be detected according to the motion of the handheld device.

CCD vs Gyroscope 400 300

Ho riz on tal m ov e m en t

CCD(x) 200

Gyro(x) 100 0 14 27 40 53 66 79 92 105 118 131 144 157 170 183 196 209 222 235 248 261 274 287 300 313 326 339 352 365 378 391 404 417 430

-100 -200 -300 -400 -500

Time

(a) CCD vs Gyroscope

20

-20

-40

-60

-80

-100

Gyro(y) CCD(y)

-120

Time

(b) Fig. 4 A Trace of Horizontal/Vertical Movement.

430

417

404

391

378

365

352

339

326

313

300

287

274

261

248

235

222

209

196

183

170

157

144

131

92

118

79

105

66

53

40

27

14

Vertical movement

0

CyberGlass: Vision-Based VRML2 Navigator

87

One frequently asked question is why we do not try to detect forward or backward movements in this interface. It may be interesting to note that, according to our forward/backward movement with CyberGlass, the viewpoint inside a 3D virtual world changes. Nevertheless, I believe that, with respect to forward/backward movement, it would be better to use a push button interface rather than to change our standing position. One reason is the limited space available for the user to move. The other reason is that we are familiar with the metaphor of “driving a car” using an accelerator. This is a report of work in progress. In future work, we would like to integrate this system with NaviCam so as to determine a global position of CyberGlass in order to produce a platform for more advanced applications.

References 1.

2. 3. 4. 5. 6.

Rekimoto, J. 1995. Augmented interaction: Interacting with the real world through a th computer. In Proceedings of the 6 International Conference on Human-Computer Interaction (HCI Intenational’95): 255-260. Ancona, N. and Poggio, T. 1993. Optical Flow from 1D Correlation: Application to a simple Time-To-Crash Detector, MIT AI Memo No. 1375. Barron, J. L., Fleet, D. J., and Beauchemin, S. S. 1994. Performance of optical flow techniques. Int. J. Computer Vision, 2(1): 43-77. Horn, K. and Schunck, B. 1981. Determining optical flow. Artificial Intelligence, 17; 185-203. Sandareswaran, V. 1991. Egomotion from global flow field data. In Proceedings of IEEE Workshop on Visual Motion: 140 - 145, Princeton, NJ. Roy, S. and Cox, I. J. 1996. Motion without Structure. In Proceedings of IEEE International Conference on Pattern Recognition, Vol 1:728-734.

Work Task Analysis and Selection of Interaction Devices in Virtual Environments Thomas Flaig1 1

Fraunhofer-Institute for Manufacturing Engineering and Automation IPA, Demonstration Center Virtual Reality, Nobelstraße 12, D-70569 Stuttgart, Germany [email protected]

Abstract. The user interfaces in virtual environments depend on the work task given by specific applications and the adaptation to the human senses. This paper describes a method for work task analysis and selection of interaction devices for the design of user interfaces in virtual environments. The method takes into consideration degrees of freedom in the interaction process and the human senses. Characteristic is the systematic process for the designer. The method will be described and the results in an experimental environment discussed.

1 Introduction Over the past few years, Virtual Reality (VR) has grown from an experimental technology to an increasingly accepted standard part of the business process. VR offers significant benefits as a problem-solving tool in various applications. It gives the user the feeling of being a part of the application to see and examine products, plans or concepts before they are realized [1]. The starting point for problem-solving in Virtual Environment (VE) is choosing the technological configuration for a VR system. It is mainly the application, which drives the need for particular technology requirements. These requirements are the computer hardware, VR software and the configuration of interaction devices. Especially the adaptation of interaction devices has a close influence on the human performance in problem-solving processes. From the beginning, VR technology has produced a broad variety of interaction devices. Some, like force-feedback systems, for use in specific work fields and others for more universal use. Therefore the designer has no easy choice. The result is, that in expert interviews the user-interface of VR systems is rated unsatisfactory because of its user-hostility [2].

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 88-96, 1998 © Springer-Verlag Berlin Heidelberg

Work Task Analysis and Selection of Interaction Devices

89

2 Conditions One characteristic of VR technology is multi-modal communication between a user and a Virtual Environment. A wide variety of VR interaction technology for perception and manipulation in VE is available for use in practical applications. They differ in performance, information throughput rate or user-friendliness. For example the visual sense can be served by a common monitor, a head-mounted display (HMD) TM or a CAVE -system. The choice depends on the necessary degree of perception and the work tasks in VEs. To balance both factors is the job of the designers of the user interface. VR applications commonly disregard human factors. Most designers have not examined the requirements of psychology and perception [3]. The main requirements for the interaction devices are driven by work tasks for system and application control. The required interactions are so-called necessary interactions in VE at configuration and run time. The task of the designer is to fulfill these necessary requirements with the available interaction devices and to take into consideration the human factors for perception and manipulation processes. Several researches have reviewed the important ergonomic issues in VR. But it is rare to find ergonomic guidelines for designing VEs and especially the configuration of interaction devices. The goal is to gain a maximum in human performance for a given work task. Experiments show, that the terms of interaction, presence and interaction devices are very important issues in human performance [4]. Sheridan stated that the determinants of sense of presence are: the richness of sensory stimuli, simultaneous viewpoint changes with head movements, the ability to handle objects inside the virtual environment and the dynamic behavior of the moveable objects [5], [6]. A breakdown of sense of presence can occurs because of the weight of the HMD, time lags or poor balanced configuration of interaction devices in VE. The result is a reduce in human performance. To gain an optimum in performance a method for configure interaction devices has to take into consideration the factors of perception by given requirements.

3 Method The configuration of interaction devices has to take into consideration the requirements of the work tasks (necessary interaction of the user), the degrees of freedom (DOF) of the interaction devices in VE (possible interactions) and the combinations of interaction devices [7], [8]. The necessary interactions of users will be separated from the specific work tasks of system and application control and classified by the method of Ware-MacKenzie [11]. Here the interactions will be classified by DOFs. Multiple DOF devices such as 3D or 6D trackers and glove input devices are excellent choices for multiple DOF tasks such as viewpoint or object placement by gestures. Similarly, 2D devices are well suited for 2D tasks such as drawing or pointing on a CRT. The same classification of the interaction devices allows a first assignment of interaction devices to the work tasks by DOFs. By the use of grading and comparison the final combination of interaction devices will be chosen by taking the allowed combinations of interaction devices with their mutable

90

Thomas Flaig

combinations into consideration (Fig. 1 shows the methodical steps to find an optimized combination of interaction devices). (1) Classification and grading of necessary interactions for controlling and observing the VR system based on DOFs. (2) Classification and grading of necessary interactions for the application in the VE based on DOFs. (3) Classification and grading of the interaction devices based on DOFs. classification of interactions for system control

classification of interactions for application control

classification and valuation of interaction devices

valuation of mutal influences using interaction devices

methode for selecting interaction devices

Fig. 1. Method for task-analysis and selection of interaction devices

Problems arise, when using interaction devices for tasks that do not have the same number of DOF. For instance, specifying the 3D position and orientation of an object with a mouse is difficult because one can only modify two DOFs at a time. Conversely, using higher DOF devices for lower DOF tasks can also be confusing if the interaction device is not physically constrained to the same DOF as the task. (4) Grading of the combination of interaction devices with input/output functions. Not all combinations of interaction devices are meaningful. For instance the use of a computer mouse in combination with a HMD make no sense. (5) Choice of interaction devices by the use of the grading and comparison. The choice of interaction devices begins with the comparison and grading of input and output devices with the necessary interaction for each DOF. The result is a table of interaction devices with a grading depending on system and application relevant work tasks. Further the tables will be used to build up matrices with the combinations of input and output devices for system and application work tasks. A first rough ranking will show, how use-full each interaction device is for a specific work task separated for system and application work tasks). Because not all combinations are useful, the matrices have to be compared and valued with the possible combinations

Work Task Analysis and Selection of Interaction Devices

91

of interaction devices, described in step 4. The result is a ranking, which interaction device fits best the given system or application requirements. The separate ranking for system and application work tasks could lead to an overlap of two different work tasks to the same human sense. This is sometime very disturbing for a user. For instance, printing system information in the view field of the user can be very disturbing in an immersive environment. Such overlaps can be solved by the mapping one function to another human sense. Here, system information can be presented as acoustic signals or as a speaking voice.

4 Experimental Environment The method will be evaluated in an experimental environment for virtual prototyping [12]. A VE has to be designed to fulfill design, validation and training requirements in safety engineering. Safety systems are very complex and the development a difficult task for design engineers [9]. A lot of rules and laws have to be regarded. Mechanical and electrical know-how is necessary for the technical design, ergonomic and psychological know-how for the control design. Most accidents occur because of errors in the control design. Missing ergonomic aspects lead to high workload and low concentration. The test environment for the virtual prototyping of safety systems consists of a robot working cell with two drilling machines and two separate working stations. A worker has to interact with the system by programming or reprogramming the robot system, by exchanging the storage systems and by the repair of male-functions of the robot system, the drilling machines and the working stations (Fig. 2 shows the geometrical model of the robot working cell with the safety system components).

Fig. 2. Geometric model of the experimental environment

92

Thomas Flaig

5 Results The process of virtual prototyping consists of an order of work tasks with different tools. Each tool is well adapted to the problem-solving process [10]. In a VE the user controls these tools by the use of interaction devices. The use of interaction devices for problem-solving is not constant but differs from the work task and the DOF the device in VE. The method for work task analysis and selection of interaction devices has been tested in the experimental environment. The first step is to define the work tasks for VR-system and application control and observation. Table 1 and 2 show the work task separated into the required DOFs, input and output interactions. The grading of the tasks is a subjective ranking with 5 levels of importance. Table 1. Classification and grading of required interactions for system control and observation in virtual environments based on degrees of freedom

Table 2. Classification and grading of required interactions for application control based on degrees of freedom

This method guarantees, that important work tasks will be mapped to high ranked interaction devices (Table 3). Classification by DOF of the interaction devices is shown in Table 3. The process is the same as for the work tasks before. The grading ranks how suitable the interaction device is for the required immersion and presence in the VE.

96

Thomas Flaig

7. Flaig, T.; Wapler, M.: The Hyper Enterprise. In: Roller, Dieter (Hrsg.): Simulation, Diagnosis and Virtual Reality Applications in the Automotive Industry: Dedicated Conference on ..., Proceedings, Florence, Italy, 3rd-6th June 1996. Crydon, London: Automotive Automation, 1996, S. 391-398. 8. Bauer, W.: Virtual Reality as a Tool for Rapid Prototyping in Work Design. In: Koubek, R. J.: Karwowsky, W. (Editors): Manufacturing Agility and Hybrid Automation I. Proceedings fo the 5th International Conference on Human Aspects of Advanced Manufacturing HAAMAHA’96. Maui. August 7-9 1996. Louisville: IEA Press, 1996, pp 624-627. 9. Flaig, T.; Grefen, K.: Auslegung sicherheitstechnischer Einrichtungen mit Virtueller Realität. In: Fachtagung SA, Simulation und Animation für Planung, Bildung und Präsentaion; 29. Februar - 01. März 1996, Magdeburg, Deutschland. 10. Kuivanan, R.: Methodology for simultaneous robot system safety design. VTT Manufacturing Technology, Technical Research Centre of Finland, ESPOO 1995. 11. Herndon, K., P.; van Dam, A.; Gleicher, M.: The Challenges of 3D Interaction; A CHI´94 Workshop. In: SIGCHI Bulletin Literatur zur Kollisionserkennung (1194) Volume 26, Number 4, S. 36-43. 12. Westkämper, E., Kern, P.: Teilprojekt D1; Virtuelle Realität als Gestaltungs, Evaluierungsund Kooperationswerkzeug. In: Univerität Stuttgart (Hrsg.): Ergebnisbericht 01. 10. 1994 31. 12. 1997; Sonderforschungsbericht 374 Entwicklung und Erprobung innovativer Produkte - Rapid Prototyping. Stuttgart, Germany, 1997. S. 369-424.

Effect of Stereoscopic Viewing on Human Tracking Performance in Dynamic Virtual Environments u

i

1

i ipp

ux1

i ipp

oi

1

n

ig o

u

2

1

Laboratoire de Robotique de Paris UPMC-UVSQ-CNRS 10-12 avenue de l’Europe, 78140 Velizy, France {richard, hareux, coiffet}@robot.uvsq.fr http://www.robot.uvsq.fr 2 CAIP Center Frelinghuyssen Rd, P.O. Box 1390, NJ, USA [email protected] http://www.caip.edu

Abstract. In this paper, we present the results of a human factor study aimed at comparing the effect of stereoscopic versus monoscopic viewing on human tracking performance. The experimental paradigm involved tracking and grasping gestures toward a 3D moving object. This experiment was performed using different frame rates (from 28 frames per second (fps) down to 1 fps). Results show that monoscopic viewing allowed stable performance (grasping completion time) down to 14 fps. Stereoscopic viewing extended this stabilty to 9 fps, and decrease task completion time by 50 % for frame rate under 7 fps. We observed that stereoscopic viewing did not much increase performance for high frame rates.

1

Introduction

i u i y ( ) i m g ing n im po n no og y wi v p o oun influ n on wi v i y o u ining n inm n vi u iz ion n o oi . in ion on wy w in wi wo . um n in ion wi i nvi onm n i p im i y vi uo-m o o n viion i om in n ou o on ning o ion . o in n ov ou o no m y n v g p on m un o im m ov m n oo j in p . o p o u u m ov m n on m u u vi u in o m ion ou po i ion o o j in p . n ing o m oving o j on ou o u in o m ion ou o j ’ v o i y 1 2. n i u nvi onm n ( ) o p iv po i ion n v o i y o o j in p o o v u j op p u i o ion .On o m o im po n p m m y u u p p u i o ion i mpo o u ion num o up im g p uni im . m po J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 97–106, 1998. c Springer-Verlag Berlin Heidelberg 1998 

98

Paul Richard et al.

o u ion i y num o n o on i vi u i y vi u ip y y n (iii) y .

o

w in

m

(i)

n o w o . pm n o m po o u ion o m (ii) g p i up

m i w - on o v i m in num o im g n y om pu g p i y m p on . up o vi u i y vi u ip y y m i m o im po n m po v i n m . o in n i ip y y m m o 0 u n up o z n y m p n 15 on u iv i n i im g o n o i ng ny o m n in n n p n 15 on u iv im g o n w n . u i y m n p on y ou im g p on . m jo o in m ining up om pu ion om p xi y o vi u wo .

z n n i

y in vi u iy y m o wo y p om pu ion n n o . om pu ion y i m jo o in m ining up o ip y y m. n o y i im qui o n o y m o p iv m ov m n m y u qui up ing i p y. n o y n om pu ion y i iv . o in n y m w i 10 zup n 15 0m n o y wou p o u m xim um y o 25 0m w n v u ’ m ov noug o qui yn i o n w im g o vi u wo . o ow g p i up vi u o p i jum p p in 3p y y in ion in v . n um g p w n ion w no j i m po in po ion ( p io- m po in m o ion om qu n n o ion y im p in w n ion u y p n

j pp o m ov in i in ion i n n in im n vi u y m o i g u y p n y u ing p iopo ion im p ion o g y on u ing m o ion ).

y on i p nom non o pp n m o ion i v n w p y o og i inv ig ion o im p i ion o i pp n on inui y o o p o vi u p p ion. n p i u i no j m oving in i p i jum p i p iv i in on inuou m o ion w o i im p y ou p iv vi u i ion o o j ny on in n o im ? v i in in o vi n poin o on u ion vi u i ion o o j in i um n m y on i wou o upy ing i i w y in on inuou m o ion. u pp n y m oving o j n o m oo ing y m ov m n in w i y on inuou y ng i po iion in m wy i o o p on inuou y m oving g .M o ov om on ow in m i n y m p vi u in o m ion m y u on inuou y in on o o m ov m n 3 . ow v in o m ion om n pp n y m oving g i g y in ing p io/ m po in v w n u iv g p n ion wi om poin w y wi no ong o ow g on inuou y u wi m ov om po iion o po i ion in i o . M o ov wi om poin w

Effect of Stereoscopic Viewing on Human Tracking Performance

99

n wi no ong o ow g on inuou y u wi o m ov om po i ion o po i ion. n iv inv ig o up on um n p o m n in m nipu ion u ing m ono opi i p y. u j w op o m wo m o m nipu ion u ing n g onn -2 v n- g -o om vo m nipu o . o m n on w n inv o im qui o o o y. g p i up w ju o 2 1 o p . n iv o v p o m n w n up w ow 1 p 5 .M im ino n i n om p m nipu ion p i i y o i vi ion v u m ono opi vi o i p y in im p o -in ion . y oun m n om p ion im opp m i y up ow 1 p . ow v u i u on y m ono opi i p y . n i in ug g vi u p n ion o p p iv i p y up ow m y n ig ni qu n y o in u ion o ino u i p i y 7. g n n v y o v p ju g m n mo u wi yn m i i p i y up ow n2p . i ug g o opi i p y m y up io o m ono opi i p y ow up . om i u i wi vi ion i p y ow o opi i p y om p o m ono opi on i no p ovi ig ni n v n g in p o m ing om m nipu ion 9 10 11 . M o n u i owv in i op o m n w up io o m ono opi un mo on i ion w i m oun o im p ov m n v i wi vi i i i y n ning o 12 13. u ow vn g o oopi vi ion i p y om p onoun wi in n om p xi y n o j vi i i i y. M ono opi n o opi g p i ip y w om p y m p oy ing - xi -m nu ing 1 15 . u w on i n wi p viou vi ion i p y u in i ing o opi g p i i p y i g ny p m i up io ing p o m n w i m ono opi i p y ow quiv n p o m n w n y w n wi qu vi u n n m n p u u p p iv g i .

2

Experiment

n i xp im n w inv ig o o opi vi wing on um n p o m n in ing / qui i ion o 3- m oving vi u o j o i n g p i up . xp im n y m i u i i u in ( ig .1). oo y oup i n - v i u ow o i i u ion o om pu ion o im u ion on wo wo ion . On un wo ion i i o ing n i ing g ov n m in ining in o m ion on o j in vi u wo . n 75 5 wo ion i i o g p i n ing n i p y m o 2 p . 75 5 m ono opi i p y (12 0x102 pix o u ion) w p y p i o m oni o wi 120 z

100

Paul Richard et al. ETHERNET

HP 755 CRX48Z

Sun 4/380 RS232

Monoscopic display

Speakers

Stereoscopic display

stereo glasses

VPL Interface

Pohlemus sensor Emitter

VPL DataGlove

USER INTERFACE

Fig. 1.

xp im n

y

m

i

u

( qui o u o og ). ov TM i u TM n o w i i m oun on u n g u 1 . o mu o g ov n m i 3- w i po i ion n o i n ion 17. u y TM m o )i ou 2.5 m m n ion n o o m u TM n o ( o 1o in o ion. m om

2.1

Visual Display

n o opi vi ion ou in u wo ig y i n vi w o wo ( n y o y ) in o on o n 3- im g . n m ono opi g p i ip y ing 2- im g i p n o o y . um n in in p 2- m ono opi im g 3- im g y u i izing p u (p p iv o u ion m o ion p x n o on). n o opi g p i i p y wo p im g g n wi ig y i n vi wpoin p n o o y . xi i n niqu o p n wo p im g o o y 15 . Ou xp im n u ing n wi n ing ( im -m u ip x ) n ig vi w n u g . n ing n ig im g y n oniz wi op ning n o ing ion o u oug n in ( ) on o ( ig . 2). 2.2 p

Virtual Environment p

xp im n vi u nvi onm n ( ig . 3) i iv vi u n n o m

om po (

o w g ). i u

in p

Effect of Stereoscopic Viewing on Human Tracking Performance

101

CRT Data for right eye image

image signal

Data for left eye image

Switching of left and right images

Dl

Dr control signal of shutters

L

Fig. 2. im g

ino u

R

vi wing y

m u ing

im

ip y o

n

ig

u

u v i p oj ion o o j “ ow ” n g i g oun n in o u in nvi onm n o im p ov p p p ion. vo um o vi u n i m o 129 po y g on n vi u oom i ou 1 m 3 . m in m i um n n . i m o 72po y g on . o o j p og m m in o “ i p y i ” u ing p i i y 1 ou u ing n ou u ing wi on ig ou .

2.3

Protocol

i w un o ow g pp in vi u oom wi m v o i y (25 m p on ). u j w n in u o m oving g n g i qui y po i . o g iv n xp im n ion u j p om wi 15 on p io w n i . ion on i o 10 i . u j w o y -vi u ip y i n w pp oxim y 0 m. n o opi vi wing m o og w wo n. o i ig n n qui i ion w on . ov TM w u j p o m on i ion o m i i iz m v wi im i o vi u oom n g ping (g u og ni ion) niqu ( ig . ). i in p p iv w w up im po on vi u w . o oy oom i um in ion w p ow in o o in on w n ip y n im m i u oun ing . u ing m oving g w in o u in vi u oom w y m o ion u wi n om om p ion im w o i . j o y in 5 o on .

102

Paul Richard et al.

Fig. 3.

xp im n

vi u

nvi onm n

wo g oup o 2 u j ( 2m n 2 m )w u o xp im n . g oup p o m ing u ing m ono opi ip y w i on g oup p o m m u ing o opi i p y. g oup w ivi in u g oup ( 1 o )o 7 u j . 1p i p u ing g m 1 2 p 2 2 1 p 3 3 7 p 3p n 5 5 2p 1 p .

3

Results

om p ion im w u m u o p o m n . o v u o g p i up w o om p ion ov i n ov u j o o vi wing on i ion (m ono opi n o opi ). i ow om p i on w n vi wing on i ion o up n o i . u ow p o m n w om 2 p own o 1 p w n u ing m ono opi vi u i p y ( ig . 5 ) n om 2 own o 7 p w n u ing o opi i p y. o m n i no i i y i n w n 2 p n 1 o o vi wing on i ion ( ig . ). ow v o m o inp o m n y 5 0% o up ow n7 p . o pp o m o p ovi mo i u ( 5 0% m o upw n 1 p n 7 p n 30% m o up w n 2 p n 7 p ). o v ning p o op 9 i o up o m ono opi vi wing ( ig . 7). o o opi i p y i v y i ning o up ov 7 p . o ow up ning p o on inu up o 10 i ( ig . ).

Effect of Stereoscopic Viewing on Human Tracking Performance Virtual ball

VPL DataGlove

Polhemus sensor Stereo glasses

Emitter

Operator

Virtual workspacet

Fig. 4.

4

103

xp im n

on g u

ion.

Discussion o

v o up qu o ig n 1 p u j m ov n on inuou y in p w i o up n o qu o y x i m ov m n o o vi wing on i ion . n gy u w o m ov n om po i ion o po iion. i ug g p io- m po in po ion w im po i op o m . Mo qui o m xim um up m po i p io- m po in po ion o 3 m oving g . o opi vi wing p ov up io o m ono opi vi wing o ow up . on i p o y o ow up op o w p ovi wi i o p i n i im g n p p i m o ion o g on i g oun . u p in o m ion i y vi oug m o ion o o j in vi u n w no v i in i . u on m o p viou y o in y g n n v y w oo v mo u p ju g m n wi yn m i i p i y ow 2 p . o up p io- m po in po ion ow u o o j m o ion o p p p ion. u o in om i xp im n ou om p m n y ing in o on i ion p i u on i ion w o invo ving g ping g u o p o m wi om p i m ov m n . ow v ou u n u g ui in o inv ig ion n ig n o im p m nu in ion wi vi u o j . i 3 p


Fig. 5. Mean completion time and standard deviation for monoscopic and stereoscopic viewing versus frame rate (1, 2, 3, 7, 14, and 28 fps).

Fig. 6. Difference in task completion time between monoscopic and stereoscopic viewing modes versus frame rate.

5 Further Work

A more exhaustive study of human manual interaction with virtual moving objects is needed. We plan to record the arm movements of the operators under the different viewing conditions and to include a kinematic analysis of hand trajectories in our measures of operator performance.


Fig. 7. Mean task completion time versus trials using monoscopic viewing


Fig. 8. Mean task completion time versus trials using stereoscopic viewing

6 Conclusion

We have investigated the effect of stereoscopic viewing on human tracking/grasping movements toward a 3-D moving virtual object for different graphics update rates. Performance was found not to be statistically different between 28 fps and 14 fps for both monoscopic and stereoscopic viewing. However, stereoscopic viewing increased performance by about 50% for frame rates lower than 7 fps. This suggests that stereoscopic displays are most useful at low frame rates.


References
1. A.P. Georgopoulos, J. Kalaska, and J. Massey, "Spatial trajectories and reaction times of aimed movements: effects of practice, uncertainty, and change in target location", Journal of Neurophysiology, 46:725-743, 1981.
2. C.G. Atkeson and J. Hollerbach, "Kinematic features of unrestrained vertical arm movements", Journal of Neuroscience, 5:2318-2330, 1985.
3. J.A. Thomson, "How do we use visual information to control locomotion?", Trends in Neurosciences, 3:247-250, 1980.
4. J.A. Thomson, "Is continual visual monitoring necessary in visually guided locomotion?", Journal of Experimental Psychology: Human Perception and Performance, 9:427-443, 1983.
5. V. Ranadive, "Video Resolution, Frame-rate, and Grayscale Tradeoffs under Limited Bandwidth for Undersea Teleoperation", SM Thesis, M.I.T., September 1979.
6. M. Massimino and T. Sheridan, "Variable force and visual feedback effects on teleoperator man/machine performance", Proceedings of the NASA Conference on Space Telerobotics, Pasadena, CA, Vol. 1, pp. 89-98, January 1989.
7. Y.-Y. Yeh and L.D. Silverstein, "Spatial judgments with monoscopic and stereoscopic presentation of perspective displays", Human Factors, 34(5):583-600, 1992.
8. D. Regan and K.I. Beverley, "Some dynamic features of depth perception", Vision Research, 13:2369-2379, 1973.
9. W.N. Kama and R. DuMars, "Remote Viewing: A Comparison of Direct Viewing, 2-D and 3-D Television", AMRL-TDR-64-15, Wright-Patterson Air Force Base, 1964.
10. P. Rupert and D. Wurzburg, "Illumination and Television Considerations in Teleoperator Systems", in Heer, E. (Ed.), Remotely Manned Systems, California Institute of Technology, pp. 219-228, 1973.
11. L.A. Freedman, W. Crooks, and P. Coan, "TV Requirements for Manipulation in Space", Mechanism and Machine Theory, 12(5):425-438, 1977.
12. R.L. Pepper, D. Smith, and R. Cole, "Stereo TV Improves Operator Performance under Degraded Visibility Conditions", Optics and Engineering, 20(4):579-585, 1981.
13. R.L. Pepper, R. Cole, E. Spain, and J. Sigurdson, "Research Issues Involved in Applying Stereoscopic Television to Remotely Operated Vehicles", Proceedings of SPIE, 402 - Three-Dimensional Imaging, pp. 170-173, 1983.
14. S.R. Ellis, W. Kim, M. Tyler, M. McGreevy, and L. Stark, "Visual Enhancement for Telerobotics: Perspective Parameters", Proceedings of the IEEE International Conference on Cybernetics and Society, pp. 815-818, 1985.
15. W.S. Kim, S. Ellis, B. Hannaford, M. Tyler, and L. Stark, "A Quantitative Evaluation of Perspective and Stereoscopic Display in Three-Axis Manual Tracking Tasks", IEEE Transactions on Systems, Man and Cybernetics, SMC-17(1):16-72, 1987.
16. VPL Research Inc., "DataGlove Model 2 Operating Manual", Redwood City, CA, 1987.
17. Polhemus Navigation Science Division, "3Space Isotrak User's Manual", Colchester, VT, 1987.
18. Hewlett-Packard, "Starbase Display List Programmer's Manual", 1st Edition, January 1991.

Interactive Movie: A Virtual World with Narratives

Ryohei Nakatsu, Naoko Tosa, and Takeshi Ochi

ATR Media Integration & Communications Research Laboratories, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan
{nakatsu,tosa,ochi}@mic.atr.co.jp
http://www.mic.atr.co.jp/

Abstract. Interactive movies are a new type of movie that adds interaction capabilities to movies. With interactive movies, theater-goers can enjoy taking part in the development of a story and interacting with the characters in the story. This paper first explains the concept of interactive movies and briefly describes the prototype system we developed. It then describes the construction of the second system we are currently developing, as well as several improvements incorporated in it.

1 Introduction

Ever since the Lumière brothers established cinematography at the end of the 19th century [1], motion pictures have undergone various advances in both technology and content. Today, motion pictures, or movies, have established themselves as a composite art form that serves a wide range of cultural needs, extending from fine art to mass entertainment. However, conventional movies unilaterally present predetermined scenes and story settings, so audiences take no part in them and make no choices in story development. On the other hand, the use of interaction technology makes it possible for the viewer to "become" the main character in a movie and enjoy this new experience. We believe that this approach would allow producers to explore the possibilities of a new class of movies. From this viewpoint, we have been conducting research on interactive movie production by applying interaction technology to conventional movie-making techniques. As an initial step in creating this new type of movie, we produced a prototype system [2]. Based on this system, we are currently developing a second prototype system with many improvements. This paper briefly describes the first prototype system and outlines its problems and required improvements. The paper also introduces the configuration of the second prototype system, which is now being developed by incorporating the described improvements.


... are controlled by the scene manager and the interaction manager. These manage various input and output functions such as speech recognition, gesture recognition, output of visual images, and output of sounds.
(2) Hardware: Figure 1(b) shows the hardware configuration. The system consists of an image output subsystem, a speech recognition subsystem, a gesture recognition subsystem, and a sound output subsystem. In the image output subsystem, a high-speed graphics-generating workstation is used for the visual image output. The speech recognition operations are executed by a single workstation, and the gesture recognition function is also executed by a workstation. The sound output subsystem consists of several devices; background music, sound effects, and character dialogues are produced simultaneously by this subsystem.

2.3 Evaluation and Problems

We tested the first prototype system with many people during the half-year period following the completion of system development. Based on their comments, we evaluated the system and identified issues on which to focus our research, as summarized below.
(1) Number of participants: The basic concept of the first system was a story with just one player acting the role of the hero. However, the first system lacked the multi-user functions needed for the story to take place in cyberspace. Cyberspace acts over a network and requires the story to develop not just from one player but from the interactions of several players participating at the same time.
(2) Frequency of interaction: Interaction in the first system was generally limited to branching points in the story, so the story progressed linearly along a predetermined course, like a movie, except at these branching points. There are certain advantages to this technique, such as being able to reuse the story development techniques and expertise accumulated by skilled cinematographers. However, the disadvantage of using them in the same way as for conventional movies is that the players seemed to end up as spectators who found it difficult to participate interactively at the points where interaction is clearly required. The limited opportunities for interaction had other weak effects on the players, such as having little to distinguish the experience from watching a movie and having a very limited sense of involvement.

3 Description of Second System

3.1 Improvements

The following points were used to improve the second system, as described below.
(1) System for multiple players: Our initial effort to develop a system for multiple players allows two players to participate in cyberspace in the development of a story. The ultimate goal


was to create a multi-player system operating across a network, but the first step in the present study was the development of a prototype multi-player system consisting of two systems connected by a LAN.
(2) Introduction of interaction at any time: To increase the frequency of interaction between the participants and the system, we devised a way for players to interact with cyberspace residents at any point in time. Basically, these are impromptu interactions, called story-unconscious interactions, that occur between the players and characters and generally do not affect story development. On the other hand, there are sometimes interactions that do affect story development. This kind of interaction, called story-conscious interaction, occurs at branch points in the story, and the results of such interactions determine the future development of the story.
(3) Other improvements: Emotion recognition: To realize interaction at any time, an emotion recognition capability was introduced. When players make spontaneous utterances, the characters react with their own utterances and animations according to the emotion recognition result. Motion capture: We introduced a motion capture system based on magnetic sensors. There are two major reasons for such a system. One is to show avatars of the players on screen, thus giving the players the feeling that they are really active participants in the system. The other is to improve gesture recognition: the first system's gesture recognition, based on camera images, was sometimes ineffective due to low light, so we chose to use motion capture data for gesture recognition instead.

3.2 Software System Structure

Figure 2(a) shows the structure of the software used in the second system.

Fig. 2. (a) Software configuration of the second system and (b) scene transition network.


(1) System structure concept: While the first system stressed story development, the second system had to achieve a good balance between story development and impromptu interaction by incorporating the concept of interaction at any time. This required building a distributed control system instead of a top-down system structure. There is a variety of architectures available for distributed control systems, but we chose to use an action selection network [3] that sends and receives activation levels among multiple nodes. These levels activate nodes and trigger the processes associated with the nodes at the point where the activation level exceeds a threshold.
(2) Script manager: The role of the script manager is to control transitions between scenes, just as it did in the first system. An interactive story consists of various kinds of scenes and transitions among scenes. The functions of the script manager are to define the elements of each scene and to control scene transitions, each single scene corresponding to one node in a finite automaton (Fig. 2(b)). The transition chosen from several possible subsequent scenes is decided based on the results sent from the scene manager.
(3) Scene manager: The scene manager controls the scene script as well as the progress of the story in each scene. An action related to the progress of the story in a scene is called an event, and event transitions are controlled by the scene manager. The events of each scene consist of scene images, background music, sound effects, character animation and character speech, and player-character interaction. The script for each scene is prestored along a time line in an event network, and the scene manager generates each scene based on data received from the script manager via the script. The timing of the transition from one event to the next was controlled by the scene manager in the first system, but absolute time cannot be controlled in the second system because it incorporates the concept of interaction at any time. However, relative time and time order can be controlled in the second system, so the action selection network was applied here as well. The following describes how this works (a sketch of the mechanism follows the list):

1. Activation levels are sent and exchanged among events, as well as from external events.
2. An event activates when its cumulative activation level exceeds the threshold.
3. On activation of an event, a predetermined action corresponding to the event occurs; at the same time, activation levels are sent to other events, and the activation level of the activating event is reset. The order of events can thus be preset, and variation as well as ambiguity can be introduced into the order of events by predetermining the directions in which activation levels are sent and the strengths of those activation levels.
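The mechanism can be made concrete with a short sketch. The following Python fragment is not the authors' implementation; the class names, weights, and threshold are illustrative assumptions only. It shows how preset link directions and strengths order the firing of events while leaving room for variation.

```python
# Hedged sketch of an activation-spreading event network; names, weights,
# and thresholds are invented for illustration.

class Event:
    def __init__(self, name, action, threshold=1.0):
        self.name = name
        self.action = action        # callable fired when the event activates
        self.level = 0.0            # cumulative activation level
        self.threshold = threshold
        self.links = []             # (target event, weight) pairs

    def connect(self, target, weight):
        # The direction and strength of links preset the (possibly
        # ambiguous) ordering of events.
        self.links.append((target, weight))

    def receive(self, amount):
        self.level += amount
        if self.level >= self.threshold:
            self.action()                  # predetermined action occurs
            for target, weight in self.links:
                target.receive(weight)     # activation sent to other events
            self.level = 0.0               # activating event is reset

music = Event("background_music", lambda: print("start music"))
line = Event("character_line", lambda: print("speak line"))
music.connect(line, 1.0)
music.receive(1.0)   # an external event triggers the chain
```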


(4) Interaction manager: The interaction manager is the most critical component for achieving interaction at any time. Figure 3 shows the structure of the interaction manager. The basis of interaction at any time is a structure that allots each character (including the players' avatars) an emotional state; the interaction input from a player, along with the interactions between the characters, determines the emotional state as well as the response to that emotional state for each character. Some leeway is given to how a response is expressed, depending on the character's personality and circumstances. The interaction manager is designed based on the concepts outlined below.

Fig. 3. Structure of the interaction manager

1) Defining an emotional state: The state and intensity of player i's (i = 1, 2, ...) emotion at time T are defined as

Ep(i, T), sp(i, T),   (1)

where sp(i, T) is 0 or 1 (0 indicates no input and 1 indicates an input). Similarly, the state and intensity of an object i's (i = 1, 2, ...) emotion at time T are defined as

Eo(i, T), so(i, T).   (2)

2) Determining the emotional state of an object: For the sake of simplicity, the emotional state of an object is determined by the emotional states of the players when a player interaction results from emotion recognition:

{Ep(i, T)} → {Eo(j, T+1)}.   (3)

Activation levels are sent to the object when emotion recognition results are input:

sp(i, T) → sp(i, j, T),   (4)


where sp(i, j, T) is the activation level sent to object j when the emotion of player i is recognized. The activation level of object j is the total of all activation levels received by the object:

so(j, T+1) = Σi sp(i, j, T).   (5)

3) Exhibiting a reaction: An object that exceeds the activation threshold performs a reaction Ao(i, T) based on its emotional state. More specifically, a reaction is a character's movement and speech responding to the emotional state of the player; at the same time, activation levels so(i, j, T) are sent to the other objects:

if so(i, T) > THi then Eo(i, T) → Ao(i, T), Eo(i, T) → so(i, j, T), so(j, T+1) = Σi so(i, j, T).   (6)

This mechanism creates interactions between objects and enables much richer responses than a simple interaction with a one-to-one correspondence between emotion recognition results and object reactions.
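A minimal sketch of this bookkeeping, under the assumption that the activation weights sp(i, j, T) are stored in a simple table and that reactions are plain callables, might look as follows; the threshold and weight values are invented.

```python
# Sketch of equations (4)-(6); weights and the threshold TH are assumptions.

TH = 1.0   # activation threshold T_Hi

def step(objects, recognized_players, weights):
    """objects: dict name -> {'level': float, 'react': callable}
    recognized_players: player indices i with sp(i, T) = 1
    weights[(i, name)]: activation sp(i, j, T) sent to object `name`."""
    # (5): each object accumulates the total of the received activations
    for i in recognized_players:
        for name, obj in objects.items():
            obj['level'] += weights.get((i, name), 0.0)
    # (6): objects above threshold react and send activation onward
    fired = [name for name, obj in objects.items() if obj['level'] > TH]
    for name in fired:
        objects[name]['react']()                 # movement/speech Ao(i, T)
        for other in objects:
            if other != name:
                objects[other]['level'] += 0.2   # so(i, j, T), arbitrary
        objects[name]['level'] = 0.0
```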

3.3 Hardware System Structure

Figure 4 shows the second system's hardware, which is composed of image output, voice and emotion recognition, gesture recognition, and sound output subsystems.

Fig. 4. Hardware configuration of the second system.


(1) Image output subsystem: Two workstations (an Onyx InfiniteReality and an Indigo2 Impact) capable of generating computer graphics at high speed are used to output images. The Onyx workstation is used to run the script manager, scene manager, interaction manager, and all image output software. Character images are prestored on the workstations in the form of computer graphics animation data in order to generate computer graphics in real time. Background computer graphics images are also stored as digital data, so background images can be generated in real time, while photographic background images of real scenery are stored on an external laser disc. The multiple character computer graphics, background computer graphics, and background photographic images are processed simultaneously through video boards on both the Onyx and Indigo2 workstations. Computer graphics are displayed in 3-D to form realistic images, and a curved screen is used to envelop the player with images and immerse him or her in the interactive movie world. Images for the left and right eyes, previously created on the workstations, are integrated by a stereoscopic vision controller and projected onto the curved screen by two projectors. On the Indigo2 end, however, images are output on an ordinary large screen display without stereoscopic vision because of processing speed.
(2) Voice and emotion recognition subsystem: Voices and emotions are recognized with two workstations (Sun 20s) that run the voice and emotion recognition handlers. Voice input arrives via a microphone; the voice is converted from analog to digital by the sound board built into the Sun workstation, and recognition software on the workstation is used to recognize voices and emotions. For the recognition of meaning, a speaker-independent speech recognition algorithm based on HMMs is adopted [4]. Emotion recognition is achieved using a neural network algorithm. Each workstation processes the voice input from one player.
(3) Gesture recognition subsystem: Gestures are recognized with two Indy workstations that run the gesture recognition handlers. Each workstation takes the output from the magnetic sensors attached to a player and uses that data both for controlling the avatar and for recognizing gestures.
(4) Sound output subsystem: The sound output subsystem uses several personal computers because background music, sound effects, and the speech of each character must be output simultaneously. Sound effects and character speech are stored as digital data that is converted from digital to analog as needed, and multiple personal computers are used to enable simultaneous digital-to-analog conversion for multiple channels. Background music is stored on an external compact disc player whose output is also controlled by the personal computers. The multiple channel sound outputs are mixed and output with a mixer (Yamaha 02R) that can be controlled by computer.


4 Example of Interactive Story Production

4.1 An Interactive Story

We have produced an interactive story based on the system described above. We selected "Romeo and Juliet" by Shakespeare as the base story, for the following reasons:
1. The two main characters in the story, one the hero and the other the heroine, supply a good example for multi-person participation.
2. "Romeo and Juliet" is a very well known story, and people have a strong desire to act out the role of hero or heroine. Therefore, it is expected that people can easily get involved in the movie world and experience the story.
The main plot of the story is as follows. After their tragic suicide, the lovers' souls are sent to Hades, where they find that they have totally lost their memory. They then start a journey to discover who they are and what their relationship is. Through various kinds of experiences, and with the help and guidance of the characters in Hades, they gradually find themselves again and finally go back to the real world.

4.2 Interaction

There are two participants: one plays the role of Romeo and the other Juliet. The two subsystems are located in two separate rooms and connected by a LAN. Each participant stands in front of the screen of his or her respective system, wearing specially designed clothes to which the magnetic sensors and a microphone are attached. In the case of Romeo, the participant also wears 3-D LC shutter glasses and can enjoy 3-D scenes. The participants' avatars appear on the screen and move according to their actions, and the participants can also communicate by voice. Basically, the system controls the progress of the story with character animations and character dialogues, and the story moves on depending on the voice and gesture reactions of the participants. Furthermore, as described before, interaction is possible at any time: when the participants speak, the characters react according to the emotion recognition results. Consequently, depending on the frequency of the participants' interaction, this system can operate anywhere between story-dominant operation and impromptu-interaction-dominant operation. Figure 5 illustrates typical interactions between the participants and the system.

5 Conclusions

In this paper we first explained the concept of interactive movies and briefly described our first prototype system. Based on an evaluation of this system, we identified several problems that needed to be improved: one is the lack of frequent interactions, and the other is single-person participation. To overcome these deficiencies, we are developing a second system.


Fig. 5. Examples of interaction between participants and the system: (a) "Romeo" controls his avatar; (b) "Romeo" tries to touch an object in a Japanese curiosity shop.

We explained the two significant improvements incorporated into the second system, interaction at any time and two-person participation, through the software and hardware configurations of the second system, while emphasizing these improvements. Finally, we illustrated the operation of our second system with the example of our interactive production of "Romeo and Juliet".

References
1. C.W. Ceram, "Eine Archäologie des Kinos", Rowohlt Verlag (1965).
2. R. Nakatsu and N. Tosa, "Toward the Realization of Interactive Movies - Inter Communication Theater: Concept and System", Proceedings of the International Conference on Multimedia Computing and Systems '97, pp. 71-77 (1997).
3. P. Maes, "How to do the right thing", Connection Science, Vol. 1, No. 3, pp. 291-323 (1989).
4. T. Shimizu et al., "Spontaneous Dialogue Speech Recognition using Cross-word Context Constrained Word Graphs", Proceedings of ICASSP '96, Vol. 1, pp. 145-148 (April 1996).
5. N. Tosa and R. Nakatsu, "Life-like Communication Agent - Emotion Sensing Character 'MIC' and Feeling Session Character 'MUSE'", Proceedings of the International Conference on Multimedia Computing and Systems, pp. 12-19 (1996).

Real-Image-Based Virtual Studio

Jong-Il Park and Seiki Inoue

ATR Media Integration & Communications Research Laboratories 2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan {pji,sinoue}@mic.atr.co.jp http://www.mic.atr.co.jp/

Abstract. In this paper, we propose a real-image-based virtual studio system where a virtual environment is generated by an image-based rendering technique and then a studio scene is put in the environment based on a 3D compositing technique. The system is implemented and the validity of the system is confirmed through experiments.

1 Introduction

Methods of constructing and displaying virtual environments have been extensively explored in commercial and official fields. A key technology in this context is compositing various image and video sources into one image/video. There has been considerable work on juxtaposing real images and computer graphics (CG) in the context of virtual reality. Krueger's VIDEOPLACE [4] focused on the interaction between humans and computer-generated worlds, while Hayashi et al.'s virtual studio system [1] emphasized image quality. The emphasis differs according to the purpose: the former aims at providing users with an interactively enjoyable virtual world, while the latter pursues high-quality integration of multiple image sources in a virtual world. Aiming at both natural human interaction with virtual environments and the generation of multiple-image content, we have been developing an Image Expression Room (IE Room) [2]. In the IE Room, users can control and observe themselves located in a virtual environment and can handle various images easily with a high degree of freedom. In this paper, we focus on pursuing reality in the virtual world. By constructing and displaying a virtual environment and then inserting a studio scene into that environment, we get a real studio scene acting in a virtual environment. The virtual environment should look as real as possible; simply placing photos or videos of a real background taken by a fixed camera, which is the approach taken in [1], would not be enough once the camera moves. We therefore consider multiple real-image sources and propose a real-image-based video compositing system, called a Real-Image-Based Virtual Studio in this paper.


To generate an arbitrary view using real images, the viewpoint, orientation, and conditions of the virtual camera looking at the virtualized environment should be adjusted to those of the studio camera. An image-based rendering technique is employed to achieve this. The approach in this paper is to capture the environment scene with multiple cameras; an arbitrary-view image is then generated using the images and their depth maps. Since the depth map of the arbitrary view is simultaneously generated, the method is well suited for Z-key-based 3-D video composition. When displayed on the large screen in front of the IE Room users, the generated scene can be handled easily within the virtual environment, which enables a natural and useful image expression.

2 Image Expression Room

In this section, we briefly introduce the concept of the IE Room. Figure 1 shows its configuration.

Fig. 1. The Image Expression Room: blue matte, large half mirror, screen, user, secondary cameras, main camera, video analysis, COMI&CS, database.

The major components of the IE Room include a studio scene with a blue matte, a large half mirror placed in front of the screen, and COMI&CS, a computer-organized media integration and communication system by which media placed in the virtual scene can be expressed and handled. The user in the studio sees the screen through the large half mirror; in this way, interaction with the output image is achieved without the user turning away from the main camera, which is a convenient and natural way of interaction for the user. The blue background region in the image of the main camera is replaced by chroma keying, and the position of the user is analyzed using the secondary cameras. Details of the components and a preliminary analysis can be found in [2].

3 Real-Image-Based Virtual Studio

3.1 3D Virtual Environments

An image-based rendering technique is employed for creating virtual environments. We propose to build a virtual environment using multi-view real images and their disparity maps. Cameras are located densely, typically in the horizontal and vertical directions, and dense depth maps are computed based on an extension of a multi-view stereo matching technique designed to overcome the occlusion problem [7]. We can then generate an arbitrary view using only the available information, i.e., the multi-view images and their depth maps, whenever any place in the scene is to be visualized. In this way we can construct the virtual environment.

Arbitrary View Generation

We generate an arbitrary view and its depth map, given the multi-view images and each image's depth map. Our method consists of two steps. Suppose we have a virtual camera moved to an arbitrary position and orientation. The first step copes with pure translation of the viewpoint. This step depends on the depth of each pixel; thus we call it 3-D processing. It consists of rendering the depth maps into that of the new view position and then mapping the colors of the images to the proper positions accordingly. A forward warping technique with up-sampling is used. Where multiple pixels map onto the same target pixel, we choose the smallest depth value, assuming that all objects are opaque. Color (or intensity for gray images) information is simultaneously mapped using the same transformation. Holes in uncovered areas are additionally interpolated; we make use of the background depth value in interpolating the depth of uncovered areas, and the interpolated depth maps both guide the guessing of proper color information and provide the output depth map for 3-D video composition. The second step copes with zooming and pure rotation of the virtual camera. This is independent of the depth of each pixel and thus involves only 2-D processing. Notice that after the first step we have already obtained the major part of the proper depth map for 3-D composition. Details of the arbitrary view generation can be found in [8]. A sketch of the forward warping step follows.
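As an illustration of the 3-D step, the fragment below forward-warps a color image and its depth map into a translated or rotated view, resolving collisions with the smallest depth. The pinhole camera model and the per-pixel loop are simplifying assumptions for exposition, not the system's actual implementation.

```python
import numpy as np

# Sketch of forward depth-image warping; K is a 3x3 intrinsics matrix and
# (R, t) the pose of the new view relative to the old one (assumptions).

def forward_warp(color, depth, K, R, t):
    H, W = depth.shape
    out = np.zeros_like(color)
    new_depth = np.full((H, W), np.inf)   # keeps the smallest depth per pixel
    Kinv = np.linalg.inv(K)
    for v in range(H):
        for u in range(W):
            p = depth[v, u] * (Kinv @ np.array([u, v, 1.0]))  # 3-D point
            q = K @ (R @ p + t)                               # reprojection
            if q[2] <= 0:
                continue
            u2 = int(round(q[0] / q[2]))
            v2 = int(round(q[1] / q[2]))
            if 0 <= u2 < W and 0 <= v2 < H and q[2] < new_depth[v2, u2]:
                new_depth[v2, u2] = q[2]      # depth map of the new view
                out[v2, u2] = color[v, u]     # color mapped with the same map
    return out, new_depth   # holes remain and are filled by interpolation
```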

3.3 3D Video Composition

Video composition in the IE Room can be divided into three categories based on the level of exploitation of 3-D properties. The first is 2-D composition, where multiple image sources are simply juxtaposed or layered. The second is 2.5-D composition, based on a layered representation of image sequences [9], which is beyond 2-D but below 3-D: each region of the images can be handled separately and independently. The third is 3-D composition, based on full 3-D information about the scene, which is what we focus on in this paper. Suppose we want to merge multiple image sources into one image. If all the image sources are taken by fixed video cameras, we can get the merged image by a simple Z-key method [3]: the depth values of the pixels of the multiple image sources are compared and, treating larger disparity (equivalently, smaller depth) as the nearer point at each pixel position, the pixel value of the nearest source is selected for the composed image. To handle camera motion, on the other hand, we should adjust the image sources according to that motion [6]. The main camera in the IE Room is allowed to move and to change focal length. Thus, when we put the studio scene taken by a moving camera into the virtual environment, the corresponding view of the virtual environment should be altered accordingly. Moreover, to utilize the Z-key method, the corresponding depth maps should also be prepared; this was explained in Sect. 3.2. A minimal sketch of the Z-key selection is given below.
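A minimal sketch of the Z-key selection, assuming two registered RGB and depth image pairs as NumPy arrays:

```python
import numpy as np

# Z-key composition sketch: at every pixel the nearer source wins.
# The "smaller depth = nearer" convention and array shapes are assumptions.

def z_key(color_a, depth_a, color_b, depth_b):
    near_a = depth_a <= depth_b                        # True where A is nearer
    out = np.where(near_a[..., None], color_a, color_b)
    out_depth = np.minimum(depth_a, depth_b)
    return out, out_depth                              # composite + its depth
```

At each pixel the nearer source wins and the composite depth map comes for free, which is what makes the depth maps produced in Sect. 3.2 directly usable here.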

3.4 Implementation

The overall block diagram of the real-image-based virtual studio in the IE Room is shown in Fig. 2.

Fig. 2. Conceptual flow diagram of the real-image-based virtual studio: multiview images, depth extraction, arbitrary view generation (RGB image and depth map), magnetic sensor (position/orientation of the main camera), chroma keying of the main camera, position analysis and chroma keying of the secondary camera, and Z-key synthesis of the output image.

The position and orientation of the main camera are obtained by a magnetic sensor and transmitted to the arbitrary-view generating unit, which produces a virtual view having the position and orientation of the main camera. Our current view generation is implemented purely in software and cannot produce arbitrary views at video rate. Thus we prepare arbitrary views and their depth maps a priori for a number of sampled viewpoints within the volume of interest and store them in memory, similar in spirit to the approach of Movie-Maps [5]. When we explore the virtual environment, the nearest stored virtual view position is calculated, and the corresponding view and its depth map are fed to the Z-key synthesis unit through a geometrical transformation unit, which executes image rotation and zooming in real time. In the current implementation, dynamic virtual environments, i.e., ones containing moving objects, cannot be handled; a possible solution is to make the arbitrary-view generation process run at video rate, which is under extensive investigation. The position of the user on the blue-background studio floor is estimated by a secondary camera located on top of the screen, exploiting the a priori knowledge that users stand on a flat floor. The chroma-keyed image of the user in the studio is given a constant depth and sent to the Z-key synthesis unit. Since the image of the user is given a flat depth (each person is approximated by a flat panel), exact 3-D composition is not achieved in the current implementation; the approximation is valid only when the relief of the person is small compared with the distances at play in the studio [3]. For our applications, however, the flat-panel approximation gives sufficiently good results. Another limitation of the current implementation is the difficulty of handling several users at different depths; to handle depth properly, we should introduce a depth analysis unit, possibly with more secondary cameras, which is currently being investigated. We show experimental results in Fig. 3, where the viewing position is changed interactively, moving right (actually the camera was moved right in the IE Room) and slightly zooming in. Quite natural-looking composition results were obtained, and when we observe the result in real time, we feel as if the user were actually in the virtual environment. Even though not enough experiments have been done yet, we are convinced that the arbitrary-view generation method can generate a variety of realistic virtual environments and that 3-D composition with a moving camera can produce natural-looking composite images.

4 Concluding Remarks

In this paper, we proposed a real-image-based virtual studio system where a virtual environment is generated by an image-based rendering technique and a studio scene is then put in the environment based on a 3-D composition technique.


Fig. 3. Experimental results of 3-D composition with camera operation.

A prototype system has been implemented, and the validity of the approach was confirmed through experiments. The current implementation cannot handle dynamic virtual environments and cannot generate arbitrary views at video rate. A further limitation is that full 3-D information on the objects in the studio is not available, which can be overcome only by introducing depth analysis of the studio scene. We are also considering new scenarios in virtual studio production, such as moving and editing shots in the virtual set by controlling camera motion.

Acknowledgment: The authors are grateful for the valuable comments they received.

References
1. M. Hayashi et al., "Desktop virtual studio system," IEEE Trans. Broadcasting, vol. 42, no. 3, pp. 278-284, Sept. 1996.
2. S. Inoue et al., "An Image Expression Room," Proc. Int'l Conf. on Virtual Systems and Multimedia '97, pp. 178-186, Geneva, Switzerland, Sept. 1997.
3. T. Kanade et al., "A stereo machine for video-rate dense depth mapping and its new applications," Proc. IEEE CVPR '96, pp. 196-202, San Francisco, June 1996.
4. M. Krueger, Artificial Reality, Addison-Wesley, 1983.
5. A. Lippman, "Movie-Maps: An application of the optical video disc to computer graphics," SIGGRAPH '80, 1980.
6. J. Park, N. Yagi, and K. Enami, "Image synthesis based on estimation of camera parameters from image sequence," IEICE Trans. on Information and Systems, vol. E77-D, no. 9, pp. 973-986, Sept. 1994.
7. J. Park and S. Inoue, "Toward occlusion-free depth estimation for video production," Proc. Int'l Workshop on New Video Media Technology '97, pp. 131-136, Tokyo, Japan, Jan. 1997.
8. J. Park and S. Inoue, "Arbitrary view generation using multiple cameras," Proc. IEEE ICIP '97, vol. I, pp. 149-153, Santa Barbara, USA, Oct. 1997.
9. E. Adelson, J. Wang, and S. Niyogi, "Mid-level vision: New directions in vision and video," Proc. IEEE ICIP '94, pp. 21-25, Austin, USA, Nov. 1994.

Pop-Out Videos

Gianpaolo U. Carraro, John T. Edmark, J. Robert Ensor
Bell Laboratories, 101 Crawfords Corner Road, Holmdel, New Jersey, USA
[email protected], [email protected], [email protected]

Abstract. This paper discusses a class of multimedia displays that we call pop-out videos. A pop-out video is a composition of multiple elementary displays. Some of these base elements are video streams, and some are three-dimensional graphic displays. The elementary displays of a pop-out video may be related in two ways. They may be contiguous portions of a single virtual world, forming what we call an integrated video space. Alternatively, they may be parts of distinct spaces, forming what we call a complementary video space. In both cases, the elementary displays are related to each other by coordinated event handling. In our on-going research, we are using pop-out videos to create multimedia virtual worlds and to orchestrate the presentation of data in multiple media. After a survey of related work, the paper outlines our initial implementations of pop-out videos and presents future plans.

1 Introduction

We are building pop-out videos. These are multimedia displays that combine video streams and three-dimensional graphic displays. The elementary displays of a pop-out video may be related in two ways. They may be contiguous portions of a single virtual world, forming what we call an integrated video space. Alternatively, they may be parts of distinct spaces, forming what we call a complementary video space. In both cases, the elementary displays are related to each other by coordinated event handling.


Fig. 1. Integrated Video Space

Fig. 2. Complementary Video Space


Figure 1 shows an integrated video space created by Peloton [1]—a sports simulator that creates virtual courses for bicycle rides. The central portion of this view is a video stream, which is displayed as a texture on a large rectangle—a two-dimensional video “screen.” In the surrounding display, three-dimensional graphic elements represent the road and some roadside objects. The bicyclist avatars represent multiple simulation participants. As they explore this virtual world, users move through the three-dimensional synthetic terrain and view photo-realistic scenery.

Figure 2 illustrates an example of a complementary video space. In this case, the video display and the graphical display are not composed into a single virtual world. Rather, these displays are related only by their integrated responses to system events. This golf instruction program helps students associate the techniques of good golfers with the motions of simpler graphical figures. Students watch video clips showing golfers—who have exemplary technique—swing clubs to hit balls; they also watch three-dimensional humanoid figures demonstrate corresponding animations. By viewing both presentations, students learn to monitor the most important elements of successful club swings.

In our current research, we are creating new techniques to build pop-out videos and to orchestrate the presentation of data in multiple media. We are learning more about how to design and assemble collections of multimedia displays, and we are investigating ways to use them.

2 Background

A variety of applications integrate three-dimensional graphics with still images or video clips. Photographs are commonly used as textures and background elements in three-dimensional virtual worlds. In addition, image based modeling techniques, e.g., [2] and [3], permit creation of three-dimensional models from two-dimensional photographs. However, these applications do not represent three-dimensional spaces with video, and they do not permit objects to move between graphical regions and video regions in response to user input/output events. Virtual sets, e.g., [4], [5], also combine video and three-dimensional graphics. In [6], live actors can move within computer-generated settings. Augmented reality systems, e.g., [7], demonstrate another combination of these media; they lay computer-generated graphics over video inputs. The Interspace system [8], which supports multiparty conferences on the Internet, creates virtual spaces in which avatars have live video streams as “heads.” [9] discusses virtual reality systems, containing video displays, in which the relative importance of model elements (objects) is specified and used as a quality-of-service control for video transmission. These examples represent steps towards integrated video spaces; however, they do not use video for the representation of spaces within three-dimensional graphic models. Television and Web documents can be combined with WebTV devices [10]. However, these devices


do not create complementary video spaces because they do not provide any association between the television programming and the Web based materials. The MPEG-4 standard proposal [11] is expected to allow specification of video graphical hybrid environments. Its video based objects might form an appropriate infrastructure for pop-out videos.

3 Integrated Video Spaces

An integrated video space represents some parts of a virtual world as three-dimensional objects and other parts of the same world as still images or video. Because these displays are parts of the same virtual world, we call them regions of the world. The size, shape, and position of each region is calibrated with its neighboring regions to create visual unity of the composite space. A common set of controls and integrated event handling mechanisms are associated with the single world. Objects can move from one part of an integrated video space to another. Hence, we must handle the movement of objects between graphical and video regions. We have developed two techniques to deal with these inter-regional object movements.

In the first technique, when an object moves from one region to another, the medium used to represent that object correspondingly changes. The new medium matches the one used to represent the other objects in the new region. For example, when an object goes from a three-dimensional region into a video-based one, it becomes a video element. We call this transform a merge-in because, in a sense, the object has “merged into” the video panel. Figure 1 illustrates a world in which an avatar has moved from a graphical region into a video region and has become a video element. Figure 3 is a behind-the-scenes view of the same merge-in. On the far left, a semi-transparent avatar represents the red cyclist’s “real” position in the virtual world. In conventional three-dimensional worlds, the video panel would occlude the yellow cyclist’s view of the red cyclist’s position. However, by performing the merge-in, Peloton allows the red cyclist to remain visible to the yellow cyclist. Using the red cyclist’s real position, the animation system translates and scales the red cyclist’s video panel for the yellow cyclist’s view. Figure 5 shows a close-up from this point of view.

In our second technique, we permit objects to be designated as traceable objects. When a traceable object moves from a three-dimensional region to a video panel, it does not become a video element. Instead, it is replaced in the three-dimensional foreground by a trace object. A trace object is an element of the foreground that represents an object “behind” the video panel. Figure 4 shows the same scene as Figure 3, but this time a trace object is used to maintain the red cyclist’s visibility. The animation system translates, scales, and maintains orientation of the trace object


Fig. 3. Behind-the-Scenes View of a Merge-In

Fig. 4. Behind-the-Scenes View of a Trace

Fig. 5. Yellow’s View of a Merge-In

Fig. 6. Yellow’s View of a Trace

for the yellow cyclist’s view, according to the red cyclist’s real position. From the yellow cyclist’s viewpoint, it is not possible to distinguish between the “real avatar” and the trace object. Figure 6 shows a close-up of the yellow cyclist’s view of the scene.
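The per-frame update of a trace object can be summarized in a few lines. The scene-graph interface below (project_onto_foreground, look_rotation, and the trace fields) is hypothetical; it only illustrates how the red cyclist's real position drives the trace's placement, scale, and orientation.

```python
# Hedged sketch of the trace-object update; the camera/scene-graph API
# and the reference_distance parameter are invented for illustration.

def update_trace(trace, real_position, viewer_camera, reference_distance):
    distance = viewer_camera.distance_to(real_position)
    # Place the trace in the 3-D foreground, in front of the video panel,
    # along the line of sight toward the object's "real" position.
    trace.position = viewer_camera.project_onto_foreground(real_position)
    # Scale so the trace subtends the same visual angle as the real object.
    trace.scale = reference_distance / max(distance, 1e-6)
    # Keep the trace oriented consistently with the viewer's line of sight.
    trace.orientation = viewer_camera.look_rotation(real_position)
```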


4 Complementary Video Spaces

In this type of pop-out video, the video display does not integrate with other displays to form a single virtual world. Rather, the displays are related only through their coordinated response to the events that their common application handles. Without visual unity of a composite space, the common control interface and integrated event handling are created as distinct system components. Without regions for a common space, objects do not seamlessly move from one display to another. However, the elements represented in these regions are related through system actions. In Figure 2, both regions display the same golfer but in different media. On the left-hand side, the golfer is represented as video, whereas on the right-hand side, it is represented as a three-dimensional humanoid. This configuration allows users to manipulate elements, i.e. the golfer, in one region and to see the effects in both regions. Multiple representations of data is not a new concept. What is new in complementary video spaces is the ability to associate manipulations in a graphical region to the ones in a video region (and vice-versa). For example, playing back the video stream of the golfer may trigger the animation of the humanoid. When the two regions respond “in synch,” users experience the photo-realistic view of the video and also see the humanoid’s swing from several viewpoints. Handling of events can also result in asynchronous responses. To implement the environment described above, complementary video systems must provide users with a set of controls to perform the manipulations. Also, these systems must be able to “translate” the manipulations performed in one region to the corresponding ones in the other region.
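A sketch of such coordination, with a hypothetical video player and animation object, shows how one control surface can drive both regions:

```python
# Hedged sketch of coordinated event handling in a complementary video
# space; the video/animation objects and their methods are hypothetical.

class ComplementaryController:
    def __init__(self, video_player, humanoid_animation):
        self.video = video_player
        self.anim = humanoid_animation

    def seek(self, t):
        # "Translate" a manipulation in one region to the other:
        # scrubbing the golf video also drives the humanoid's swing.
        self.video.seek(t)
        self.anim.set_time(t)    # synchronous response in the 3-D region

    def play(self):
        self.video.play()
        self.anim.play()         # both regions respond "in synch"
```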

5 Future Work and Conclusion Pop-out videos provide apparent benefits in building virtual worlds as well as displaying information in multiple media. Shopping applications, similar to the golf program described above, could allow shoppers to pick objects from a video catalog or a commercial video and see them in other forms, or pick them from other regions and see them displayed in a video. Also, media melding techniques can be used within a multimedia virtual world to implement multimedia level-of-detail. For example, a surveillance application could display a region near the viewer (camera) in video, while displaying more distant areas through other media. Pop-out videos could enhance television programming by coupling video with information represented in other media. The closed caption (subtitle) concept of today’s television allows text to be associated with video content. This concept can be extended to associate other information—displayed in other media—with video content. For example, hypertextual Web documents and three-dimensional virtual worlds could be associated with video programming content. Various suppliers, including the producers of the


original programming content or specialized providers, could supply these additional materials. In fact, several collections of additional material could be available for each program. Consumers could then select their preferred collections. Pop-out videos can also be used to build distributed virtual worlds that react to performance constraints. Since region boundaries within a virtual world can vary dynamically, the relative media mix of a world can vary correspondingly. This variation provides an opportunity to tailor applications and their presentation of information to the needs and resources of the system user. For example, it could improve the performance of a distributed application executing on heterogeneous computers connected via the Internet. Application users with powerful machines but poor network connections could take advantage of their computing power by specifying that a virtual world be presented mainly as three-dimensional objects rendered locally. On the other hand, users with network computers—offering highend network connections but less processing power—could specify that the scene be presented using minimal three-dimensional representation to view most of the scene as streamed video. Although many issues still need to be resolved, these examples show that the integration of video in richer multimedia environments could open the doors of a very large set of interactive multimedia applications.

References
1. Carraro, G., Cortes, M., Edmark, J., and Ensor, J., “The Peloton Bicycling Simulator,” Proc. VRML ’98, Monterey, CA, 16-19 February, 1998.
2. Debevec, P., Taylor, C., and Malik, J., “Modeling and Rendering Architecture from Photographs: A hybrid geometry- and image-based approach,” Proc. SIGGRAPH 96, 4-9 August, 1996, New Orleans, LA, pp. 11-20.
3. McMillan, L. and Bishop, G., “Plenoptic Modeling: An Image-Based Rendering System,” Proc. SIGGRAPH 95, 6-11 August, 1995, Los Angeles, CA, pp. 39-46.
4. 3DK: The Virtual Studio. In GMD Web Site: http://viswiz.gmd.de/DML/vst/vst.html
5. Katkere, A., Moessi, S., Kuramura, D., Kelly, P., and Jain, R., “Towards Video-based Immersive Environments,” Multimedia Systems, May 1997, pp. 69-85.
6. Thalmann, N., and Thalmann, D., “Animating Virtual Actors in Real Environments,” Multimedia Systems, May 1997, pp. 113-125.
7. Feiner, S., Macintyre, B., and Seligmann, D., “Knowledge-Based Augmented Reality,” Communications of the ACM, (36, 7), June 1993, pp. 53-62.
8. Interspace VR Browser. In NTT Software Corp. Web Site: http://www.ntts.com Interspace
9. Oh, S., Sugano, H., Fujikawa, K., Matsuura, T., Shimojo, S., Arikawa, M., and Miyahara, H., “A Dynamic QoS Adaptation Mechanism for Networked Virtual Reality,” Proc. Fifth IFIP International Workshop on Quality of Service, New York, May 1997, pp. 397-400.
10. WebTV. In WebTV Site: http://www.webtv.net
11. MPEG Home Page. In http://drogo.cselt.stet.it/mpeg

Color Segmentation and Color Correction Using Lighting and White Balance Shifts

Philippe Gérard, Clifton Phillips, and Ramesh Jain

Visual Computing Lab. ECE building, University of California, San Diego, La Jolla CA 92037, USA

Abstract. A method was developed to segment an image foreground and background based on color content. This work presents an alternative to the standard blue-screen technique (weather man method) by exploiting the color shifts of light sources, filtering each camera lens, and correcting the white balance of each camera. We compressed the colors of the scene background into a chromaticity subspace to make the foreground-background segmentation easier to perform. The segmentation is singular value decomposition (SVD) based.

Keywords: chroma-key, color segmentation, blue screen, 3D shooting, virtual reality

1 Introduction

Applications demanding an automatic video stream segmentation are numerous, and one of the most famous techniques is the blue screen (weather man) method: a subject situated in front of a uniform blue wall or cycloid is overlaid on another image by an automatic chroma-key segmentation. In television productions, the blue screen is used in a studio environment and allows limited possible positions and orientations for the cameras. In virtual reality applications such as [1], one wants to segment a foreground subject from its background while using the perspective of an arbitrarily positioned and oriented camera. Building a 360 degree blue screen is not usually feasible, as it would imply painting an entire sphere in blue, positioning the subject inside, and providing holes for the camera lenses. Additionally, the blue screen method requires particular attention to shadows projected on the blue wall, and the quality of the segmentation depends on the uniformity of the light spread on the background. In our approach, the lighting of the background is more tolerant. This claim is justified because we do not have dark surfaces which generate noise. Our method is simple to realize in practice, and doesn't require caution on background lighting in terms of uniformity and shades of illumination. Another important improvement is that someone can wear blue clothes and still be segmented correctly; the background can instead be adapted to the segmentation of the subject, instead


of imposing constraints on the actors' clothes and background colors. This provides new flexibility. We show that our method could be extended to standard outdoor shooting with few constraints, which could make it very useful. This work provides three important benefits: it 1) provides another solution to the foreground-background segmentation problem by using local lighting conditions to force a separation of colors in the chromaticity color subspace; 2) discusses the use of the white balance and exploits the relationship between white balance shifts and color tint; and 3) finally provides a solution for correcting a wrong camera white balance acquired in the data. Section 2 discusses the theory and development of the concepts used to support this work. Section 3 covers the color correction techniques. Section 4 discusses an SVD-based segmentation and color clustering method. Section 5 describes the experiments we conducted and provides a discussion and conclusion of the results.

2 Theory and Concept Development

Understanding the results we present here requires a clear understanding of how the color shift is realized through filtering and shifting of the camera white balance. In addition, the physics of the light sources, the appropriate filters, and their impact on the color compression in the chromaticity color sub-space must be understood. Finally, all of the image data is acquired by a camera; therefore, the pitfalls of data acquisition using "off-the-shelf" and "broadcast quality" cameras should also be explored.

2.1 The Color Shift

Instead of shooting a subject in front of a specific colored background, we change the spectrum of the foreground light by applying a colored filter in front of the bulbs. If we correct this shift only for the foreground and not for the background, we "compress" the entire background into a region of the color space. There are two ways of doing this:
– tuning the white balance of the camera(s) on the temperature of color of all the bulbs involved in the shooting (usually 3200K for indoor lights);
– adding a filter on the lens of the camera; this filter is of an opposite color to the one positioned on the foreground lights.
We ran two tests using some Rosco filters:
1. Test 1: a Full Blue (#3202, boosts 3200K to 5500K) on the foreground lights and a Roscosun 85 (#3401, converts 5500K to 3200K) on the camera lens.
2. Test 2: a Half Blue (#3204, boosts 3200K to 4100K) on the foreground lights and a Roscosun Half CTO (#3408, converts 5500K to 3800K) on the camera lens.
In these experiments, we used almost complementary filters, but because this complementarity is far from perfect, the result of the correction will be a little bit tinted orange, an undesired shift that we could correct by the color correction method explained further.

A second method for doing this is to adjust the camera white balance to the light spread by the foreground sources (leaving the Half Blue or Full Blue filter on). This method is useful only for correcting small shifts; however, it gives somewhat better results.

2.2 Light Sources

The filters are not the only source of color shifts: actually the white of equal energy doesn't exist, and all the white lights we can see are in fact more or less tinted. An approximation of the spectrum of white light bulbs is given by Planck's equation:

M(λ, T) = c1 λ^-5 (exp(c2 / (λT)) − 1)^-1,   (1)

where c1 = 3.7415 × 10^-16 W·m², c2 = 1.4388 × 10^-2 m·K, and T is the temperature of color of the source. The color of a source is usually described by the equivalent Planckian radiator. In our experiments, the bulbs had a temperature of color of 3200K (indoor lights). Applying a colored filter to a source is equivalent to taking the product of the light spectrum and the spectral transmission features of the filter. Figure 1 shows the spectrum of the light spread on the foreground.

Fig. 1. Resulting light spectrum from a Half Blue filter added on a 3200K white light: (a) spectrum of the 3200K white light f(λ); (b) transmission features of the Half Blue filter g(λ); and (c) the resulting foreground light spectrum f(λ)g(λ).

Several spectra may create the same stimulus; this principle is called "metamerism". This phenomenon is widely used in video to synthesize a specific color from only 3 rays of colored lights, for example the red, green and blue primaries. References [2][3][4][5][6] give a good basis to start with. Another representation of a light source passing through a filter would be a mixture of only one ray of pure color and a white of equivalent energy. Here the tint is given by the wavelength of the ray, the saturation is given by the percentage of white added to this color, and the intensity is given by the total energy of this mixture. In order to obtain this new representation, which is easier to interpret, we need to compute a product of convolution of the spectrum of a stimulus with the three curves of the sensitivity of standard human vision, the CIE [7][8][9] system. The CIE defined a set of three curves x(λ), y(λ), z(λ) giving the sensitivity of a standard "eye" to a specific color. Thus, we obtain

X = k ∫ S(λ) ρ(λ) x(λ) dλ,  Y = k ∫ S(λ) ρ(λ) y(λ) dλ,  Z = k ∫ S(λ) ρ(λ) z(λ) dλ,  with k = 100 / ∫ S(λ) y(λ) dλ,   (2)

where S is the illumination source, ρ is the spectral reflectance factor, and X, Y, and Z are the tristimulus values in the CIE representation. We can now represent the stimulus in the chromaticity diagram (x, y). In the example below, we are studying how to shift the light source, so we'll consider that ρ(λ) = 1 over the entire visible spectrum.
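The computation of equations (1) and (2) is easy to script. The sketch below assumes the CIE 1931 matching functions and the spectra are sampled on a common wavelength grid; the sampling step dl is an assumption.

```python
import numpy as np

# Sketch of equations (1)-(3); cie_x, cie_y, cie_z are assumed to hold the
# CIE 1931 color matching functions on the same wavelength grid (in metres).

def planck(lam, T, c1=3.7415e-16, c2=1.4388e-2):
    # Equation (1): approximate spectrum of a bulb at color temperature T.
    return c1 * lam**-5 / np.expm1(c2 / (lam * T))

def tristimulus(source, transmission, cie_x, cie_y, cie_z, dl=5e-9):
    s = source * transmission                  # filtered spectrum S(l)g(l)
    k = 100.0 / np.sum(source * cie_y * dl)    # normalisation of eq. (2)
    X = k * np.sum(s * cie_x * dl)
    Y = k * np.sum(s * cie_y * dl)
    Z = k * np.sum(s * cie_z * dl)
    return X, Y, Z

def chromaticity(X, Y, Z):
    # Equation (3): projection into the chromaticity diagram.
    total = X + Y + Z
    return X / total, Y / total
```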

The chromaticity coordinates are

x = X / (X + Y + Z),  y = Y / (X + Y + Z),  z = 1 − x − y.   (3)

Figure 2 shows how to get the tint of a stimulus S in the chromaticity subspace; Table 1 gives the x, y coordinates computed for the lights and filters.

Table 1. x, y coordinates of the lights and filters

     Light 3200K   Light 3200K with a Half Blue filter   Light 3200K with a Full Blue filter
x    0.4234        0.3808                                0.3387
y    0.3991        0.3806                                0.3316

The tint is given by the intersection of the line joining the white of equal energy (W) and the studied color (S) with the boundary of the diagram. For the purple colors, which do

Fig. 2. Interpretation of a shifted white in the xy diagram.

not correspond to a unique ray of light, we will use −λ for describing the tint (this is actually the tint of the opposite color). The purity of excitation has some interesting geometric properties, and we'll use it later for the color correction, but the colorimetric purity is more precise; this is the one we will use here:

pc = Lλ / (Lλ + Lw),   (4)

where Lw is the luminance of the white and Lλ the luminance of the pure color on the boundary of the diagram. For the addition of two stimuli we also know that

X = Xλ + Xw,  Y = Yλ + Yw,  Z = Zλ + Zw.   (5)

Then, knowing the x and y values for the stimulus, the white and the pure color, we can resolve a system of three equations to determine the luminance values and then the purity; since we just need to compute the relative chromaticities, one may choose the luminances freely. A stimulus may thus be represented as an addition of two stimuli: a part of achromatic light (W) and a part of pure color light (C). Since we are only considering the light source (the reflection is 100% over the entire spectrum), the resulting stimulus (X, Y, Z) is given by (we only write the relation for X, since Y and Z are similar):

X = pc Xλ + (1 − pc) Xw.   (6)

This interpretation replaces any colored stimulus by an addition of an equal-energy white and a pure color ray. These relationships are very useful for correcting a color shift. Our goal is to segment automatically a subject situated in the foreground of a scene. If we light the foreground with a blue light and the background with a 3200K light, the background receives lighting with more "orange light" than the foreground. This effect shifts the colors of the background toward the orange-yellowish region of the chromaticity subspace. If we correct the foreground light at the camera, the result will be a normal foreground while the background gets compressed into a selected region of the chromaticity subspace. The first way of doing so is to tune the white balance of the camera on 3200K and to film through an orange filter complementary to the blue filter on the foreground lights. If the filters are really complementary, the foreground light is perceived white (3200K) and the entire background is shifted toward orange. With Full Blue filters on the foreground lights and a full orange filter on the camera lens, the background is shifted by a full orange while the foreground appears correctly white balanced. In fact, in our experiments this correction was not perfect because the filters were not "exactly" complementary; there are no perfectly complementary filters commercially available at this time. Later we present a way of correcting a shifted image after it has been segmented. One might be interested in shifting the background by more than one full orange in order to perform a better color-based segmentation. It is possible to add some full orange filters on the background lights; in this case the background will be shifted by twice a full orange, which makes the background completely compressed in the orange subspace of the representation. Our method implies more power to light the foreground than in the blue screen method: for instance, if the light source is an equal-energy light, and the filters applied on the light source and lens are respectively a full blue and a full


orange filter, the camera will see only 10 to 30% of the energy of the light source. However, ideal filters (with a flat correction spectrum for every λ) could make the camera see 50% of the source energy. We used another way of correcting the shift while losing less energy: using a white balance correction instead of a full orange filter on the lens has some very interesting features. We are positioning this method as an alternative to the popular blue screen method. As the reader has probably noticed, we don't need to have a particular background; the best scenario would be to light the background with its most complementary color. This method doesn't demand a perfectly uniform light on a blue wall; lighting it will be enough. As in the chroma-key process, the black surfaces will be a difficult part to process because black is very sensitive to noise in color. In our postprocessing correction we remove the blacks under a threshold and replace them by interpolation using the neighbors of the removed area. This method can also work for outdoor scenes: photographers usually use golden reflectors to simulate sunset; they are in fact shifting the foreground with a light tint, differently from the background. Correction on the lens is done by the filter complementary to the reflector.

2.3 The Cameras

stim ulusisr li dwit t r r y sof lig tonly,t v lu s , nd r in f t p r nt g of t prim ri sto m ix to g t quiv l ntstim ulus T m r is l to djustitsw it l n in fun tion of t tintof t , , vid o v lu s) to t lig tsour T following qu tionsr l t t tristim ulusv lu s , nd itwill usful for t olor orr tion pro ss) ⎤−1 ⎡ ⎤ ⎤−1 ⎡ ⎡ ⎤ ⎡ 0 0 ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ 0 0⎦ ⎣ 7) 0 0 wit

⎡ ⎣

⎤ ⎦

⎡ 1 ⎣ w

⎤−1 ⎡ ⎦



⎤ w w



8)

w

The coordinates (xr, yr, zr), (xg, yg, zg), (xb, yb, zb) and (xw, yw, zw) are, respectively, those of the red, green and blue primaries in the NTSC system and of the white of reference. The second possibility to correct the shift done by the blue filter on the foreground light is to adjust the white balance on the new color obtained by the source and the filter. We chose to put the blue filter on the lights because the white at 3200K of the bulbs is boosted to 4100K (Half Blue filter) or 5500K (Full Blue filter), which makes the white balance possible. In this case, as expected, the only absorption of light is done by the blue filter. The result is better than the one we got with the former method: here the foreground shift is properly corrected, and the background is then shifted toward orange in the same way. The limit of this method is the capability of the camera

to do the white balance: for instance, a camera is not able to correct a shift done by an orange filter on a 3200K light, but would be perfectly able to correct the shift done by a blue filter on the lights. A mixture of the two methods might be worth using.

3 Color Correction

We propose the following method to perform a correction in order to ameliorate an image, or in order to retrieve a color shift due to the non-complementary filters in the first method. This color correction may also interest those who are concerned by video sequences acquired with a poor white balance and want to retrieve a more acceptable result. We will try to simulate the real stimuli, based on the data we acquired. The first step is to find the real white of the image we are analyzing. This may be given manually by indicating a white of reference, or automatically if one shoots color bars, which is the case in our experiments. We used a 75% color bar chart with a 100% white reference in order to create a model of our camera for some particular shifts. We first shot the chart under the 3200K lights and tuned the white balance to get the best white possible on the vectorscope. This chart is then our reference for the visible spectrum. In further work we will probably design our own chart including more colors (a wheel of 300 calibrated colors). After shooting this chart once under the reference light, and a second time under the shifted light, we are able to create a robust model of the camera used and of a particular shift. Our current model only uses 6 colors for interpolating the entire spectrum, but the correction obtained is already quite interesting. Using the representation of the colors extracted from the color bars, we first retrieve the correct tint of the stimuli. Next we correct the purity of the colors obtained, using our model. Some postprocessing can be applied in order to treat the blacks differently.

3.1 White Retrieval

Here we describe how to correct a half orange shift. With the inverse matrix of (7), we compute the position of the mean of the 100% white in the chromaticity diagram. For the following discussion this defines the shifted white of reference:

xo = 0.3486,  yo = 0.3517.

An interesting way of correcting the shifted color bars is to find the stimulus complementary to the white we found here, and to use this new value as the white appearing in equation (8). We are looking for the stimulus which gives the white W when mixed with the shifted white above. Here the indices w, c and o correspond to the white of reference, the complementary stimulus and the shifted white. One can choose Yc = 1 arbitrarily, because we are just interested in the relative luminances. Then, the equations above become

a system in the unknown complementary stimulus. Since the three stimuli lie on the same line of the chromaticity diagram, and taking the complementary stimulus at the same luminance as the shifted white (i.e., in the same luminance plane), solving this system gives

xc = 0.2787,  yc = 0.2872,

that is, Xc = 2Xw − Xo, and similarly for Y and Z. Now, we convert these values to stimuli using the inverse matrix of equation (7) and using (xc, yc, zc) as the white. Figure 3 compares the new stimuli, the reference color bars and the shifted color bars.

Fig. 3. ) pr snt tion of t r tri v l of t tintsof n lf or ng s ift d olor rs rt ) sultof t purity orr tion in t di g r m r pr sntst fr n olor rs oordin t s, t lf or ng s ift d olor rs nd t orr t d olor rs

t tintsoft r f r n olor rs dott dlin s) ig ur 3 s owst tt g r n ndm g nt r littl itdiff r ntfrom w tt y s ould , utt ot r tints r pr tty orr tly r tri v d T isr sultisnoty tw tw w nt v n if t tints r lm ost orr t,t purity of t s olors r quit diff r ntof t r f r n olor rspurity T n xtst p isto r li lt r of purity ig ur 4 r pr sntst purity lt r, w r t ng l s xpr sst tints nd ( ) ) usdonly six olorsto g n r t t is t norm ,t r tio r ( ) fun tion y int rpol tion T isfun tion isr l t dto t r l snsitivity urv s

Color Segmentation and Color Correction

137

of our in t

m r T isr pr snt tion isquit t s m st u r pr snt tion olor sp T r sultof olor orr tion ispr snt din ig ur 3

Fig. 4.

)

4

tor sop r pr snt tion of t

purity

lt r, ) im g to sg m nt

Segmentation Using the SVD Method

oos s m pl from t k g roundto pply st tisti l m t odto sg m nt t s n T r for if n w o j t pp rsin t for g round it isnot n ss ry to ng our s n k g round- sdiniti li tion l ssi dt s ift d olorsusing m t od of sg m nt tion lust ring m t od w s usdto g roup t s ift d olorsinto snsi l stsin t olor sp T w s ppli d to lust r nd r sult d in r t ri ing g roup d olor wit p r m t rsof 2-sig m llipsoid T llipsoidisproj t donto t rom ti ity su sp 4.1

Applying the SVD Segmentation

T w

sing ul r v lu d om position provid stwo n tsin t iswork 10 irst, xploitt id t tt ov ri n of olor s m pl n f tor d into t r , t m trix r pr snts n tur l sisfor t x s long t prin ip l om pon nts t ist lin r tr nsform tion from t s m pl d sis to t n tur l sisof t prin ip l om pon nts ond, t v ri n long t prin ip l om pon nts r g iv n y t sing ul r v lu sin L T v ri n is s l d y n r itr ry onst nt 2 n t is sso i tion isr l t dto t d sir d sig m v lu t t r t ri st olor s m pl 4.2

Clustering

rly t sting s ow d k g roundst tw r wid ly spr d rosst rom ti ity su -sp r sult d in lust rst tw r too l rg ppli d lust ring m t od to d l wit t isissu lust ring m t od su st K -m ns lg orit m 11 n ppli d to t s ift d olors usd m odi d v rsion of t K -m ns lg orit m T ntir k g roundis rsts m pl d T sing ul r v lu d om position is ppli dto d om pos t olor s m pl ov ri n m trix

138

Philippe G´erard et al.

into itsprin ip l om pon nts ing ul r v lu sprovid d y t r pr snt t v ri n for t prin ip l om pon nts v lu t t v ri n g inst t r s old f t m xim um sing ul r is ig r t n t sl t d t r s old, t d t issplit long y p r-pl n t tisort og on l to t prin ip l xiswit t m xim um v ri n T pro ssisr -it r t dfor r sulting lust r until t sing ul r v lu sfor lust r r wit in t t r s old onstr ints foundt r sults r not ppr i ly diff r ntfrom on olor sp to not r 4.3

Segmenting a Clustered Color Sample

onsid r xtr ting s m pl stofm pointsfrom t rg t,w r t olor sp isd n d sd on t N T r iv r prim ry sy st m d nition pix ls m pl d n r pr snt d t oordin t in olor sp T s m pl dm n for stin isd t rm in d s

t

t   1  9) g =1 =1 g =1

t r pr sntst m ns of om pon nt w r t ntri s of g wit in t stof int r st irst onsid r pply ing t ist niqu to on stof s m pl d d t , t n t r sult s own n xt nd d to m ultipl stsw r s m pl s n p rtition d into m ultipl su - lust rs T v ri n n usdto st lis st llipsoidin sp t t ont inst d sir dstof pix ls for ny v ri n n om put d,itisn ss ry to rry out tr nsform tion to n w sis nt r d outt s m pl dm n T n w sis isto lin dup wit n stim t oft stprin ip l xisof t s m pl dd t T tr nsform tion xtr t dfrom t g u r nt st stim t dprin ip l om pon nt x s r ort og on l T r for t st tisti s r d - oupl d w n stim t d long t prin ip l om pon nt x s point onsistsoft r v lu s orr sponding to t r d,g r n, nd lu om pon ntsof olor sp L tt stof points form dinto m trix , w r is , wit 3, nd onsid r tr nsl ting t xis y − su t t t stof s m pl d pointsistr nsform d to t − )w r t ro-m n st long its prin ip l om pon nts y tr nsform tion isfoundfrom im pl m nting t L t t ov ri n m trix for ll t s m pl d d t pointsr pr snt d st olum nsof or s m pl d stof d t , t ntri sfor t ov ri n m trix r g iv n sin 10) ⎤ ⎡ ⎦



w r nd

1

 1

1



1 − )2 , 1 g )− g 1 g ) − g)

=1 

 =1

10)

g − g)2 , 1



1

 =1

1

− )2 )−

Color Segmentation and Color Correction

139

tisint r sting to not t t ist s m ,w t r t isp rform don t d t itslf or t ov ri n m trix T im port n ist tt olum nsof sp n t sp of nd r t d sir dort og on l sisv tors orr sponding to t prin ip l v lu sfor t tr nsform d llipsoid T d t ont in d in n tr nsform dto t ro-m n st − ) T olum n v torsin m trix g ttr nsform dinto t n w r ng sp y 9) w r is , wit 3, nd t − ) 11)

t T s m pl d d t T m trix ont inst tr nsform d v tors γ β is ssum d to norm lly distri ut d T tr nsform d d t ont in d in is lig n d wit t prin ip l x s nd t v ri n of t d t n found from 12)  2

=1

−ˆ )2

( −1

 2 γ

=1

(

−ˆγ )2 −1 γ

 2

=1

−ˆ )2

( −1

12)

tis sy to s ow t r sultof qu tion 12) n o t in ddir tly y pply ing t to t ov ri n m trix inst dof t d t itslf spr viously st t d, t tr nsform tion will id nti l, ow v r t sing ul r v lu so t in dfrom im pl m nting t ism t odon t m trix r t v ri n s 2 , γ2 , 2 T s v ri n sr pr sntt ig nv lu sof t ov ri n m trix 4.4

Establishing Sets of Probable Points

of t t r v lu sof t v ri n d t rm in d y pply ing t r pr sntst int rs tion of t on -sig m llipsoidwit it r t sm i-m jor or on of t sm i-m inor x s n llipsoidw ssl t d st ounding g om try us t lo tion of its ounding surf isw llund rstoodw n t d t is norm lly distri ut d sing t s on -sig m v lu s s r t risti p r m t rs d sri ing t ound ry of stofint r stm nst t63 oft s m pl dd t f llswit in t st ling t sig m v lu s y two or t r r sp tiv ly r sults in 86 nd95 of t s m pl dd t f lling wit in t st T r is tr d -off wit t sl tion of s l v lu for t sig m v lu n would lik to y pot si t t ny r itr ry pix l m pp d into t st lis d ound ry from t tr nsform tion tu lly long sto t st or ig r-s l dv lu sofsig m t r is ig r lik li ood t t pix l l ssi d in t st do snot long T r r four possi l out om sto t y pot sist st 1) pix l id nti d s long ing to t stof int r st long sto t st 2) pix l id nti d s long ing to t stof int r stdo snot long to t st 3) pix l id nti d snot long ing to t stof int r stdo s long to t st 4 ) pix lid nti d snot long ing to t stof int r stdo snot long to t st T stim t d v ri n ss ould s l d to m xim i t pro ility of onditionson nd four, nd m inim i t pro iliti sof onditionstwo nd t r L tt p r m t r k t s l dv lu for t

140

T

Philippe G´erard et al.

llipsoidin

sp

ounding t

stof pointsof int r stis

2

γ2

β2

2

2 γ

2

2

13)

w r k is t s l d v lu T p r m t r k is sl t d to optim i t out om of t y pot sst sts sm ntion d ov point n t st d using qu tion 13) T ounding llipsoid n tr nsform d k into sp y t inv rs tr nsform tion s own in 14 )

4.5

3 4 5 6 7 8 9

5

t



γβ

t

T

14 )

Procedure Summary

olor sg m nt tion using t sfollows 1 2

g

m t odd sri

d

r

n

om plis d

t in s m pl from t t rg tsof int r st indt m nsfor t rg tusing 9),t n tr nsform s m pl sfrom t rg tsu t tt y r ro m n s m pl s s t sing ul r v lu d om position to nd t tr nsform tion m trix for t rg td t st p d t st into r ng sp w r t d t is lig n d wit t prin ipl x susing 11) s 12) to ndt v ri n in t tr nsform dsp or us t ig nv lu s of t ov ri n m trix f t v ri n is ov sl t dt r s old, p rtitioning t d t into su lust rs sr quir d st lis t ound ri soft s m pl st s n int rs tion of lust rsfound for t rg tusing 13) ppli dto lust r st n visu li d y tr nsform ing t st ound ri s k into t sp y t inv rs tr nsform tion of 11) T st pix l in t ntir im g , or sm ll r r of int r stwit in t im g to d t tt t rg t

Experiment and Conclusion

n ord r to g t t st r sult w s m pl d t r r g ions ig ur 5 ) of t rstpi tur ig ur 4 ) of vid o squ n , nd ppli d t r tim sour sdsg m nt tion m t od T t r s m pl sw r t k n in t s m r sin t two squ n s non s ift d nds ift d) T rsts m pl is low r solution s m pl orr sponding to n ori ont lsli oft k g round,for ot squ n s w r j t d ny v lu sg r t r t n 4 st nd rd d vi tions T is rst r sultis lr dy int r sting w n w onsid r ow olorful w st orig in l k g round T s onds m pl is p rtof t f ndw k ptt v lu s om pris in 38

Color Segmentation and Color Correction

141

st nd rdd vi tions,t l sts m pl ist k n in t ir w r t tol r n w s 1 6 st nd rdd vi tions sultsof sg m nt tion ig ur s5 nd 5 ) v pr snt d n w m t od to lust r t k g round to im prov sg m nt tion pro ss T prin ipl isr l tiv ly sim pl ndwould sk l ss r in s ooting t n t tr dition l lu sr n m t od or t oug , m ultipl m r s ooting would possi l T is on ptis lm ostim possi l wit lu sr n m t od ur r sultisnotp rf t ut sto om p r d to tr dition l m t odssu

Fig. 5. ) m pl s for im g s, im g sg m nt tion

)

rig in l im g

sg m nt tion

)

ift d

s rom -k y s it su om pl x s n t st rom -k y would v g iv n wors r sultt n m t od ppli d to t non s ift d im g T lim it tion of our m t od isfor sg m nting d rk olors, t l k of lum in n is n im port ntsour of nois in vid o T o r solv t ispro l m w will m ix t ism t od to not r ty p of sg m nt tion m otion tr k ing , dg d t tion or d pt stim tion) n t ispr snt tion w w nt dto g tt str sultsfor t worst onditions T wo m in ontri uting f torsto d g r d d onditions r om pl x k g rounds n nd n off-t -s lfvid o m r T n to im prov t r sultson wouldjust v to g t m or pr is m r su s ny tripl sdor ro d stqu lity m r T k g roundwill r lly sim pl r in n tur l or studio nvironm nt, utit snotto p rf tly uniform lik in t lu sr n m t od lso pr snt d olor orr tion m t od sdon t stim uli p r ption r tri v l T is orr tor ould v m or g n r lus in postprodu tion ppli tionst n t on pr snt d r

References 1. Moezzi S., Tai Li-Cheng, Gerard P.: Virtual View Generation for 3D Digital Video IEEE Multimedia January-March (1997) 18–26 2. Kowaliski P.: Applied photographic theory, Kodak Pathe,France, John Wiley and Sons, (1972). 3. Engel, C.E: Photography for the scientist, Academic Press, (1968). 4. Corbett D.J: Motion Picture and Television Film Image Control and Processing Techniques, BBC engineering training manual, The focal press, (1968).

142

Philippe G´erard et al.

5. Fink, D.G: Handbook for Television Engineering, 3rd edition, McGraw-Hill, Academic,(1984). 6. James, T.H: Theory of the photographic process, 4th ed., Macmillan, New York, (1977). 7. Commission International de L’eclairage: Colorimetry, Publication 15, Paris, (1971). 8. De Marsh, L.E: Colorimetric Standards in US Color Television, J.SMPTE 83 (1974) 1-5. 9. Judd D.B: The 1931 CIE Standard observer and coordinate system for colorimetry, J.Opt.Soc.Am. 23 (1933) 359-374. 10. Press W.H, Teukolosky W.T, Vetterling W.T, Flanery B.P: Numerical Recipes in C-The Art of Scientific Computing, 2nd ed., Cambridge University Press (1995). 11. Hartigan J.A: Clustering Algorithms, Wiley, (1975).

Designing Emergence in Animated Artificial Life Worlds Jeffrey Ventrella There Studios [email protected] http://www.ventrella.com

Abstract. A methodology is described for designing real-time animated artificial life worlds consisting of populations of physically-based articulated creatures which evolve locomotion anatomy and motion over time. In this scheme, increasing levels of emergent behavior are enabled by designing more autonomy into the simulation on progressively deeper levels. A set of simulations are discussed to illustrate how this methodology was developed, with the most recent simulation demonstrating the effects of mate choice on the evolution of anatomies and motions in swimming creatures.

1 Introduction The art of animation as invented by the Disney animators has been called "The Illusion of Life." It seems that Computer Animation as a craft has never really caught up to the expressivity, humor, and lifelike motion possible with classic cel-based character animation. The animators at Pixar have been successful at adapting computer technology to the fine art of classic character animation technique. But as feature film animators refine this marriage of new and old technologies, a new form of animated character emerges, not in the film industry but from within various artificial life labs. Add to Disney's "Illusion of Life," the "Simulation of Life," and witness a new technologyone which is compatible with the goals of Virtual Realitya future cyberspace in which characters are not just animated: they are autonomous, reactive agents as well. This paper describes an approach to artificial life (alife) [9], stemming from animated art and a design process for creating it. It is an exploration of autonomously generated motion and form. The original impetus is not biological science, although biology has become an important aspect of the work. The angle on alife discussed in the paper may provide ideas and techniques that are useful in creation of populated virtual worlds for entertainment, education, and research. I will discuss a family of simulations which were built upon each other, in which populations of artificial creatures evolve anatomy and motion over time. In these simulations, emergent behavior is enabled by progressively designing levels of autonomy into the model. The simulations are used to illustrate how this methodology

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 143-155, 1998 © Springer-Verlag Berlin Heidelberg 1998

144

Jeffrey Ventrella

was built, with the most recent simulation demonstrating the effects of mate selection on the evolution of anatomies and motions in swimming creatures. Future enhancements will include wiring up an evolvable nervous system connected to various sensors and actuators, within which neural net-like structures might emerge, potentially enabling the creatures to evolve higher-level stimulus-response mechanisms and "states of mind," which emerge through direct contact with the environment. Design Considerations. A main goal of alife projects is to construct models in which self-organization and adaptive behaviors can arise spontaneously, not by design, but through emergence. A duality is observed in the creation of real-time alife worlds: while the goal of an alife simulation is emergence, it is ultimately a designed artifact. Creating alife worlds, in this methodology, is a matter of designing emergence. One must choose a level of abstraction, below which a number of assumptions are made about the ontology supporting the model. Above this level of modeling, emergent phenomena arise as a product of the design of the subsystem when the model is run. For instance, many alife simulations model populations of organisms whose spatial positions are represented by locations in a cellular grid, and whose physical means of locomotion (jumping from cell to cell) are not clearly specified. The emergent properties in question may not require a deeper level of simulation than this. However, a simulation in which an articulated body plan is integral to locomotion (as well as reproduction) requires a lower level of abstraction, and a deeper physical model becomes necessary. More design may be required to build the simulation foundations, however, it may allow for more complex emergent phenomena. Towards the quest for more autonomy, the most recent simulation in the set which I will discuss consists of a population of many hundreds of swimming creatures which animate in real-time. The creatures come in a large variety of anatomies and motion styles. They are able to mate with each other, and choose who has the "sexiest" motions, enabling evolution of swimming locomotion and anatomy which is attractive (beauty of course being in the eye of the beholder). The best swimmers reproduce their genes (because they can swim to a mate), and the most "attractive" swimmers get chosen as mates. In this simulation, not only are the aesthetics of motion and form subject to the chaotic nature of genetic evolution, but the creatures themselves partake in the choice of what forms and motions emerge. The methodology discussed in this paper includes the following key components:      

a morphological scheme with an embryology a motor control scheme a physics a nervous system genetic evolution computer graphics rendering

Each of these components are described throughout the sections below. They are not explained in great technical depth, but rather discussed as a general methodology, with some commentary.

Designing Emergence in Animated Artificial Life Worlds

145

1.1 Related Work Early examples of using alife principles in computer animation include Boids [12], in which complex life-like behaviors emerge from many interacting agents. Badler [2] describes physically-based modeling techniques and virtual motor control systems inspired by real animals used to automate many of the subtle, hard-to-design nuances of animal motion. In task-level animation [22] and the space-time constraints paradigm [21] these techniques allow an animator to direct a character on a higher level. The genetic algorithm [7, 6] has been used for the evolution of goal-directed motion in physically-based animated figures, including a technique for evolving stimulus-response mechanisms for locomotion [10]. Sims [14, 15] has developed impressive techniques for the evolution of morphology and locomotion using the genetic programming paradigm [8]. Also, a holistic model of fish locomotion with perception, learning, and group behaviors, which generates beautifully realistic animations, was developed by Terzopoulos, Tu, and Grzeszczuk [16]. 1.2 Walkers The first project in the series began an attempt to build a 3D stick figure which could stand up by way of internal forces bending various joints. The original goal was to model a human-like form, but this was too large a task at the time. To simplify the problem, the bipedal figure was reduced to the simplest anatomy possible which can represent bipedal locomotion: two jointed sticks, called "Walker," shown in Figure 1a. Performance artists Laurie Anderson once described walking as a continual process of catching oneself while almost falling, by placing one foot out in the direction of the fall, stopping the fall, and then almost-falling again [1]. This notion inspired the idea to construct a 3D bipedal figure which has mass and responds to gravity and which is always on the verge of falling over and so must continually move its legs in order to keep from falling, to stay upright, and to walk. Walker's "head" experiences a pull in the direction of its goal (an inaccurate model of the desire to be somewhere else, yet sufficient for this exploration). Walker can perceive how much its body is tilting. The amount of tilt is used to modulate the motions in its legs. As it's head is pulled towards its goal, its legs move in such a way as to keep it upright and afford locomotion. Internal forces cause the legs to rotate about the head joint, back and forth, in periodic fashion, using sine functions. Parameters in the sine functions for each leg, such as amplitudes, frequencies, and phases, are genetically determined, varying among a population of Walkers, and are allowed to evolve over time within the population. Responses to body tilt are also genetically determined. At the start, the motions of most of the Walkers are awkward and useless, with much aimless kicking, and so most of the Walkers immediately fall as soon as they are animated, although some are slightly better at staying upright for longer. By using a simplified genetic algorithm which repeatedly reproduces the best Walkers in the population, with chance mutations, and killing off the rest, the population improves locomotion skill over time. To demonstrate Walker's ability to react to a constantly changing situation, a virtual "leash" was attached to the head, which a user could gently pull around. Also, sce-

146

Jeffrey Ventrella

narios were constructed in which multiple Walkers had attractions and repulsions to each other, generating novel pursuit and evasion behaviors. Walker is the first in a series of alife creatures in this system consisting of linkedbody parts which are connected at their ends and which can rotate about each other. Figure 1b shows a creature from a later simulation of 2D creatures consisting of five interconnected segments, with four effective joints. As in the case of Walker, rota tions about the joints are controlled by sine functions with evolvable parameters. Fitness in this scheme is based on distance traveled. Although creatures in this population are 2D, locomotion is possible, with evolved populations consisting of creatures ambulating either to the left or to the right.

Fig 1. (a) Walker, (b) 2D walking figure.

2 A Morphology Scheme It is one thing to link together a set of sticks in a predetermined topology and then allow the motions of the joints to evolve for the sake of locomotion. It is another thing entirely to allow morphology itself to evolve. In the next incarnation of this alife methodology, a morphological scheme was designed which allowed the number of sticks, lengths, branching angles, and stick topology to vary. All of these factors are genetically determined. An important principle in this newer morphological scheme is that in this case, there are no implied heads, legs, arms, or torsos, as shown in Figure 3. Creatures are open-ended structures to which Darwinian evolution may impart differentiation and afford implicit function to the various parts.

Fig. 3. 3D creatures with variable morphology.

Designing Emergence in Animated Artificial Life Worlds

147

An extension of this scheme includes bodies with variable thickness among parts. Figure 4 shows a representative 3D creature from this set. These creatures are the most complex of all the morphological schemes, and can exhibit locomotion schemes and anatomies which exploit the effects of uneven distributions of mass among the body (consider for instance the way a giraffe uses the inertia of its neck when it runs). The figure in this illustration uses a tripod technique to stabilize itself. Creatures in this scheme tend to evolve 3 or 4-legged locomotion. Bipedal locomotion does not evolve: this is probably due to the fact that they have no sensors for balance, and thus no way of adjusting to tilt (as Walker did). A future enhancement will be to add 3D "vestibular" sensors, to enable bipedal locomotion.

Fig. 4. A 3D creature with variable thickness in parts.

2.1 Embryology Each creature possesses a set of genes organized in a template (the genotype). Genotypes are represented as fixed-length strings of real numbers. Each gene is mapped to a real or integer value which controls some specific aspect of the construction of the body, the motion of body parts, or reactivity to environmental stimuli. Genotypes get expressed into phenotypes through this embryological scheme. The typical genotype for a creature includes genes for controlling the following set of phenotypic features: number of parts the topology (branching order) of parts the angles at which parts branch off other parts colors of parts thicknesses of parts lengths of parts frequencies of sine wave motions for all parts amplitudes of sine wave motions in each part phases among sine wave motions in each part amplitude modulators of each sine wave (reactivity) phase modulators of each sine wave (reactivity) The design of a genotype-phenotype mapping has been one of the most interesting components of the design process in building alife worlds in this scheme. It is not always trivial to reduce a variable body plan to an elegant, simple (and evolvable) representation. Dawkins [5] promotes the concept of a "constrained embryology,"

148

Jeffrey Ventrella

whereby global aspects of a form (such as bilateral symmetry or segmentation) can be represented with a small set of control parameters. A representation with a small set of genes which are able to express, through their interactions during embryology, a very large phenotypic space, is desirable, and makes for a more "evolvable" population. 2.2 Motor Control The motor control scheme in this system is admittedly simplistic. Real animals do not move like clockwork, and do not move body parts in periodic fashion all their lives. However, as mentioned earlier, one must choose a level of abstraction. For the simulations designed thus far, this simplistic model of animal locomotion has been sufficient. Also, the addition of modulators to the periodic motions makes the motion less repetitive, and enables reactivity to the environment. A more reactive motor control scheme would be required if any behavior other than periodic locomotion is expected to evolve.

3 Physics This alife system evolved from a simple animation technique whereby motion was made progressively more complex. The notion of a physics also became incorporated in progressive steps. In the figures modeled in this scheme, deeper levels of physical simulation were added as needed, as more complex body plans were designed. In the present scheme, time is updated in discrete steps of unit 1 with each time step corresponding to one update of physical forces and one animation frame. A creature is modeled as a rigid body, yet which can bend its parts as in an articulated body. Torque between parts is not directly modeled in this scheme. The effects of moments of inertia and changing center of mass are modeled. A creature's body has position, orientation, linear velocity, and angular velocity. Position and orientation are updated at each time step by the body's linear and angular velocities. These are in turn affected by a number of different forces such as gravity, collisions of body part endpoints with the ground (for terrestrial creatures), forces from parts stroking perpendicular to their axes in a viscous fluid (for sea creatures), and in some cases, contact with user stimulus (a mouse cursor which pokes into the virtual space).

4 Nervous System It is possible that brains are simply local inflammations of early evolutionary nervous systems, resulting from the need in early animals to have state and generate internal models in order to negotiate a complex environment. This is an alife-friendly interpretation of brains. The design methodology employed here is to wire up the creatures with connectivity and reactivity to the environment, and then to eventually plug in evolvable neural nets which complexify as a property of the wiring to the environ-

Designing Emergence in Animated Artificial Life Worlds

149

ment. Braitenberg's Vehicles [3] supply some inspiration for this bottom-up design approach. The design philosophy used here does not consist of building a brain structure prior to understanding the nature of the organism's connectivity to the physical environment. If adaptive behavior and intelligence are emergent properties of life, then it is more sensible to design bodies first, and to later wire up a proto-nervous system in which brains (might) emerge. The "Physical Grounding Hypothesis" [4], states that to build a system that is intelligent it is necessary to have its representations grounded in the physical world. In this system, there are not yet any brains to speak of. It includes a simple mental model which is (at this point) purely reactive: sensors in a creature detect aspects of the its relation to the environment, which can directly transition its state to another state, or which directly control actuators (real-time changes in the motions of body parts). For instance, while in "looking for mate state", a creature will react to certain visual stimuli, which can cause it to enter into a new state, such as "pursuing a mate". A future enhancement includes building an evolvable recurrent neural net which mediates the sensor-actuator pathways, and can grow internal structure and representation in the context of the creature's direct coupling with the dynamic environment.

5 Evolution Models of Creationism are made implicit every time a craftsperson, animator, artist, or engineer designs a body of work and establishes a methodology for its creation. Darwinian models are different than Creationist models (though not absent of a creator). In Darwinian models, important innovation is relegated to the artifact. Invention happens after initial Design. Surprises often result. Genetic algorithms are Darwin Machines [11], designed to take advantage of lucky mutations. They are serendipity engines. The earlier alife simulations discussed use a variation of the standard genetic algorithm: hundreds of creatures are initialized with random genes, then each is animated for a period of time to establish its fitness ranking (in most cases, determined by how far it has traveled from the position at which it started). After this initial fitness ranking, a cycle begins. For each step in the cycle, two statistically fit individuals are chosen from the population, mated, and used to create one offspring which inherits genes of both parents through crossover. The offspring undergoes embryology and then replaces the least fit individual in the population. The offspring is then animated to determine its fitness. The process continues in this way for a specified number of cycles. This technique guarantees that overall fitness never decreases in the population. The most fit creatures are never lost, and the least fit creatures are always culled [17]. Figure 5 shows 4 snapshots of a locomotion cycle showing a 3-legged anatomy and locomotion strategy, which evolved from one of these simulations.

Fig. 5. An evolved 3-legged locomotion strategy.

150

Jeffrey Ventrella

6 Sexual Swimmers In the latest series of simulations, dimensionality was brought down from three to two, but the model was deepened by allowing reproduction to be spontaneous. To ground the simulation more in a physical representation, the concept of a "generation" was blurred. Instead of updating the entire population in discrete stages (a schedule determined by the "creator"), this system allowed reproduction to occur asynchronously and locally among creatures able to reach proximity to each other in a spatial domain. The notion of an explicit fitness function is thus removed [18]. Locomotion behavior evolves globally by way of local matings between creatures with higher (implicit) fitness than others in the local area. Figures 6a and b illustrate two swimming styles which emerged from this simulation.

Fig. 6. Sequences of images of swimming strategies: (a) paddling-style, and (b) undulatingstyle, which emerged through evolution.

Turn That Body. In order to pursue a potential mate which is swimming around (possibly pursuing its own potential mate), a swimming creature has to be able to continually orient itself in the direction it wants to go. A reactive system was designed which allowed the creatures to detect how much it would need to turn its body while swimming (analogous the the "tilt" sensor in Walker, only now covering 180 degrees in a plane). Essentially, the creature senses the angle between the direction of its goal and its own orientation as the stimulus to which it reacts, as shown in Figure 7. Fig 7. Turning stimulus

As opposed to Walker, which experiences a "magic pull" in the direction of its goal, these creatures must create their own means of propulsion, using friction with the surrounding fluid. Since the number of possible morphologies and motion strategies in these creatures is vast, it would be inappropriate to top-down-design a turning mechanism. Since turning in a plane for a 2D articulated body can be accomplished by modulating the phases and amplitudes of certain part motions, it was decided that evolution should be the designer, since evolution is already the designer of morphology and periodic motion. Thus, modulator genes (shown in the list above) were implemented which can affect the amplitudes and phases in each part's motions in response to the stimulus.

Designing Emergence in Animated Artificial Life Worlds

151

A commercial product was derived from this simulation and called "Darwin Pond" [13]. It enables users to observe hundreds of creatures, comprised of 2D line segments, representing a large phenotypic space of anatomies and motions. They can be observed in the Pond with the aid of a virtual microscope, allowing panning across, zooming up close to groups of creatures, or zooming out to see the whole affair. One can create new creatures, kill them, move them around the Pond by dragging them, inquire their mental states, feed them, remove food, and tweak individual genes while seeing the animated results of genetic engineering in real-time. Figure 8 shows a screen shot of the basic Darwin Pond interface.

Fig. 8. Darwin Pond interface.

Physics and Genetics Are Linked. It is important to point out that in this simulation, genetic evolution is totally reliant on the physical model which animates the creatures. Not only does it take two to tango, but they have to be able to swim to each other as well. In this simulation, genotypes are housed in physically-based phenotypes, which, over evolutionary time, become more efficient at transporting their genotypes to other phenotypes, thereby enabling them to mate and reproduce their genes. This is the selfish gene at work. The emergence of locomotive skill becomes meaningful therefore in the context of the environment: it becomes physically grounded, and the fitness landscape becomes dynamic. A variation on this simulation, called "Gene Pool" [20], was developed, and included a larger genotype-phenotype scheme, variable thickness in body parts, a deeper physics, and a conserved energy model. Figure 9 illustrates a collection of these 2D swimming creatures (before evolution has had any effect of body plan). They are called "swimbots". Fig. 9. Swimbot anatomies

What's Sex Got To Do With It? Gene Pool introduces mate choice in the process of reproduction. In this scheme, not only is the evolution of form and motion more physically grounded, and subject to the local conditions of the simulated world, but the aesthetic criteria which determine the "sexiest" motions also become a factor for reproduction.

152

Jeffrey Ventrella

What does "sexy" mean? Sexual selection in evolution is responsible for phenomena such as the magnificent peacock tail, and the elaborate colorful dances of many fish species, which sport curious body parts, adapted for mate attraction. Attractionbased features in many species may be so exaggerated and seemingly awkward that one would assume they decrease the overall fitness of the species, in which locomotion efficiency, foraging, and security are important. I arrived at an hypothesis: could mate preferences for arbitrary phenotypic features inhibit the evolution of energyefficiency in a population of locomotive creatures? For instance, if the creatures in the population were attracted to slow-moving bodies, and only mated with the most stationary creatures in the population, would locomotion skill emerge? If so, what kind? What if they were all attracted to short, compact bodies? Would the body plan become reduced to a limbless form, incapable of articulated locomotion? If they were attracted to wild, erratic motion, would the resulting swimbots be burning off so much energy that they usually die before reproducing? As an experiment, a simulation was built, including a variety of mate preference criteria determining which swimbots would be chosen [19]. In choosing a potential mate, a swimbot would take snapshots of each swimbot within its local view horizon, and then compare these snapshots and rank them by attractiveness. Criteria settings included: attraction to long bodies; attraction to lots of motion; attraction to bodies which are "splayed out," and attraction to massive bodies. The inverses of each of these phenotypic characteristics were also tested. As expected, in simulations in which length was considered attractive, populations with elongated bodies with few branching parts emerged. In these populations, locomotion was accomplished by a tiny paddle-like fin in the back of the body, or by gently undulating movements. Figure 10 illustrates a cluster of swimbots resulting from this simulation.

Fig. 10. Swimbots which evolved through mate preferences for long bodies.

Attraction to "splayed-out" bodies (in which the sum of the distances between the midpoints of each part is larger than average), resulted in creatures which spent a large part of their periodic swimming cycles in an "open" attitude, as shown in Figure 11. In this illustration, the creature is seen swimming to the lower right. Approximately one swim cycle is shown. Notice that the stroke recovery (the beginning and end of the sequence) demonstrates an exaggerated "opening up" of the body. It is likely that this behavior emerged because snapshots can be taken at any time by creatures looking for a mate, and so the more open a potential mate's body is, and the longer amount of time it is open, the more attractive on average it will be. These experiments show how mate preference can affect the evolution of a body plan as well as locomotion style. The effects on energy-efficiency were non-trivial: locomotion strategies emerged which took advantage of attraction criteria yet were

Designing Emergence in Animated Artificial Life Worlds

153

still efficient in terms of locomotion. As usual with many alife simulations, the phenotype space has unexpected regions which are readily exploited as the population adapts.

Fig. 11. Swimbots which evolved through mate preferences for splayed bodies.

6.1 Realtime Evolution The alife worlds described here incorporate physical, genetic, and motor-control models which are spare and scaled down as much as possible to allow for computational speed and real-time animation, while still allowing enough emergent phenomena to make the experience interesting and informative. The effects of Darwinian evolution can be witnessed in less than a half-an hour (on an average Pentium computer). For educational and entertaining experiences, it is important that there be enough interaction, immersion, and discovery to keep a participant involved while the primordial soup brews. The average short-attention-span hardcore gamer may not appreciate the meditative pace at which an evolutionary alife system runs. But alife enthusiasts and artistically or scientifically oriented observers may find it just fine. For watching evolutionary phenomena, it sure beats waiting around a couple million years to watch interesting changes.

7 Computer Graphics Rendering For the 3D stick figures, thick, dark 3D lines are drawn to represent the sticks. The interesting (and important) part comes in drawing a shadow, since this is largely the graphical element which gives the viewer a sense of the 3D geometry. The shadow is simply a "flattened-out" replica of all the lines (without the vertical component and translated to the ground plane) drawn with a thicker linewidth, and in a shade slightly darker than the ground color. Since no Z-buffering is used, the shadow is drawn first, followed by the sticks. For the creatures with variable thickness in parts, a number of techniques have been used, many of them are graphics hacks meant to reduce polygon-count or to avoid expensive Z-buffering and lighting algorithms. Since these are specifically alife-oriented worlds, without any objects besides abstract articulated bodies, specific graphics techniques are used. At the point in which these creatures are imbedded in a more comprehensive virtual world, a more generalized renderer would be useful. Rendering Behavior, Then Pixels. Part of the methodology is to take advantage of something that the eye-brain system is very good at: detecting movement generated by a living thing. It is important not to spend too much computational time on

154

Jeffrey Ventrella

heavyweight texturemaps, lighting models, and cute cosmetics, when more computation can be spent on deep physics and animation speed (to show off the effects of subtle movements that have evolved). It is a purist philosophy: simply show what is there. No pasting 2D images of eyes and ears on the creatures. Instead, as the simulation itself deepens (for instance, when light sensors or vibration sensors are allowed to evolve on arbitrary parts of bodies, as evolution dictates) graphical elements visualizing these phenotypic features will be rendered. They may be recognizably eye and ear-like, or they may not.

8 Commentary: Why Are There No Humanoids Here? The goal of Artificial Intelligence research is something magnificent, and something very human indeed. It appears as if the skills enabling one to play chess and use grammar are a very recent evolutionary trend, whereas climbing a tree is a much older skill, truly more complex, and in fact possibly an activity whose basic cognitive mechanisms are the primordial stuff of higher reasoning. These simulations do not aim to model humans. But if that were the goal, it may be best to design a vestibular system and an environment in which vertical, bipedal locomotion might be advantageous. Best to design an evolvable neural system (and to allocate lots of computer memory!) and a highly complex, dynamic environment, whereby something brain-like might billow like a mushroom cloud as an internal echo of the external complexity. Best to design a simulation in which objects existing in the world might be exploited as tools and symbols, extended phenotypes, cultural memes. What might evolve then could be something more human-like than the wiggling worms shown in the illustrations of this paper. But in the meanwhile, there is no rush on my part to get to human levels with these simulations. And at any rate, imaginary animals can be very thought-provoking.

9 Conclusion In this paper, I have described many of the techniques, concepts, and concerns involved in a methodology for creating animated artificial life worlds. This approach does not begin with a biological research agenda but rather an eye towards the creation of forms and motions which are lifelike and autonomousa branch of character animation. The alife agenda of studying emergent behavior by way of the crafting of simulations has become incorporated into this animation methodology, and with it, some key themes from evolutionary biology, such as sexual selection. It is hoped that these concepts and techniques will be useful for further advances in animated artificial life, for education, entertainment, and research.

References 1. 2.

Anderson, L., Oh, Superman (record album) Badler, N., Barsky, B., Zeltzer, D. Making Them Move. Morgan Kaufmann, 1991

Designing Emergence in Animated Artificial Life Worlds

155

3.

Braitenberg, V., Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, Mass. 1984 4. Brooks, R., Elephants Don't Play Chess. Designing Autonomous Agents, (ed. Maes), MIT Press, 1990 page 3. 5. Dawkins, R.: The Evolution of Evolvability. Artificial Life Proceedings, Addison-Wesley 1989 pages 201-220. 6. Goldberg, D. Genetic Algorithms in Search, Optimization, & Machine Learning. Addison-Wesley, 1989 7. Holland, J. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor. 1975 8. Koza, J., Genetic Programming: on the Programming of Computers by Means of Natural Selection. MIT Press, 92 9. Langton, C., Artificial Life. Addison-Wesley, 1989 10. Ngo, T. and Marks, J. Spacetime Constraints Revisited. Computer Graphics . pp. 343350. 1993 11. Penny, S. Darwin Machine (text published on internet) at: http://www-art.cfa.cmu.edu/www-penny/texts/Darwin_Machine_.html

12. Reynolds, C., Flocks, Herds, and Schools: A Distributed Behavioral Model. Computer Graphics, vol 21, number 4, July, 1987. 13. RSG (Rocket Science Games, Inc., producer) Darwin Pond, software product, designed by Jeffrey Ventrella, 1997. (contact: [email protected]) or visit (http://www.ventrella.com). 14. Sims, K. Evolving Virtual Creatures. Computer Graphics. SIGGRAPH Proceedings pp. 15-22 1994. 15. Sims, K. Evolving 3D Morphology and Behavior by Competition. Artificial Life IV. MIT Press 1994 16. Terzopoulis, D., Tu, X., and Grzeszczuk, R. Artificial Fishes with Autonomous Locomotion, Perception, Behavior, and Learning in a Simulated Physical World. Artificial Life IV. MIT Press, 1994 17. Ventrella, J. Explorations in the Emergence of Morphology and Locomotion Behavior in Animated Figures. Artificial Life IV. MIT Press 1994 18. Ventrella, J. Sexual Swimmers (Emergent Morphology and Locomotion without a Fitness Function). From Animals to Animats. MIT Press 1996 pages 484-493 19. Ventrella, J., Attractiveness vs. Efficiency (How Mate Preference Affects Locomotion in the Evolution of Artificial Swimming Organisms, Artificial Life VI Proceedings, MIT Press, 1998 20. Ventrella, J., Gene Pool (Windows software application), at... http://www.ventrella.com, 1998 21. Witkin, A. and Kass, M. Spacetime Constraints. Computer Graphics. 22(4): 159-168 Aug. 1988 22. Zeltzer, David. Task Level Graphical Simulation: Abstraction, Representation, and Control. from Making Them Move. ed Badler. 1991

ALife Meets Web: Lessons Learned Lui i

li rini1

ri l ol n 2

ilippo M n zr3

n

nri

utop Lun

4

1

The Danish National Centre for IT-Research University of Aarhus, Ny Munkegade bldg. 540, 8000 Aarhus C., Denmark 2 Institute of Psychology National Research Council, 15 Viale Marx, 00137 Rome, Italy 3 Computer Science & Engr. Dept. Cognitive Science Dept., University of California, San Diego, La Jolla, CA 92093–0114 USA 4 The Danish National Centre for IT-Research University of Aarhus, Ny Munkegade bldg. 540, 8000 Aarhus C., Denmark [email protected], [email protected], [email protected], [email protected]

Abstract Artificial life might come to play important roles for the World Wide Web, both as a source of new algorithmic paradigms and as a source of inspiration for its future development. New Web searching and managing techniques, based on artificial life principles, have been elicited by the striking similarities between the Web and natural environments. New Web interface designs, based on artificial life techniques, have resulted in increased aesthetic appeal, smart animations, and clever interactive rules. In this paper we exemplify these observations by surveying a number of meeting points between artificial life and the Web. We touch on a few implementation issues and attempt to draw some lessons to be learned from these early experiences.

1

Introduction

n r nt r t nt rn t n it p rt xtu l n r p i l orl i u t v v lop v r r pi l . v n t ou n w t niqu — on um n- om put r int r tion in orm tion r tri v l n n twor routin m t o olo i — v n ppli to r n o pro l m t m r n o uitl t niqu not n r pi t rowt in iz n om pl xit o t . n t l o rti i l li ( Li ) provi t rowin om m unit wit u ul in pir tion? tn w p r i m m t o olo i n l orit m o Li o r? t uit l rti i l nvironm nt to o t r n w u ul rti i l li orm ? oul w ot r to m r on t nt rpri o popul tin t wit int lli nt int r tiv utonom ou nt wit li li vior ? r im port ntqu tion t t rv t tt ntion o t Li om m unit. o tim ul t t i i u ion w t rt in t i p p r wit urv o om proj t in w i Li lr to v ri l r m tt . J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 156–167, 1998. c Springer-Verlag Berlin Heidelberg 1998 

ALife Meets Web: Lessons Learned

157

will o u on om -r l t i u w r Li in our opinion m om to pl po itiv rol . i u in lu p t o i tri ut in orm tion r n m n m nt u lin n u rpt ilit ( ri in t ppli tion to r n M n m nt tion); n p t o int r i n u t ti pp l nim tion n int r tivit ( urv in t ppli tion to nt r i n tion). rou outt p p r w i u om l on l rn t rou t rl xp ri n .

2

Applications to Web Search and Management

On o t ntr l o l o Li in our opinion i to ppl l orit m in pir n tur l t m to pr ti l ppli tion . ppli tion t t tl n t m lv to Li ppro r t o w o op r tin nvironm nt r im port nt r t ri ti wit n tur l nvironm nt. it t xplo ion o t nt rn t itwoul v n i ultnotto noti t tri in im il riti tw n t i rti i l nvironm nt n t o in w i r l r tur volv pt triv om p t oll or t l rn row r pro u n i . Li n tur l nvironm nt t i v r l r ;iti n m i wit o um nt in r m ov n m ov ll t tim ;iti t ro n ou — v n on i rin t xt lon — wit ont xt- p n ntl n u orm t n t l ;iti noi wit lot o irr l v nt out t or in orr tin orm tion; n iti i tri ut ot t tion v o t ( or x m pl t n twor l t n o i t wit in rt in p ). nnum r l p r i m v n rou t ourt to t m t into ttin om p tt rn m ili r to u r . v n t m r in l o utonom ou o tw r nt ow m u o it u to t r v il ilit o virtu ll in nit in orm tion nvironm nt n t i ulti n ount r u r tt m ptin to op wit u nvironm nt. n o pi r proj tw orn n xplor tion o t po i l m t tw n om o t tur t tm t om pl x nvironm nt n om o t m ni m t t llow oloi o or ni m to ptto i r nt— ut tl t qu ll om pl x— n tur l nvironm nt. n p rti ul r t l r n mi n i tri ut fl vor o t l to lin pro l m w n tr ition l in orm tion r tri v l m t o r m plo to uil r n in . n orm tion o t n om tl or r n in t n up t . t ro n ou or niztion o in orm tion m n in l lo l xplor tion tr t l t n optim l— no m tt r ow l v r. t n n tur t u outw to om pl m ntt m t o wit l tr ition l ppro ? n n tur no in l p i i optim l wit r p tto t w ol worl ; i pt to om ni in w i m m r o t t p i nt or i ion on v volv . opul tion o itu t m o il lo l ont xt ontinuou pt tion n ro u tn — in n tur l n rti i l nvironm nt li . i r nt u popul tion n p i liz in lin wit t p uli r r t ri ti o t lo l nvironm nt t xp ri n .

158

2.1

Luigi Pagliarini et al.

InfoSpiders

im o t n o pi r t m i to ppl n t t v r l m in l rnin t niqu in pir n tur l pt tion or pro l m po r in n m n in in orm tion on t . i r ntm t o v n xplor xt n n int r t or t i t . n t proj t w xp rim nt wit v r ion o t n ti l orit m m plo in lo liz l tion m to ov rom t pro l m o pr m tur onv r n n llow or i tri ut im pl m nt tion ;wit i r nt ntr pr nt tion to n l nt to int rn liz lo lt xtu l tur into t ir volvin vior ;wit r in or m ntl rnin to ptin ivi u l tr t i ov r ort-t rm tim n p l on lo l ont xt; n wit r l v n to p rm itt u r to i t r pro m l l n on-lin on pr viou n urr ntp r orm n . t il ription o t im pl m nt tion o t n o pi r t m i out o t op o t i p p r. nt r t r r n n u t il w ll r port on pr lim in r xp rim nt l w r 14 15 16 or on-lin 1 . r w lim itour lv to outlin t n r li in t m o l. r t t p i to i nti t ru i lr our t oo o t rti i l nvironm nt. or t t t n iti to qu t r our wit r l v ntin orm tion. in r l v n i u j tiv ( tu l r l v n p n on t u r n m u t tim t nt) in orm tion m u t tr n orm into i r- ntrop qu ntit ;t i in l urr n w i nt in popul tion urviv r pro u n i i ll n r . n r mut po itiv l orr l t wit p r orm n n u r n nvironm nt. nt n ronou l o t rou im pl l in w i t r iv input rom t nvironm nt w ll int rn l t t p r orm om om put tion n x ut tion . tion v n n r o t utm r ultin n r int . nr i u up n um ul t int rn ll t rou out n nt li ;it int rn l l v l utom ti ll t rm in r pro u tion n t v nt in w i n r i on rv . nt t tp r orm t t tt r t n v r r pro u m or n oloniz t popul tion. n ir tint r tion m on nt o ur wit out t n o xp n iv om m uni tion vi om p tition or t r nit nvironm nt lr our .M ut tion n ro ov r or t n n r or t volution o n m i ll pt nt. i p r i m n or n it p n nt l tion t xp t popul tion iz i t rm in t rr in p it o t nvironm nt. o i tin i n r o t wit xp n iv i ntu tion intrin i ll n or ln n twor lo lim itin in o n wi t . Collective Behavior pt tion m n or nt to on ntr t in i nr r o t w r m n o um nt r r l v nt. nt urviv l will n ur x n in n qu t flow o in orm tion or n r . itu tion i illu tr t t n p ot in i ur 1 illu tr tin t pi l oll tiv vior o n o pi r in r pon to qu r n lim it to w ll n un o t . t r w il lu nt v oun w tt u r w nt ;

ALife Meets Web: Lessons Learned

t pro p r n m ultipl in t i r l v ntni xplor t worl in r o lt rn tiv .

w il ot r

159

nt ontinu to

Figure1. n p ot o n o pi r r in t p or o um nt r l v ntto t qu r L w ov rnin r l tion m on ov r i n t t . o um nt ontolo i r pr nt wit m or p i topi rt r rom t nt r o t ir l n tu l rti l outi o t ir l . r l v nt o um nt r r pr nt t llow r un nown tto t nt. o um nti m r wit r t n l i iti tim t r l v nt t r t nt vi itin it. olor o n ntr pr nt it lin o t t ll nt rin olor n rom t m n tor. ( t i 1997 n lop i rit nni n . 2;t vi u l r pr nt tion i t ut or .)

Search Efficiency On o t ntr l r ult o t n o pi r proj ti to v own t titi u ul or in orm tion r l orit m to vi w t n tur l nvironm nt n r t riz itin u t rm . t r t nvironm nt l t rom l rnin nt p r p tiv ? t ti ti l tur u

160

Luigi Pagliarini et al.

wor r qu n i r o our ru i l im n ion . t n r u t t lin topolo tru tur im po in orm tion provi r upon t orniztion o o um nt i not r im port nt r our . v n in un tru tur in orm tion nvironm nt ut or t n to lu t r o um nt outr l t topi l ttin t m pointto ot r. i r t ln p t t nt n xplor m in u o orr l tion o r l v n ro lin . r lim in r r ult on t or ti l n l i im ul tion n tu l xp rim nt on orpor r v r n our in 14 . t n own t tlin topolo n in t t n xploit i tri ut nt outp r orm in x u tiv r n or r o m nitu . M or ov r t nr tw n volution l rnin n r lv n in u n ition l our- ol oo tin p r orm n 15 . t

3

Applications to Web Interface Design

nt r i n( ) i r ti ll n in in it p t n un tionlit . titi un l r in w t ir tion iti m ovin . r r tl tt r qu lit tor in ( ) l v r int r tiv rul ;( ) m rt nim tion; n ( ) int r tivit n m rt nim tion n t ti pp l.1 ntilr ntl m or i rti i l int lli n t niqu r li onl on nim tor or pro r m m r iliti ; n t t ti pp l o r li uniqu l on r p i i n r iliti . ow v r in u r w ntto touto t i un x itin l n p t r i m n or m or pp lin or rti ti t ti m u m rt r — i notli -li — nim tion n m or l v r — i not um n-li — int r tion . Li t niqu pp r m on t rli t n m o tpl u i l ni t to r om o t n . n in t r m in r o t i tion w ow om x m pl in w i Li lp u to ul llt n provi in or n w w to int r t( ) m in nim tion ri r n m or n tur l ( ) n xt n in t um n p iliti to uil t ti rti t ( ). 3.1

Web Interactive Cellular Automata

tn or nt r tiv llul r utom t . t i m o Li 13 (or n ot r) llul r utom ton 20 t ti ontroll r flowin t xt on n MLp . r flowin t xti om tiv t xtt t w n li n m i ll n t tt o t llul r utom t . tw initi ll r t to upport - i n n provi pl in n in n unpr i t l nim tion or p lo o . tw t n xt n to upport n m i om put r rt n ot r in o ppli tion . om x m pl o n n on t om p 3. nim tion m to lm o t r quir m ntin t urr nt ion o p ; om t in m u t lw m ov . t p i t ti it om orin . r or w lot o nim tion on t n t. M o t o 1

Here we will not take into account other important, but secondary factors, such as user-oriented interface personalization, interface legibility, multilevel functionality, etc.

ALife Meets Web: Lessons Learned

t m

r nim tion n t ot r r u u nim tion r notv r pl in m onotonou tim irrit tin ! v nim tion r ju t littl o x tl t m p tt rn i r l invitin . lt ou t n u or t m purpo im t lo o ) t r r rom m onotonou . lw unpr i t l onv in lin o li .

161

ll im pl v nim tion . n in tir om — om tt r. n l r p tition nim tion r i r nt. nim tion ( . . nir Li fl vor m t m

A WICA-Based Homepage n x m pl on i r n mi w it lo o own in i ur 2. i u o w im pl m nt in t om p o ro . om ni o ri i o 4 . wo o i r nt olor r up rim po on t p lo o n n w p tt rn i r l into on o t m tim not r p i l t li in p rt xtlin . in t ppl tr i on p r t M L r m iti lw vi i l v n w n ot r r m r n . lt ou t i pl ov r t lo o t lo o r m in l r n i not i n it. r ultin nim tion i m u m or int r tin unpr i t l n liv t n n nim t or pr - i n nim tion n . ttri ut t i tto t u o Li t niqu .

Figure2.

n mi w

it lo o.

p i im pl m nt tion w m o t tt llul r utom ton ppl t u onw l i m o Li rul . ow v r t worl (im pl m nt r n r itr r roun pi tur ) i r two p rll l in p n nt ov rl ppin . ti n int r tiv ppl t w r ontrol i i v t rou li in on t xtu l lin r t r t n t ort o ox u o utton n m nu . tu l im n ion o t ri r r itr r ut in u u ll o wit pi tur roun t iz i i t t t pi tur . l orit m im pl m nt t r m t o or m in it l ul tion i ntl 2 1.

il t urr nt n r tion i i pl on t r n t n xt n r tion i pr p r on n o - r n u r n t n r pl t urr nt r n in in l t p. 2. Onl ll t t n t ir t tu r r -p int .

2

The basic algorithm is due to Andreas Ehrencrona [5].

162

3.

Luigi Pagliarini et al.

n l ul tin t num r o liv n i or o ll nowl rom pr viou ll l ul tion i u . . in t l t olum n o t urr nt lli t m i l olum n o t pr viou ll t t tu o t l tn i or i nown n n not r l ul t .

n mi r tr o i on two p r ll l in p n nt pl in im ult n ou l in t m worl . worl i ontroll vript t t m nt t t tiv t v routin . n t i routin i tiv t n w p tt rn o ll i in rt into on o t . t tr iv t p tt rn i l t tr n om . l o l t tr n om r t r p r m t r o t n w p tt rn t p tt rn it l it olor n it lo tion wit in t r m . p tt rn it l i l t rom li to t r int r tin n w ll- nown un t l p tt rn t R-Pentomino t H-Heptomino n t Pi-Heptomino. olor i l t tr n om rom li t n in t M L l. i im pl m nt tion r ult in vin x tl two olor ( n two popul tion ) t n iv n m om nt utt olor m n ivin t illu ion o in n w popul tion . t l o iv om w tun xp t vior on o t popul tion u nl n olor n t n w p tt rn. Text Oriented Interactivity nim tion n int r t wit xt Ori nt nt r ( O ) 17 o t tt lw r n w t m lv . oi 6 own in i ur 3 i u n xt n ion o n not r x m pl o v lu om m uni tion. oi provi t u r wit n int r tiv m o Li im pl m nt tion t ti ontroll r -flowin t xt. w i n to r tin p r ll lto r ul r M L p rt xt v nt in orto -pro u tto r ul r p r qu t n lw pro u in t m ( l itunpr i t l ) vior n m l in n w p tt rn to t worl . oi on t ot r n i i n to p r orm v riou pr n tion . n oi v riou p rlin m n i r ntt in n p r orm i r nt i t tion . i i on u in m or iv r v - v riptint r w r m ultipl v routin n tiv t t v ript om m n . oi m u o t r tool ML r m v n v ript. ML r m r u or r tin p w r on p rto t p (t pro r m or t pi tur ) i t ti n on t ntl i pl in pr n lo tion w il t r t o t p n roll or n ont nt. i m t iz o t u r int r pr ti ll unlim it . v i u or t pro r m it l v ript or m nipul tin t p rt xtlin n t v - v riptint r i u to llow t t xtto ontrol t pro r m . t xt ri t ppl tt ti i pl on t ML p n n l rn wit it li in on t ppropri t llow t row in u r to pl wor . oi ppli tion t u m on tr t t t Li t niqu n u tion l v lu to t m in r in n int r tiv n liv pro . or x m pl ou n im in n rti i l Li nt r tiv n oo w r u in t i t niqu ou n tu ll tr n orm our r in into n tiv pro . n x m pl woul i ou o not now w t i n unit in n ur ln twor i ou ju t li on t i n unit t xt n ou t tt

ALife Meets Web: Lessons Learned

Figure3.

oi w

163

it .

orr pon in p rto t n ur l n twor pi tur i fl in . ti w tw m n r in n int r tiv pro .O our t i i t m o t t ti p rt o it. ou n l o t in o intro u in oo r flowin t xtinto runnin im ul tion t . 3.2

Flocking Web Creatures

lo 7 r not r x m pl in w i Li i u ro ov r wit int r i u . lo ( own in i ur 4 ) lon to t flo in Li r tur v ri t 19 rin wit t m t o i l t n n to ti to t r n t li -li m r nt vior w i i on w im pl lo l rul . i r rom m o tot r Li flo in ( oi -t p ) im pl m nt tion in t rritori l nim t t t n t ir t rritor in tintru r . n t tm t r im pl m nt v ppl t. m or vn ppl t llow in ivi u l lo to n tr it n p r on lit (i lo n lo ) n popul tion o lo to r n volv ( lo ).On n o rv v riou ppli tion o t i t niqu . oi l orit m n u in t nt rt inm ntin u tr ( . . in t ur i r m ovi or nim tin torm o pr i tori ir ). or n x m pl o it ppli tion to . nim tion r liz wit t i in o t niqu i m or li -li vi u ll pl in n m rt r t n tr ition l nim tion t niqu ;iti l o to r liz — on on l rn to pro r m t oi or im pl u t ir ppl t. or urt r i u in lo im pli tion or l tu o ov r w t il outt ir im pl m nt tion. n n r l t lo vior i ov rn two rul – –

rul p i in rul p i in

ow to r l t to on own in ; ow to r l t to tr n r .

nli m o tot r flo in l orit m it om m unit ut onl to it two lo

lo o notr l t to ll m m r o tn i or . i i w orrow

164

Luigi Pagliarini et al.

Figure4.

lo

w

it .

rom

l x ulli m li o 9 o w i lo i n nt. lo m otion ll tt to two ot r m m r (it n i or ) o t flo lo r l t to t n i orin lo n tri to t lo to t m .3 i nti in w t r t r o it own in or tr n r . twill tr n r w w il in it own t rritor or fl i tr n r w il outi o it own t rritor . n t i lo ppl t vior p r m t r r om m on to t w ol popul tion. n i lo in t lo n i n i r nttr it. m on t p r m tr t t n i n w n l r tion ion olli ion i tn n m xim um p . or x m pl n i lo wit i r l r tion n p tr it will t n to t r m or rupt n will pp r m or n rvou pont n ou n in ivi u li ti ; lo wit i r ion v lu will t n to lin m or to t om m unit n will loo li on orm i t. n on i lo i i n i p or l r tion tr it t w ol roup om littl on u n i or niz . n i lo wit low n l z tr it will l t in . olor o or tr n r v .lo l lo .Lo li lo v t r iv vior w il non-lo li lo v t fli t vior. n i lo r tr n orm to t tr n r olor t r tt vni t r t m jorit . ow v r w n m n tr n r r pr nt t lo l i lo pp r on u n o not tt m tiv l . mo t vn o t t r v r ion i t lo ppl t w i upport volution. i tup llow m nipul tion o m or t n 30 i r ntp r m t r . n ition v r l pr n vior t l n i n to t urr nt popul tion. ppl t l o upport t option o tt in num r r p i ll to lo in or r to tr in ivi u l lo il . lt ou ppl t o not upport l or urit r on urr nt t tu n r ult n li

3

An approximate neighbor detection algorithm is used for efficiency, which is of course a crucial aspect in Web design.

ALife Meets Web: Lessons Learned

165

w i pl in i t in orm tion r n. r non- volvin lo liv t rn ll (or until t ir n r rop to zro) lo liv on n r tion n n r tion n w n t tr n r i n ll ill . lo volv xu ll w r lo i t n nto two p r nt. two p r nt r l t or in to two tn ttri ut n r n t . n lo n in or lo t urin it li tim n t m or it t tt r it i . n r i lo t proportion ll to p n t i in rom proxim it to n i or . it ti t-up on n il m xp rim nt wit popul tion o lo vin pr n vior or p r orm im ul tion o t volution r pro tunin t r l tiv w i t o t n n r in t tn un tion.

3.3

Interactive Computer Art

lo

v x m pli ow Li m o l n im pl m nt to run ro t . ut o u tm or n t in u ul rom pointo vi w? po itiv n w r m o t in on i rin int r tiv om put r rt w r t u r int r t wit r p i lo j t r t r t n o . rti i l int r pro r m 1 21 10 i on o v r l ppli tion o Li to om put r r p i i n. ti on rti i ln ur ln twor n volution r l orit m . t t innin o volution r ion it n r t 16r n om im w i r orto -im or t p int r to t rtit wor . u r n l t our o t m p r nt w i t n pro u ixt n n w in ivi u l on titutin n w n r tion. u r n zoom out o t im r ull loo tt m n o rv n tr n n int r tion in om po ition olor t xtur orm n p r p tiv . onl p r m t r t u r n ontrol t n r tion i m ut tion r t (w i ow v r n i n i r ntl or n r tion).

Figure5.

li

liv

rtw

it .

166

Luigi Pagliarini et al.

ppli tion o t i pro r m n n in r n ui om p 11 n t li liv rt it 12 own in i ur 5 . it m on tr t oupl o point. ir t Li t niqu n ppli to r p i i n n t r or to in t o r n ui it . on t u r n int r twit n influ n pi tur ontrollin it Li volution. li liv rt it riv rom t rti i l int r n lo llow t u r to o o int r tin wit t t xtt ti tt to n p rto t pi tur .

4

m t up r

Conclusion

proj t urv in t i tin point tw n Li p p r on n t w wit Li t niqu nm tur to v r i r nt

p p r o r n to i li t om o t n t . w v n owin ll lon n nvirom ntw r rti i l r tur m ov . l o ow t ton n im in t nt wit v r i r nt un tion n prop rti .

ir tl own t n o pi r t i rtil roun to ppl n xplor Li -in pir l orit m . l troni in orm tion nvironm nt row n om m or om pl x w il t l o u r xt n rom om put r xp rt to l m nt r ool tu nt t n or o t om putin t nolo i n onl om m or trin ntin our vi w. m or t loo li n tur l liv nvironm nt t m or w m u t loo t r l livin t m or um lin in pir tion. i lo m to t or w v oun Li t niqu (in pir livin t m ) to provi num r o uit l tool t t m i t ilit t t i n o pl in int r . v own x m pl o u o u t niqu to i v pp lin t ti ( in t li liv rt it n r n ui om p ) m rt nim tion ( in t n lo it ) n l v r int r tiv rul ( in t oi m on tr tion). in ll on n l o vi w t r t tin roun or Li m o l x m pli in t lo tm . pr m i o t i nti m t o i t r pro u i ilit o xp rim nt. Li t niqu n im ul tor r o t n i ultto om p r u to t tt t n i r nt l o nvironm nt. provi Li pr tition r wit lo l l or tor in w i t n x m in t ir t ori n o oo i n . o

in u pro l m t nt i o virtu l worl on t w it n r to t into ountt o in o ppro w v own. m o tim port ntl on l rn i t in o t w n tur l nvirom nt w r to pl or ni m n oin t t on qui l r liz t t Li t niqu tt m om nt r t m o t ton or t r liztion o io-li worl . m

ALife Meets Web: Lessons Learned

167

References [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14]

[15]

[16]

[17]

[18]

[19] [20] [21]

http://www.cs.ucsd.edu/˜fil. http://www.eb.com. http://gracco.irmkant.rm.cnr.it/luigi/wica. http://gracco.irmkant.rm.cnr.it/domenico. http://www.student.nada.kth.se/˜d95-aeh/lifeeng.html. http://www.aridolan.com/JcaToi.html. http://www.aridolan.com/JavaFloys.html. http://www.arch.su.edu.au/ thorsten/alife/biomorph.html. http://www.vulliamy.demon.co.uk/alex.html. http://www.daimi.aau.dk/˜hhl/ap.html. http://www.rex.opennet.org. http://gracco.irmkant.rm.cnr.it/luigi/vedran. M. Gardner. Mathematical games. Scientific American, 233:120–23, 1970. F. Menczer. ARACHNID: Adaptive Retrieval Agents Choosing Heuristic Neighborhoods for Information Discovery. In D. Fisher, editor, Proceedings 14th International Conference on Machine Learning (ICML97). Morgan Kaufmann, 1997. F. Menczer and R. K. Belew. Adaptive Information Agents in Distributed Textual Environments. In Proceedings 2nd International Conference on Autonomous Agents (Agents 98), 1998. To appear. F. Menczer, R. K. Belew, and W. Willuhn. Artificial Life Applied to Adaptive Information Agents. In Working Notes of the AAAI Spring Symposium on Information Gathering from Distributed, Heterogeneous Environments, 1995. L. Pagliarini and A. Dolan. Text Oriented Interactivity. Technical report, C.N.R., Rome, 1998. Submitted to Applied Artificial Intelligence (AAI) Journal Special Issue Animated Interface Agents: Making them Intelligent. L. Pagliarini, H. H. Lund, O. Miglino, and D. Parisi. Artificial Life: A New Way to Build Educational and Therapeutic Games. In C. Langton and K. Shimohara, editors, Proceedings of Artificial Life V, pages 152–156, Cambridge, MA, 1997. MIT Press. C. W. Reynolds. Flocks, Herds, and Schools: A Distributed Behavioral Model. Computer Graphics, 21(4):25–34, 1987. J. von Neumann. The Theory of Self-Reproducing Automata. University of Illinois Press, Urbane, 1966. A. W. Burks, ed. V. Vucic and H. H. Lund. Self-Evolving Arts — Organisms versus Fetishes. Muhely - Hungarian Journal of Modern Art, 104:69–79, 1997.

Information Flocking: Data Visualisation in Virtual Worlds Using Emergent Behaviours Glenn Proctor and Chris Winter Admin 2 PP5, BT Laboratories, Martlesham Heath, Ipswich, UK. IP5 3RE. {glenn.proctor, chris.winter}@bt-sys.bt.co.uk

Abstract. A novel method of visualising data based upon the schooling behaviour of fish is described. The technique allows the user to see complex correlations between data items through the amount of time each fish spends near others. It is an example of a biologically inspired approach to data visualisation in virtual worlds, as well as being one of the first uses of VRML 2.0 and Java to create Artificial Life. We describe an initial application of the system, the visualisation of the interests of a group of users. We conclude that Information Flocking is a particularly powerful technique because it presents data in a colourful, dynamic form that allows people to easily identify patterns that would not otherwise be obvious.

1. Introduction

1.1 Flocking & Schooling Birds flock, fish school, cattle herd. The natural world has many examples of species that organise themselves into groups for some reason, for example to reduce predation. It has been shown [1] that many predators are "tuned" to hunting individuals, and are confused by large numbers of animals organised into a flock or school. Although the evolutionary advantages in flocks had been well characterised no simple models reproducing such behaviours had been demonstrated. Reynolds created computer simulations of such flocks by modelling a few simple rules [2], and christened individuals that undergo such flocking "boids". Animations based upon boid-like motion have appeared in a number of Hollywood films1. The emergent behaviour of the flock is the result of the interaction of a few simple rules. In Reynolds' simulation, these were:

1

Examples: herds of dinosaurs in “Jurassic Park”, flocks of bats in “Cliffhanger” and “Batman Returns” and the wildebeest stampede in “The Lion King”.

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 168-176, 1998 © Springer-Verlag Berlin Heidelberg 1998

Information Flocking: Data Visualisation in Virtual Worlds

169

 collision avoidance  velocity matching (flying with the same speed and direction as the others)  flock centring (trying to fly near the centroid of one's neighbours)

These rules are sufficient to reproduce natural behaviours, particularly if a predator is treated as an obstacle. However their simplicity allows the use of such selforganised behaviour to be extended to serve a more useful purpose: data visualisation. In Information Flocking, a fourth rule is added which modifies the motion of the individuals on the basis of some similarity measure. This measure can be derived from a set of data, with each individual boid associated with a single data item. The flocking motion then becomes a means of visualising the similarities between the individual data items. A virtual world was created to display the flocking behaviour. Initially this consists of a school of fish swimming around in 3D, but it is easily extended to include such concepts as attractive and repellent objects (which might attract or repel specific items), and predators which might act as data filters. The initial problem to which Information Flocking was applied is that of visualising the interests of a group of people. Previously, hierarchical clustering techniques [5] have been applied to such data sets. Neural network approaches have also been used [6]. In particular, Orwig and Chen [7] have used Kohonen neural nets [8] to produced graphical representations of such data. While these representations proved to be much faster and at least as powerful as subjective human classification, they are essentially static. Information flocking is dynamic in that the fish in the simulation can change their behaviour in response to changes in the underlying data set. The output is also dynamic (the fish “swim”) which allows the human viewer to identify patterns more easily.

2. Methods

2.1 VRML The prototype Information Flocking system was developed using VRML (Virtual Reality Modelling Language), version 2.02. This is a powerful, emerging technology that allows rapid development of graphical programs. Objects in a VRML "world" can be controlled by means of a Java script. This results in a system that can produce 3D, interactive graphical simulations that can be controlled using all the features of the Java programming language. VRML is also platform independent and allows easy creation of multi-user environments. A schematic of the system is given in Fig. 1.

2

See the VRML Repository at http://www.sdsc.edu/vrml/ for more information on VRML.

170

Glenn Proctor and Chris Winter

VRML World

Java Script Node Anchors to URLs Events fish geometry ROUTEs

fish geometry fish geometry

WWW Browser

Fig. 1. Schematic of a VRML world. The world is organised hierarchically; different node types define geometry, appearance, camera views etc.. A Script node allows interfacing to a Java program. This controls the behaviour of the fish. Communication is event-driven, and events are connected by structures known as Routes. Objects (e.g. fish) can be “anchored” to URLs; thus when a fish is clicked upon, the relevant page can be opened in a normal web browser such as Netscape. A more detailed description of VRML can be found at the VRML Repository (see footnote on previous page).

2.2 Modelling Emergent Behaviour The fish in the demonstration are initially positioned randomly. They then swim along a heading vector, which is the resultant of several component vectors, one for each of the behavioural rules. In the current system, these are:  collision avoidance - avoid collisions with other fish if possible. The vector for this

behaviour is calculated to take each individual away from its close neighbours.  velocity matching - each individual tries to swim in the same direction and at the

same speed as nearby fish, subject to a minimum and maximum speed.

Information Flocking: Data Visualisation in Virtual Worlds

171

 origin homing - try to swim towards the origin. This has the effect of keeping the

fish on screen, and results in their swimming around in a simulated "bowl", rather than disappearing into infinity.  information flocking - swim closer to fish that are similar, and further from fish

that are different3. The vector for this is calculated as the weighted resultant of all the vectors between the current fish and its neighbours. The weights are obtained from the correlation matrix described in §2.3. The result of these calculations is a vector representing the ideal heading for the fish in question. This ideal heading vector is averaged with the current heading to give the fish’s actual new heading. The calculation of the heading vector for a fish A is as follows: New heading = wCA-(bi - A) + wVM(C - A) + wOH(A - O) + wIFSij(bi - A) where:

wCA , wVM, wOH , wIF = weighting applied to Collision Avoidance, Velocity Matching, Origin Homing and Information Flocking behaviours respectively A, bi, O = position vector of fish A, fish i and origin respectively C = position vector of the centroid of fish A’s neighbours (see below) Sij = similarity of fish A and fish i. Notes: 1. The collision avoidance component of the heading vector is repulsive, while all the other components are attractive. 2. The similarity matrix is calculated at the start of the simulation, to maximise the speed. There is no reason that it could not be calculated on-the-fly from “live” data; this would result in the fish changing their behaviour dynamically to reflect the changes in the data. 3. At each time-step, two main calculations must be performed: i.

3

The matrix of distances between the fish must be recalculated. This matrix is symmetric, so in order to optimise the speed of the simulation, only half of it is

The weight, wIF, applied to the Information Flocking behavioural component, determines the strength of inter-fish attraction. In the “interest” data set that is used for the current demonstration, wIF lies between 0.0 and 1.0, so completely dissimilar fish have zero attraction. It would be feasible for wIF to be in the range –1.0 to 1.0; this would result in a more marked separation. The “interest” data set was used as obtained by the authors, without modification.

172

Glenn Proctor and Chris Winter

calculated and then copied into the other half (since the distance between fish i and fish j is equal to the distance between fish j and fish i!) Distances are actually stored as their squares in order to avoid large numbers of calculations of computationally-intensive square roots. ii. The near-neighbours of each fish must be recalculated. This is based on the distance matrix described above. Each fish can sense its environment to a very limited extent. They can "see" for a fixed distance in a sphere around them. No attempt has been made to implement directional vision; the simple spherical model was considered sufficient for our purposes. The distance that the fish can "see" can be varied; in most of our investigations so far a distance approximately equal to four body lengths has been appropriate. This point is discussed in more detail below. Each fish can be given a colour and a label, which appears when the mouse pointer is moved over it. URLs (Uniform Resource Locators) can be specified for each fish; when this is done, clicking on the fish will open the relevant web page in a browser such as Netscape. It should be pointed out that the choice of the fish representation was an arbitrary decision based upon ease of construction and rendering. The behaviour of birds, bees or sheep could be modelled with very little modification. Other workers, notably Terzopoulos, Tu and Grzeszczuk have presented work that concentrates on the simulation of the behaviour of real fish [3]. 2.3 Data Used The data used for the Information Flocking needs to provide weights for all possible pairwise interactions between the fish. Thus, it needs to be in the form of a square, symmetric matrix. In the current application, this is in the form of a matrix of similarities between the interests of a group of Internet users [4], but any data in the correct format could be visualised using this technique. The colours, labels and URLs of the fish are also read in from files. In the case of the ‘interest’ data, the labels represent the interests of the individuals, and the colour of each fish represents the primary interest of that person.

Information Flocking: Data Visualisation in Virtual Worlds

173

3. Results An example screenshot is shown in Fig. 2.

yellow

light blue

white green

dark blue purple

Fig. 2. Screenshot of the Information Flocking simulation. The labels represent the colours of the fish when seen on a colour monitor. Only the colours referred to in the text below are labelled. The ellipses in the figure serve only to delineate the colour groups; they do not appear in the actual simulation.

Watching the simulation for even a short period of time is very instructive. Several closely-related groups are immediately visible; the green, white and dark blue fish form small groups, indicating that the users in each of these groups have very similar interests. However, the individual groups also tend to form larger, more diffuse “supergroups”; for example the green, white, dark blue and purple fish all tend to move around in a large group, indicating a lower, but still significant, level of intergroup similarity. Other small groups remain distinct from the main group. The light blue fish tend to exhibit markedly different behaviour, indicating that the users whom they represent have substantially different interests to the other users.

174

Glenn Proctor and Chris Winter

More subtle interactions can also be seen. The two yellow fish ostensibly represent people with the same primary interest, yet they do not swim close to each other. Upon further inspection, the two individuals turn out to be interested in different aspects of the same subject, which is the reason for the separation of the fish representing them.

4. Future Developments Feedback on the prototype information flocking demonstrator described in this paper has been exceedingly positive; for this reason we are actively developing the system. Development is proceeding in three main areas: 1. Speed: currently the system runs at an acceptable frame rate (30 frames per second on a high-end PC with hardware 3D acceleration) for data sets represented by up to about 250 fish. Clearly this needs to be extended to handle data sets involving 1000 or more individuals (see note 2 below). Recent advances in VRML browsers, rendering engines and hardware graphics acceleration are also yielding considerable speed increases. 2. Data sets: we are investigating the properties of the system when applied to data sets other than the one initially used. Other correlation methods are also being tried. 3. Statistical investigations: through initial experiments with the current Information Flocking demonstrator, it appears that several factors can significantly affect the emergent behaviour. For example: 



the weights applied to each of the component vectors in the calculation of the heading vector. the distance within which the fish can detect each other

Preliminary results from some of these further developments are very illuminating. For instance, when a simulation with over 300 fish was set-up it was found that if the vision horizon was too long they formed a single, undifferentiated clump. If it was too short they never flocked but moved as individuals. There is clearly an optimal setting which leads to the formation of fluid groups, that keep partially breaking up and reforming. The suggestion here is that this phenomenon may be explained as a phase transition between a gaseous random state and a solid one, as the order parameter is changed. Clearly the system needs to be tuned into the phase transition zone to work best. Such phenomena are also seen in classical hierarchical clustering [5], and work is underway to reconcile these explanations. As the exact values depend on all the parameter settings, and on the numbers of fish, their relative volume and that of the virtual space this could be difficult. We are

Information Flocking: Data Visualisation in Virtual Worlds

175

now developing a system where the fish increase or decrease their horizon depending on their ‘satisfaction’ - the degree to which they have found similar fish near them. Early results suggests this leads to some particularly interesting phenomena akin to courtship behaviours.

5. Conclusions This paper has described the concept of Information Flocking. It shows that ideas in artificial life, developed to mimic biology have much more powerful applications outside the arena of simulations. The technique is powerful because it presents data in a form which is particularly suited to the human brain’s evolved ability to process information. Human beings are very good at seeing patterns in colour and motion, and Information Flocking is a way of leveraging this ability. The dynamics of the system provide a much greater degree of understanding that the initial data would suggest. Virtual worlds with virtual organisms could have a powerful impact on the way we use and visualise complex data systems. A number of demonstrators have now been built that show these concepts in action.

Acknowledgements The authors would like to thank the following: Barry Crabtree and Stuart Soltysiak for providing the initial data set on which this work was based. Tim Regan at BT Labs, John DeCuir at Sony Pictures Imageworks and David Chamberlain at Silicon Graphics/Cosmo Software for VRML-related help. Mark Shackleton, rest of the BT Artificial Life Group and Alan Steventon for many useful discussions.

References 1. Partridge, B.L., “The Structure and Function of Fish Schools”, Scientific American, June 1983, 90-99. 2. Reynolds, C.W. "Flocks, Herds and Schools: A Distributed Behavioural Model”, Computer Graphics, 21 (4), 1987. 3. Terzopoulos, D., Tu, X. & Grzeszczuk, R. “Artificial Fishes with Autonomous Locomotion, Perception, Behaviour, and Learning in a Simulated Physical World”, Proceedings of the Artificial Life IV Workshop, MIT Press, 1994. 4. Crabtree, B. & Soltysiak, S. “Identifying and Tracking Changing Interests”, IJCAL ’97 Workshop on AI in Digital Libraries.

176

Glenn Proctor and Chris Winter

5. Salton, G., Automatic Text Processing, Reading, MA, Addison Wesley Publishing Company Inc, 1989. 6. Macleod, K.J. & Robertson, W., “A Neural Algorithm for document clustering”, Information Processing and Management, 27 (4), 337-346, 1991. 7. Orwig, R.E., and Chen, H., “A Graphical, Self-Organizing Approach to Classifying Electronic Meeting Output”, Journal of the American Society for Information Science, 48 (2) 157-170, 1997. 8. Kohonen, T., Self-organisation and associative memory, Springer-Verlag, 1989.

Nerve Garden: A Public Terrarium in Cyberspace Bruce Damer, Karen Marcelo, and Frank Revi Biota.Org, a Special Interest Group of the Contact Consortium Contact Consortium, P.O. Box 66866 Scotts Valley, CA 95067-6866 USA Tel: +1 408-338-9400 E-mail:[email protected]

Abstract. Nerve Garden is a biologically-inspired multi-user collaborative 3D virtual world available to a wide Internet audience. The project combines a number of methods and technologies, including L-systems, Java, cellular automata, and VRML. Nerve Garden is a work in progress designed to provide a compelling experience of a virtual terrarium which exhibits properties of growth, decay and cybernetics reminiscent of a simple ecosystem. The goals of the Nerve Garden project are to create an on-line “collaborative Artificial Life laboratory” which can be extended by a large number of users for purposes of education and research.

Introduction During the summer of 1994, one of us (Damer) paid a visit to the Santa Fe Institute (SFI) for discussions with Chris Langton and his student team working on the Swarm simulation system [8]. Two fortuitous events coincided with that visit: SFI was installing the first World Wide Web browsers, and digital movies of Karl Sims’ evolving “block creatures” [1, 7] were being viewed through the Web by amazed students (figure 1). It was postulated then that the combination of the emerging backbone of the Internet, a distributed simulation environment like Swarm and the compelling 3D visuals and underlying techniques of Sims’ creatures could be combined to produce something very compelling: on-line virtual worlds in which thousands of users could collaboratively experiment with biological paradigms. In the three years since the SFI visit, we founded an organization called the Contact Consortium [11]. This organization serves as a focal point for the development of online virtual worlds and hosts conferences and research and development projects. One of its special interest groups, called Biota.org, was chartered to develop virtual worlds

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 177-185, 1998 © Springer-Verlag Berlin Heidelberg 1998

178

Bruce Damer et al.

using techniques from the natural world. Its first effort is Nerve Garden which came on-line in August of 1997 at the SIGGRAPH 97 conference [12].

Fig. 1. View of two of Karl Sims’ original evolving creatures in a competition to control a block. Iterations of this exercise with mutations of creature’s underlying genotypes yielding winning strategies created a large variety of forms. Above we see a two armed grasping strategy being bested by a single armed ‘hockey stick’ strategy. Courtesy K Sims. See [7] for the original MPEG movies of Sims’ creatures.

Three hundred visitors to the Nerve Garden installation used L-systems and Java to germinate plants models into a shared VRML (Virtual Reality Modeling Language) world hosted on the Internet. Biota.org is now developing a subsequent version of Nerve Garden, which will embody more biological paradigms, and, we hope, create an environment capable of supporting education, research, and cross-pollination between traditional Artificial Life subject areas and other fields.

Nerve Garden I: Architecture and Experience

Nerve Garden I is a biologically-inspired shared state 3D virtual world available to a wide audience through standard Internet protocols running on all major hardware platforms. Nerve Garden was inspired by the original work on Artificial Life by Chris

Nerve Garden: A Public Terrarium in Cyberspace

179

Langton [3], Tierra and other digital ecosystems by Tom Ray [2,3] and the evolving 3D virtual creatures of Karl Sims [1]. Nerve Garden sources its models and methods from the original work on L-systems by Aristide Lindenmayer, Przemyslaw Prusinkiewicz and Radomir Mech [5, 6] and the tools from Laurens Lapre [10].

Fig. 2. Lace Germinator Java client interface showing the mutator outputs on the left, the growth and naming interfaces at the bottom with a plant production visualized in the main window. The first version, Nerve Garden I, was built as a client-server system written in the distributed Java language. The client program, called the Germinator (figure 2), allowed users to extrude 3D plant models generated from L-systems. The simple interface in the Java client provided an immediate 3D experience of various L-system plant and arthropod forms. Users employed a slider bar to extrude the models in real time and a mutator to randomize production rules in the L-systems and generate variants on the plant models. Figure 3 shows various plant extrusions produced by the Germinator, including models with fluorescences. After germinating several plants,

180

Bruce Damer et al.

the user would select one, name it and submit it into to a common VRML 2.0 scenegraph called the Seeder Garden.

Fig. 3. Plant models generated by the Germinator including flowering and vine like forms. Arthropods and other segmented or symmetrical forms could also be generated. The object passed to the Seeder Garden contained the VRML export from the Germinator, the plant name and other data. The server-based Java application, called NerveServer, received this object and determined a free “plot” on an island model in a VRML scenegraph. Each island had a set number of plots and showed the user the plot his or her plant could be placed by moving an indicator sphere operated through the VRML external authoring interface (EAI). “Cybergardeners” would open a window into the Seeder Garden where they would then move the indicator sphere with their plant attached and place it into the scene. Please see an overview of the client-server architecture of Nerve Garden I in figure 4. Various scenegraph viewpoints were made available to users, including a moving viewpoint on the back of an animated model of a flying insect endlessly touring the island. Users would often spot their plant as their bee or butterfly made a close

Nerve Garden: A Public Terrarium in Cyberspace

181

approach over the island (figure 5). Over 10MB of sound, some of it also generated algorithmically, emanated from different objects on the island added to the immersion of the experience. For added effect, L-system based VRML lightening (with generated thunder) occasionally streaked across the sky above the Seeder Garden islands. The populated seeder island was projected on a large screen at the Siggraph hall which drew a large number of visitors to the installation.

Fig. 4. Nerve Garden I architecture detailing the Java and VRML 2.0 based clientserver components and their interaction. NerveServer permitted multiple users to update and view the same island. In addition, users could navigate the same space using standard VRML plug-ins to Web browsers on Unix workstations from Silicon Graphics, PCs or Macintosh computers operating at various locations on the Internet. One problem was that the distributed L-system clients could easily generate scenes with several hundred thousand polygons, rendering them impossible to visit. We used 3D hardware acceleration, including an SGI Onyx II Infinite Reality system and a PC running a 3D Labs Permedia video acceleration card to permit a more complex environment to be experienced by users.

Fig. 5. Flight of the bumblebee above a Seeder Garden

182

Bruce Damer et al.

In 1999 and beyond, a whole new generation of 3D chip sets on 32 and 64 bit platforms will enable highly complex 3D interactive environments. There is an interesting parallel here to Ray’s work on Tierra [2,9], where the energy of the system was proportional to the power of the CPU serving the virtual machine inhabited by Tierran organisms. In many Artificial Life systems, it is not important to have a compelling 3D interface. The benefits to providing one for Nerve Garden are that it encouraged participation and experimentation from a wide non-technical group of users. The experience of Nerve Garden I is fully documented on the Web and several gardens generated during the SIGGRAPH 97 installation can be visited on-line with a VRML 2.0 equipped web browser [12]. What was Learned As a complex set of parts including a Java client, simple object distribution system, a multi-user server, a rudimentary database and a shared, persistent VRML scenegraph, Nerve Garden functioned well under the pressures of a diverse range of users on multiple hardware platforms. Users were able to use the Germinator applet without our assistance to generate fairly complex, unique, and aesthetically pleasing models. Users were all familiar with the metaphor of gardens and many were eager to “visit their plant” again from their home computers. Placing their plants in the VRML Seeder Gardens was more challenging due to the difficulty of navigating in 3D using VRML browsers. Younger users tended to be much more adept at using the 3D environment. In summary, while a successful user experience of a generative environment, Nerve Garden I lacked the sophistication of a “true -Life system” like Tierra [2,9] in that plant model objects did not reproduce or communicate between virtual machines containing other gardens. In addition, unlike an adaptive L-system space such as the one described by Mech and Prusinkiewicz [6] the plant models did not interact with their neighbors or the environment. Lastly, there was no concept of autonomous, self replicating objects within the environment. Nerve Garden II, now under development, will address some of these shortcomings, and, we hope, contribute a powerful tool for education and research in the ALife community.

The Next Steps: Nerve Garden II The goals for Nerve Garden II are:  to develop a simple functioning ecosystem within the VRML scenegraph to control polygon growth and evolve elements of the world through time;  to integrate with a stronger database to permit garden cloning and inter-garden communication encouraging cross pollination between islands;

Nerve Garden: A Public Terrarium in Cyberspace

183

 to integrate a cellular automata engine which will support autonomous growth and

replication of plant models and introduce a class of virtual herbivores (“polyvores”) which prey on the plants’ polygonal energy stores;  to stream world geometry through the transmission of generative algorithms (such as the L-systems) rather than geometry, achieving great compression, efficient use of bandwidth and control of polygon explosion and scene evolution on the client side; Much of the above depends on the availability of a comprehensive scenegraph and behavior control mechanism. In development over the past two years, Nerves is a simple but high performance general purpose cellular automata engine written as both a C++ and Java kernel [13]. Nerves is modeled on the biological processes seen in animal nervous systems, and plant and animal circulatory systems, which all could be reduced to token passing and storage mechanisms. Nerves and its associated language, NerveScript, allows users to define a large number of arbitrary pathways and collection pools supporting flows of arbitrary tokens, token storage, token correlation, and filtering. Nerves borrows many concepts from neural networks and directed graphs used in concert with genetic and generative algorithms as reported by Ray [3], Sims [1] and others. Nerves components will underlie the Seeder Gardens providing functions analogous to a drip irrigation system, defining a finite and therefore regulatory resource from which the plant models must draw for continued growth. In addition, Nerves control paths will be generated as L-system models extrude, providing wiring connected to the geometry and proximity sensors in the model. This will permit interaction with the plant models. When pruning of plant geometry occurs or growth stimulus becomes scarce, the transformation of the plant models can be triggered. One step beyond this will be the introduction of autonomous entities into the gardens, which we term “polyvores”, that will seek to convert the “energy” represented by the polygons in the plant models, into reproductive capacity. Polyvores will provide another source of regulation in this simple ecosystem. Gardens will maintain their interactive capacity, allowing users to enter, germinate plants, introduce polyvores, and prune plants or cull polyvores. Gardens will also run as automated systems, maintaining polygon complexity within boundaries that allow users to enter the environment. Example of a NerveScript coding language spinalTap.nrv: DEF spinalCordSeg Bundle { -spinalTapA-Swim-bodyMotion[4]-Complex; -spinalTapB-Swim-bodyMotion[4]-Complex; }

184

Bruce Damer et al.

We expect to use Nerves to tie much of the above processes together. Like VRML, Nerves is described by a set of public domain APIs and a published language, NerveScript [13]. Figure 6 lists some typical NerveScript statements which describe a two chain neural pathway that might be used as a spinal chord of a simple swimming fish. DEF defines a reusable object spinalCordSeg consisting of input paths spinalTapA and spinalTapB which will only pass the token Swim into a four stage filter called bodyMotion. All generated tokens end up in Complex, another Nerve bundle, defined elsewhere.

Fig. 6. Nerves visualizer running within the NerveScript development environment. In the VRML setting, the pathways spinalTapA and spinalTapB are fed by eventOut messages drawn out of the scenegraph while the Nerve bundles generate eventIns back to VRML using the EAI. BodyMotion are a series of stages where filters draw off message tokens to drive simulated musculature on the left and right hand side of the VRML model, simulating a swimming motion. Message tokens that reach Complex, a kind of simple brain, trigger events in the overall simulation, and exporting messages back into the external pool.

Figure 6 shows the visualization of the sample NerveScript code running in the NerveScript development environment. We are currently connecting Nerves to the VRML environment, where it will be possible to visualize the operation in 3D. Nerves is fully described at the web address referenced at the end of this paper.

Goals and Call for Participation The goals of the Nerve Garden project are to create an on-line “collaborative A-Life laboratory” which can be extended by a large number of users for purposes of education and research. We plan to launch Nerve Garden II on the Internet and look forward to observing both the user experience and, we hope, the emergence of complex forms and interactions within some of the simple garden ecosystems. The Contact Consortium and its Biota.org special interest group would welcome additional collaborators on this project.

Nerve Garden: A Public Terrarium in Cyberspace

185

Acknowledgments In addition to the extensive contributions made by the authors of this paper, we would like to thank the following sponsors: Intervista, Silicon Graphics/Cosmo Software, and 3D Labs. A special thanks goes to Przemyslaw Prusinkiewicz for inspiring this work in the first place and numerous other individuals who have worked on aspects of the project since 1995.

References 1. 2. 3.

4. 5. 6.

Sims, K., Evolving Virtual Creatures, Computer Graphics (Siggraph '94) Annual Conference Proceedings, July 1994, pp.43-50. Ray, T. S. 1994a. Netlife - Creating a jungle on the Internet: Nonlocated online digital territories, incorporations and the matrix. Knowbotic Research 3/94. Ray, T. S. 1994b. Neural Networks, Genetic Algorithms and Artificial Life: Adaptive Computation. In Proceedings of the 1994 ALife, Genetic Algorithm and Neural Networks Seminar, 1-14. Institute of Systems, Control and Information Engineers. Langton, C. 1992. Life at the Edge of Chaos. Artificial Life II 41-91. Redwood City CA: Addison-Wesley. Prusinkiewicz, P., and Lindenmayer, A., eds. 1990. The Algorithmic Beauty of Plants. New York: Springer Verlag. Mech, R., and Prusinkiewicz, P. 1994. Visual Models of Plants Interacting with Their Environment. In Proceedings of SIGGRAPH 96 . In Computer Graphics Proceedings, 397-410. ACM Publications.

Online References 7. 8. 9. 10. 11. 12. 13.

Karl Sims’ creatures are viewable on Biota.org at: http://www.biota.org/conf97/ksims. html The Swarm Simulation System is described at http://www.santafe.edu/projects/swarm/ Tom Ray’s Tierra is fully described at http://www.hip.atr.co.jp/~ray/tierra/tierra.html Laurens Lapre’s L-parser L-system software tools and examples can be found at: http://www.xs4all.nl/~ljlapre/ Full background on the goals and projects of the Contact Consortium are at http://www.ccon.org and its special interest group, Biota.org are at http://www.biota.org Nerve Garden I can be entered at http://www.biota.org/nervegarden/index.html with a suitable VRML 2.0 browser installed. The Nerves home page, with language specification, examples and downloads is at: http://www.digitalspace.com/nerves/

A Two Dimensional Virtual World to Explain the Genetic Code Structure ? Jean-Luc T y ran Electricit´e de France - Direction des Etudes et Recherches 6, Quai Watier - BP49 - 78401 Chatou Cedex - France [email protected] [email protected]

Abstract. In this study, we introduce a remarkable squared tiling of the plan whose characteristics meet in many points those of the genetic code: same number of structural levels (3), same number of elements at each level (4, 64 and 20), same relationships between the elements of the different levels. To conclude, we formulate one hypothesis to explain these results and consequently we propose new ways of research in Artificial Life, but also in structural molecular biology.

1

Introduction and Purpose

In ArtificialLife studies,M ath ematics and math ematicaltools are importantto explore and to experiment(by means of computer simulations) th e beh aviour and th e fundamental structures of artificialliving sy stems. In real living sy stems studies, biological experiments are essential. In th is domain, some recent experiments led by James Ferris (in collaboration with Leslie Orgelof th e Salk Institute) sh ow th e importance of mineralsurfaces1 and – more precisely – th e importance of geometricalconstraints in th e early stages of bio-molecular development(i.e. atth e origin of life): mineralsurfaces restrict th e elementary molecular movements into a weak number ofparticular directions and consequently favorize th e molecular poly merization (and poly condensation) processes on th ese surfaces [1]. Purpose: we propose to study, in a virtualmath ematical plan, i.e. in a two dimensional virtual world,th e consequences ofgeometricalconstraints th ath ave been observed in th e realworld (ph y sico-ch emicalrestriction ofth e directions of molecular movements). 1

The hypothesis of the origin of life on mineral surfaces had remained, longtime, without determining clue. Now, the experimental works of James Ferris and al. bring some fundamental and decisive elements to satisfy to this hypothesis: montmorillonite surface for nucleotides polymerization (ImpA) and illite and hydroxylapatite surfaces for amino acids polymerization (glutamic and aspartic acids).

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 186–192, 1998. c Springer-Verlag Berlin Heidelberg 1998 

A Two Dimensional Virtual World to Explain the Genetic Code Structure ?

2 2.1

187

Simplest Tiling of the Ideal Mathematical Plan Tiling by Squares Requires the Weakest Number of Tiled Directions

In th e ideal math ematical plan, wh ich is th e equivalent representation of an elementary bio-molecule? In response,we can ch oose an elementary piece ofth e plan surface: for example, a poly gonaltile. T o link th ese artificial bio-molecules (th e link ing of tiles is similar to a poly merization process), itis necessary to be able to pave th e plan with th e same (or identical) regular poly gonaltiles. Only th ree ty pes of regular poly gon can tiled th e plan: equilateral triangle, square or regular h exagon (see Fig. 1). Among th ese possibilities, th e simplest is th e tiling by squares: itrequires th e weak estnumber of tiled directions and satisfies to th e purpose ofth is study.Accordingly,we ch oose th e tiling by squares.

Fig. 1. T h e th ree tilings of th e plan by identicalpoly gons: equilateraltriangles, squares or regular h exagons.T h e simplestis th e tiling by squares: only two tiled directions are required in th is case (th ree tiled directions are required in th e oth er cases).

2.2

Simplest Tiling and Lowest Number of Squares

In biological world, diversity is every wh ere. Life, into many differentforms, is diversity in action.Also,we mustintroduce some minimaldiversity in th e tiling process. In th is goal, we ch oose to tile th e plan with th e lowestnumber of all differentsquares. T h is ch oice corresponds to a simple perfect squared square of lowest order (see Fig. 2). Itis th e mosteconomic tiling possible and its math ematicalform is uniqueness.As a resultofgraph and combinatorialth eories,th is economic tiling h as been found by A.J.W. Duijvestijn [2]. T o obtain th e first generation square (th e most external and integrative square of length 112), we add twenty different squares to th e first square or germ (th e mostinside and little square of length 2). And so on, all along th e successive generation squares...Itis a russian doll process: to obtain a new generation square,or new integron,we add 20new squares to th e former generation

188

Jean-Luc Tyran

square, or former integron. T h e properties of th ese math ematical integrons are analogous to th e properties of th e integrons defined by F. Jacob for th e living sy stems [3].

27

35 50 8 15

17 2

9 29

19

11 6

7

24

18

25

16

4

33

37

42

112

Fig. 2. An Artificial Life integron: th e simplesttiling of a square by th e lowest number of all differentsquares.

3

A Set of Analogies Between the Simplest Tiling of the Ideal Mathematical Plan and the Universal Genetic Code

T h e simplesttiling, or integron, sh owed Fig. 2, is an asymmetrical figure. T h is figure mark s th e four directions of th e plan: U p (U ), Down (D), Left(L) and Righ t(R): i.e. a four letters alph abetin relation to th e four sides of th e asy mmetricalfigure.Itexists also a twenty letters alph abet: th e setofth e twenty new squares added ateach new generation. Between th ese two alph abets (th e first of four letters and th e last of twenty letters), itexists an intermediate alph abet. T h e sixty-four letters ofth is intermediate alph abetare th e sixty -four elementary segments of th e simplesttiling. With th e same number of alph abets (3) and th e same numbers of letters in th ese alph abets (4, 64 and 20), th e structure of th e universal genetic code (see Fig.3) is similar to th e structure ofth e simplesttiling.T h e first alph abetof th e genetic code: th e four nucleotides U racil (U ), Cy tosine (C), Adenine (A) and Guanine (G). T h e intermediate alph abet: th e sixty-four codons. And th e last alph abet: th e twenty amino acids (to mak e th e proteins).

A Two Dimensional Virtual World to Explain the Genetic Code Structure ?

1st •

U

C

A

G

2d •

U

C

A

G

3d •

Phenylalanine Phenylalanine Leucine Leucine

Serine Serine Serine Serine

Tyrosine Tyrosine Stop Stop

Cysteine Cysteine Stop Tryptophan

U C A G

Leucine Leucine Leucine Leucine

Proline Proline Proline Proline

Histidine Histidine Glutamine Glutamine

Arginine Arginine Arginine Arginine

U C A G

Isoleucine Isoleucine Isoleucine Methionine

Threonine Threonine Threonine Threonine

Asparagine Asparagine Lysine Lysine

Serine Serine Arginine Arginine

U C A G

Valine Valine Valine Valine

Alanine Alanine Alanine Alanine

Aspartic acid Aspartic acid Glutamic acid Glutamic acid

Glycine Glycine Glycine Glycine

U C A G

189

Fig. 3. T h e universal genetic code for th e translation of th e 64 codons into th e 20 amino acids. A codon is a mRN A word of th ree letters among four mRN A bases or nucleotides (U (U racil), C (Cy tosine), A (Adenine) and G (Guanine)): first(1st), second (2d) and th ird (3d) letters.

N ow, we define a setof “at least – at most” rules, or existence and association rules,to specify th e relationsh ips between th e 64 elementary segments and th e 20squares. Each square (among th e 20squares) is bounded by β elementary periph eral < segments: 4 < = β = 7 (see Fig.2).Existence or “at least” rule: to exist,each square (by analogy, each amino acid), must depend on one of its β bounded segments (by analogy, of one codon). Association or “at most” rule: each square cannotdepend on th e totality of its β bounded segments. Atth e most, itdepends on (β − 1) bounded segments: itis th e only one manner to establish some minimal association links with in th e unified simplesttiling structure. By application of th ese rules,we h ave found th e relationsh ips sh owed Fig.4. T h is figure exh ibits some remark able analogies with th e genetic code.For example,th e analogies th atexistbetween th e squares of length 35,17and 25 and th e amino acids arginine, serine and leucine. T h ere are only th ree squares (35, 17 and 25) to h ave th e h igh est value of β (βmax = 7). Atth e most, each of th ese th ree squares depends on 6 bounded segments ((βmax − 1) = 6). In th e genetic code, th is value (6) corresponds to th e h igh estnumber of codons th atcode to one amino acid. And th ere are only th ree amino acids in th is case: arginine, serine and leucine... In th e genetic code, arginine and serine are adjacents (i.e. in th e same box of th e AGX codons), and leucine is alone. In th e simplesttiling, we h ave some similar relative positions: squares 35 and 17 are adjacents and th e square 25 is alone...

190

Jean-Luc Tyran

27

35 50 8

19

15

11

17

6 9

7

24

18

25

29

16

4 33

37

42

Fig. 4. Each arrow indicates a relationsh ip between one particular bounded segmentand one particular square.In th e genetic code,itexists a similar relationsh ip between one particular codon and one particular amino acid. T h e th ree outside arrows correspond to th e th ree Stop codons.

4

Conclusions

4.1

How to Explain the Set of Analogies Between the Simplest Tiling of the Ideal Mathematical Plan and the Universal Genetic Code?

T h e results obtained and th eir precision lead to ask th e question of a cause–to– effectrelationsh ip between th e structure ofth e simplesttiling and of th e genetic code. How to explain th e link s between th e math ematicalelements of th e 2D simplest tiling (bounded segments and squares of an unified math ematical form th atis independent of all matter substrates) and th e particular 3D molecular substrates of th e genetic code (mRN A codons and amino acids)? An answer can be found in a setoftwo dualgraph s2 th atis th e math ematical basis ofth e simplesttiling: a firstU p–Down graph 3 link s th e h orizontalopposite U p and Down sides of th e Fig. 2, a second Left–Righ tgraph link s th e vertical opposite Left and Righ t sides of th e Fig. 2. T h e first graph h as 11 vertices 2 3

More precisely, two dual 3-connected planar graphs. Every horizontal line of the simplest tiling corresponds to a vertice of the Up–Down graph, and each square (between two successive horizontal lines) corresponds to an edge. Every vertical line of the simplest tiling corresponds to a vertice of the Left– Right graph.

A Two Dimensional Virtual World to Explain the Genetic Code Structure ?

191

and 22 edges. T h e second graph h as 13 vertices and 22 edges. T h ese graph s are isomorph ic to convex poly h edra. Accordingly to th e Euler formula4 : th e first poly h edron h as 11 vertices and 13 faces5 (10 triangles, one quadrilateral poly gon, 2 pentagons);th e second h as 13 vertices and 11 faces6 (4 triangles, 4 quadrilateralpoly gons,2pentagons and one h exagon). In [4],itis suggested th atth e firstgenetic coding sy stem can be started from specific interactions between two complementary 3D molecular substrates. From an optimalspace filling point–of–view (to favorize ch emicalinteractions and to minimize energy ),we th ink th atth e two unk nown 3D molecular substrates h ave th e same structures th atth e two previous dualpoly h edra,and we propose to research th e bio-molecules th atcould correspond (new way of research). 4.2

Some Propositions for New Ways in Artificial Life Research

T h e simplesttiling ofth is study,or integron,sh ows th e emergence ofan h ierarch y of components (complexification process) and may be a potential source of new ideas (or open way s) in Artificial Life research : – N ew rules for programming cellular automata...For instance, in a “ particular” array (i.e.th e integron of th e Fig.4 and its successive generations) with a rule th at uses th e arrows (to specify th e auth orized displacements) and work s differently in each of th e 20squares of th e Artificial Life integron... – Wh ich is th e equivalentsimplesttiling,or paving,ofan oth er space? In oth er words,wh ich is th e equivalentgenetic code of th is oth er space? In spaces of dimension7 > = 3 ? In non euclidean spaces? Life “ as itcould be”... Life is it an exclusive property of th e euclidean space? – Cross fertilization between th e Artificial Life integron and th e arithmetical relator language [5].Arithmetical relator can express th e world in structural levels (a quadratic form express th e adaptation of th e natural sy stem to its environment). Artificial Life integron is also expressed by a quadratic form. T h e integron (see Fig.4) presents a setofparticular sy metries: we th ink th at th e arithmetical relator – associated to th e semisimple Lie algebra – can be an useful tool to study th ese sy metries.

References 1. Ferris, J., Hill, A.R., Liu, R., Orgel, L.E.: Synthesis of long prebiotic oligomers on mineral surfaces. Nature 381 (1996) 59–61 2. Duijvestijn, A.J.W.: A simple perfect squared square of lowest order. Journal of combinatorial theory B 25 (1978) 240–243 4 5 6 7

V + F = E + 2, where V is the number of vertices, F the number of faces and E the number of edges. Generally, non regular polygonal faces. Ibidem. The simplest paving of a cube, by all different cubes, is not possible.

192

Jean-Luc Tyran

3. Jacob, F.: La logique du vivant. Editions Gallimard (1970) 4. Tr´emoli`eres, A.: Nucleotidic cofactors and the origin of the genetic code. Biochimie 62 (1980) 493–496 5. Moulin, T.: Imbrications de niveaux et esquisse d’un langage syst´emique associ´e ` a des relateurs arithm´etiques. Revue Internationale de Syst´emique 5 (1991) 517–560

Acknowledgments We th ank M ich el Gondran (EDF) for h elpful discussions and encouragements, Antoine T r´emoli`eres (CN RS – Orsay ) for tech nicalinteresting clues and th e th ree anony mous reviewers for constructive comments on th e manuscript.

Grounding Agents in EMud Artificial Worlds nton

o

rt

r

h nt m rg u

h `l

our nt

Institut d’Informatique, Universit´e de Fribourg (IIUF), Ch. du Mus´ee 3, CH-1700 Fribourg, Switzerland, @unifr.ch

Abstract. This paper suggests that in the context of autonomous agents and generation of intelligent behavior for such agents, a more important focus should be held on the symbolic context that forms the basis of computer programs. Basically, software agents are symbolic entities living in a symbolic world and this has an effect on how one should think about designing frameworks for their evolution or learning. We will relate the problem of symbol grounding to that of sensory information available to agents. We will then introduce an experimental environment based on virtual worlds called EMuds, where both human and artificial agents can interact. Next, we show how it can be applied in the framework of multi-agent systems to address emergence based problems and report preliminary results. We then conclude with some ongoing and future work. Keywords: autonomous agents, symbol grounding, virtual worlds.

1 Introduction

The development of virtual worlds provides new means to represent and interact with information stored on computers. In recent years, much effort has been devoted to improving the overall sensory rendering of such worlds to the human user. This has mainly been implemented through the development of better rendering algorithms, efficient graphical engines and the use of animation [1] or sound [12]. In parallel, toolkits easing the implementation of virtual realities (serial or distributed) have been proposed to address standardization issues (see for example [14]). In contrast, work focusing on virtual worlds in themselves has led to developments reaching the fields of artificial intelligence and artificial life, where virtual worlds serve as environments for the agents under study. A typical integration of these two points of view can be found in hybrid virtual reality toolkits compatible with languages for the study of multi-agent systems. Our work enters this framework by studying agents evolving in virtual worlds and stressing the importance of such designs for artificial intelligence (AI). The environment we use is a simple text-based virtual world inspired by the multi-user games called Muds that thrive on the Internet. This setup allows for rich levels of interaction between human and artificial agents, and the environment provides configurations well suited to testing artificial agent behaviors, as shown in [17].

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 193–204, 1998. © Springer-Verlag Berlin Heidelberg 1998

2 Artificial Intelligence and the Grounding Problem

The field of artificial intelligence is torn between the classical top-down approaches and newer bottom-up methodologies. This gap, originating in differing views of the nature of the mind, has led to important developments within each paradigm, while never allowing for a unification of methods.

2.1 State of the Art

In classical cognitive science, one considers the capacity to form and manipulate internal representations, or models, of the world (the usual metaphor for mind in this paradigm being the computer, i.e. symbol processing) [9]. This has led to the use of rule-based systems applying transformations over symbols, the design of the system resting on a representation of the problem domain. Such systems have shown impressive performance over specific tasks and provide a good framework for high-level reasoning, but they are highly sensitive to internal failures of sub-components, cannot be applied to problems outside of their representation, and generally do not scale up from one problem to more general ones. These shortcomings can be subsumed under two general problems: the frame problem [21][13], where classical systems fail to extend or adapt their internal representations, and the symbol grounding problem [11][2], where classical systems fail to relate their symbolic representations to their sensory information. On the other hand, the newer methodologies take a more materialistic approach, in which mind emerges from elementary interacting functionalities, and define intelligence as an agent's capacity to interact with its environment [16]. The focus is on agents adapting their behavior to acquire competence in an environment through learning or evolution. From the problem-solving point of view, this field has produced robust, adaptive, fault-tolerant algorithms. The downside of these approaches is that they do not search for optimal solutions, and when such solutions exist their internal workings are usually difficult to understand; they have nevertheless proved their usefulness in evolving robots able to perform tasks in dynamic environments (such as obstacle avoidance), although much work still has to be invested in order to tackle more complex tasks. In conclusion, where classical AI fails to relate symbolic representations to sensory information (the grounding problem) and relies on a "grounding hypothesis" to build its systems away from sensory information [2][3], new AI fails to produce systems capable of reasoning (what we should call the reverse grounding problem). While new AI entertains the hope of seeing complex behaviors (comparable to those produced by reasoning) emerge from the interaction of simple functionalities, we believe that the reverse grounding problem can be better addressed by starting off with perceptions of a higher level than those usually used. This motivates the use of simulated, symbolic virtual worlds as a starting point for the design of autonomous agents.

2.2 Embodiment

Traditionally, approaches to solving the grounding problem have adopted a bottom-up perspective: researchers suppose that if meaning is to become intrinsic to a computer program (as opposed to parasitic, i.e. linked to human interpretation), such a program will have to develop its understanding from sensory information. We call this the grounding hypothesis, and here question the level at which this hypothesis can actually be applied. New AI uses the concept of embodiment [23] to consider agents linked to the physical world through sensory-motor input/output (as in robotics), thereby avoiding handing detailed abstract models of the world to the agents (see also [3]). This physical embodiment is then considered the basis on which to try generating intelligent behavior. But this leads to diverging positions about the concept of embodiment when an agent is situated in an artificial environment: can we then imagine embodied agents, or does the concept of embodiment only apply to agents connected to the real world? There are two main reasons for this difference of opinion. First, simulated environments eliminate the complexity found in the real world, limiting the amount of information that can be perceived by an agent; this apparent noise would be essential for acquiring the ability to distinguish between relevant and irrelevant information in the agent's environment, as well as for learning to adapt and perform in a range of situations. To this objection, proponents of simulation answer that there is no fundamental difference between perceptually grounding an agent in an artificial environment or in the real world, and that the perturbations to the sensors due to the real world's richness of information can be considered random for all practical purposes, and thus replaced by pseudo-random signals superimposed on an agent's sensory input. Second, an agent evolved in an artificial environment might not be able to relate to the real world, being too tightly linked to its "tutorial" world to generalize behaviors for real-world operation. This objection arises from the idea that, following Maturana and Varela's enaction theory [23], the structural coupling induced by the embodiment of an agent in an environment conditions its cognition, thus seemingly preventing the integration of such an agent into a new environment or the real world. The debate remains open, although many authors now accept the idea of simulated embodiment. We actually believe that if an agent could be grounded in an artificial environment and allowed to evolve sufficiently complex relationships with that environment, the coupling induced would allow such an agent to be introduced into a new environment through its own environment without disrupting its functioning. This leads us to suggest that the embodiment of an agent in a virtual world can be essential for it to develop intelligent behaviors, and that the reason for this is linked with the type of sensory information software agents can perceive.

2.3 Sensory Information

With the bottom-up approach to building agents comes the feeling that such agents are naturally grounded and should thus have what it takes to develop cognitive processes, since they share, from the sensory point of view, much of the world humans perceive. This has evidently not been shown to be true (to this day), and one of the reasons can be found by asking a question similar to that posed by T. Nagel in his article "What is it like to be a bat?" [19]. Although we do not wish to delve into the existence of qualia, the question "What is it like to be a computer algorithm?" comes to mind and hints that, whatever sensory apparatus an artificial agent is equipped with, its perceptions will remain purely symbolic in nature. This is quite different from the perceptions available to living organisms in the physical world. Although human beings are grounded in the physical world, their brains allowing them to give meaning to abstract concepts, it should be a first step to attempt to ground computer systems in their own symbolic worlds. We believe that meaning is strongly rooted in the type of sensory information available to an entity, and that there is no equivalent to natural senses in software agents. Thus, under the hypothesis that meaning can become intrinsic to programs, such programs will have to be able to give meaning to symbols and entities in symbolic terms before giving meaning to symbols translated from physical entities. Our application therefore concentrates on making available rich symbolic worlds where agents can perceive information of their own nature, that is of symbolic type, and eventually perceive the physical world through the world they are grounded in. In this sense, our application provides a framework for simulated embodied agents.

3 EMuds

Based on the idea of accepting agents capable of symbolic-level perceptions, we have developed an application providing virtual worlds in which agents can interact. This application takes its inspiration from Internet games called Muds, for the rich range of actions they allow. We explain the principles on which Muds function before describing the design of an EMud.

3.1 What Is a Mud?

The acronym stands for Multi User Dungeon. Muds are Internet computer games which were developed following the principles of role-playing games. In such a game, the player takes up the role of a character in an invented world, where he can pursue goals within the constraints set by the general scenario of the game or its story line. The main interest of such games is to discover the world invented by the game designers or other players, and to interact with the creatures that populate such worlds, usually with the aim of evolving in the game. Muds take this idea into the computer world by setting up an artificial environment (world) where players can connect via telnet sessions and direct the actions of a character within that environment. Through this character, a player can then live a virtual life in the environment, leading to human social interactions, or, for example, explore the world and interact with computer-controlled creatures.

3.2 What Is an EMud?

EMud stands for Evolutionary Mud. An EMud is an instance of the EMud application given a specific world setup; there can be many EMuds, since there is a limitless number of virtual worlds that can be given as initialization for the application, each being a distinct experiment environment. To give a description of the application structure, it is necessary to understand some terms used to speak of an EMud. Technically, an EMud has two components: one which is totally reconfigurable at startup, the virtual world, the other which is fixed within the application, the rules governing the dynamic evolution of the virtual world. The virtual world is made of loci (places), elements (situated "things") and exits, directed links from one locus to another, allowing elements to move in the world and defining its topology. Elements come in many forms, typically either objects, creatures (autonomous agents) or player-characters (human agents). Note that elements may additionally enclose other elements, in which case they are called containers. All of these elements and loci are described in an initialization file that must be provided by the user and that details the setting of the virtual world in which an experiment will take place. How this world evolves is controlled by the application, which defines a number of skills that elements can apply to their environment; these skills should be considered as reflecting the laws governing evolution in the virtual world. Each element has a set of skills that are the actions it may undertake. To take an example, suppose the world is a house. In this house there are rooms: a library, a hall, etc. These are the loci. The elements in the house could be a table, books, a cat (a creature) and you (a player-character and a container). The cat has the skills of moving past doors (exits), eating and purring; you could have the skills of opening doors, picking up other elements and so on. This world would be rendered in an initialization file of the form given in Figure 1. What remains to be defined is how the elements choose to apply their skills. The application incorporates a control system that can associate an external control function to any element, provided that the function has an interface capable of interpreting messages sent by the environment to the element, and capable of sending command messages back to the element. Control algorithms must be programmed as plug-ins, in the same way human players have to understand messages from the EMud when connecting to it via a telnet session.

3.3 EMud Agents

In an EMud, agents are the elements that possess skills: they alone can transform the world by applying these skills. EMud agents should be considered software agents [20].

BEGIN LOCUS Library
  SIZE 200
  CONTENTS Table, Book1, Book2, Cat, You
  EXITS Hallway, Livingroom
END LOCUS Library

BEGIN ELEMENT Cat
  SIZE 8
  SKILLS look, move, purr, eat
  CONTROL CatBehavior
END ELEMENT Cat

BEGIN CONTAINER You
  SIZE 15
  WEIGHT ALLOWANCE 15
  SKILLS look, move, get, drop, open, close
  CONTENTS Book3, Glasses
  CONTROL ByTelnet
END CONTAINER You

Fig. 1. A typical EMud initialization file
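As an illustration of the plug-in control interface described above, the following is a minimal, hypothetical sketch of what a control function such as the CatBehavior entry of Fig. 1 might look like; the paper does not specify the programming interface, so the class shape, method names and message format are assumptions.

import random

class CatBehavior:
    """Hypothetical control plug-in for the Cat element: it interprets
    messages sent by the environment and answers with one command
    (a skill name) each time the scheduler queries it."""

    SKILLS = ["look", "move", "purr", "eat"]

    def __init__(self):
        self.last_message = ""      # most recent text message from the world

    def perceive(self, message: str) -> None:
        # The EMud sends plain-text descriptions of the current locus.
        self.last_message = message

    def decide(self) -> str:
        # A trivial policy: eat if food is mentioned, otherwise wander.
        if "food" in self.last_message:
            return "eat"
        return random.choice(["move", "look", "purr"])

A human player connected by telnet plays exactly the same role: perceiving the world's messages and sending back commands, which is what makes human and artificial agents interchangeable in an EMud.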

Since the EMud allows any control mechanism to be used for an agent, we are particularly interested in levels of autonomy in behavior. Three fundamental levels can be distinguished [26]: from automata agents (automated systems, implying that the agent is situated in an environment and can influence its own future existence through actions in that environment, a prerequisite for an agent to be autonomous [22]), one passes to operationally and then to behaviorally autonomous agents. Operational autonomy is the ability of the agent to operate without human intervention, while behavioral autonomy supposes the agent capable of transforming its own principles of operation: such an agent must be able to express new behaviors as its environment changes. The model of an autonomous agent is given in Figure 2. In the preliminary token-grouping experiment presented below we are dealing with operationally autonomous agents, but apart from allowing human-controlled agents to interfere in EMuds, further experiments will mainly be concerned with the building of behaviorally autonomous agents.

3.4 The EMud in Operation

From this terminology and description, it is clear that at the interior of the EMud application lies a coordinating entity that must handle the interactions between elements possessing skills in the world. This is due to the separation of the decision-making algorithms and the environment: when the world comes to life, a scheduler deals with the elements sequentially, requesting commands from each



Fig. 2. An agent: Environment → Perception → Detector → Control Algorithm → Agent Effector → Action → Environment

of their control mechanisms. The elements' skills can thus be applied whatever the type of control system, the EMud merely maintaining the link between an element and the entity making decisions for it.

Action

Agent Effector

r h t tur

h

Commands are then interpreted at the level of the world itself. The world is thereby kept independent of whether a human being or an algorithm of some kind makes the decisions of an element through its control function in the control world, as depicted in Figure 3.

3.5 Implementation Objectives

When implementing the application, we had two main goals in mind: first, to maintain the independence between the virtual environment kernel and the control algorithms for individual agents or agent populations; second, to keep a large degree of extensibility in the application, even for overall game-oriented purposes. These goals reflect our interest in being able to study different control algorithms within the same environment and also, in a second phase, to be able to attract the largest number of test trials available from the Internet when we start studying the effects of human interaction on computer adaptability (note that the first version of the EMud is reachable by telnet at a unifr.ch host on port 4000 and, while not under revision, humans are able to chat there). This second step is, for us, part of a transition from the study of adaptation within purely virtual animal worlds to worlds where some real-world parameters have to be taken into account by the control systems.


4 EMuds as Experiment Environments


EMuds naturally provide environments for classical animat experiments such as the artificial ant problem [4] or Wilson's Woods experiments [25], but there is enough functionality available for the description of far richer virtual worlds, and they are especially suited to approaching multi-agent questions on a comparative level. Since agents are implemented as plug-in entities independent of their control algorithms, it is possible to carry out comparative experiments with interacting agents whose control algorithms are of radically different natures. When we consider populations of agents, these possibilities open new perspectives for the study of cooperation and competition. We further intend to confront artificial agents in their environment with real-world influences, and we envision using the game-like nature of the EMud to build virtual worlds rich enough to attract human players from the Internet: the disruption of regular patterns within an environment when human interaction appears should give an indication of the robustness of the control algorithms used.



Fig. 3. EMud structure: the "virtual world" (Loci, Elements, Mobs, Players) and the "control world" (Skills, Control system, Populations, Server)

xp r m nt th nt r tng g nt h o ontrol lg or th m r o r ll r ntn tur . Wh n u th popul ton o g nt th po lt op n n p rp t or th tu o oop r ton n om p tton. W urth r ntto l to on ront rt l g nt n th r n ronm nt th r l orl nflu n n th n on u ng th g m lk n tur o th u to u l rtu l orl r h noug h to ttr t h um n pl r rom th nt rn t. h rupton o r g ul r p tt rn th n n n ronm nt h n h um n nt r ton pp r h oul g n ton o th ro u tn o th ontrol lg or th m u . 4.1

Preliminary Experiment

An introductory experiment used to test the EMud application was the implementation of the regrouping agents world defined in [6]. In this experiment, operationally autonomous agents are situated within a two-dimensional toroidal grid containing tokens. They can move in the four cardinal directions, picking up or dropping tokens along the way. From very simple control rules, a regrouping (or piling) behavior is observed to emerge in the environment. The control algorithm of the agents works as follows:

a) when an agent enters an empty cell, nothing happens and the agent moves on;
b) when an agent enters a non-empty cell and is not carrying a token, it will pick up a token and move on;
c) when an agent enters a non-empty cell while carrying a token, it drops the token, and a probabilistic rule decides whether it picks a token up again or not before moving on.
The probabilistic rule used in the decision to pick up a token is P(pickup) = N^(−α), where N is the number of tokens already in the cell and 0 ≤ α < 1. We chose α = 0.3, early tests revealing this value to give good results.


In these experiments, an agent can carry at most one token at a given time. When moving, the agent randomly selects one of the four available directions, with an equal chance of going in each direction. The only sensory information available to the agent is the number of tokens in the cell it is standing on, and the presence or not of a token in its own inventory. One can see the speed-up results of the experiment, using one to fourteen agents in the world, on Figure 4 a). These results were generated using a grid containing ten tokens.

Fig. 4. a) Speed-up versus number of agents; b) and c) results of a hundred experiments using six agents: number of iterations per experiment, and distribution of the experiments versus the number of iterations

For each number of agents, a hundred experiments were run using different initial configurations, the results shown being an average over this hundred. Given the problem involved, we observe linear to super-linear speed-up using two to six agents. From then on, additional agents tend to overcrowd the environment, disrupting the regrouping patterns exhibited when fewer agents are involved. To summarize, cooperation emerges in this multi-agent system when the number of agents ranges from two to eight. On Figures 4 b) and c) we display the overall results of a hundred experiments using six agents. Figure 4 b) plots the number of iterations needed to achieve the task in each experiment, and Figure 4 c) shows the distribution of experiments versus the number of iterations. The spread between the average, maximum and minimum numbers of time steps used underlines the fact that, due to the autonomy of the agents, even though we know the goal will be reached, we cannot know how or when. The origin of this regrouping problem lies in the study of cooperation within a model of robotic agents, where the functioning of the whole group depends on the individual competence of each robot as well as on their interactions. Emergent properties of such systems are analyzed in [6], and work is ongoing on porting these behaviors to the Khepera mobile robots [18]. The advantage of such an approach in collective robotics is that it eliminates the need for a global positioning system or a supervisor monitoring the stack of tokens: the location of the stack containing all the tokens is a result of the agents' interactions.

4.2 Further Experiments

We are currently implementing a more complete environment where two types of behaviorally autonomous agents coexist, each having different skills. In this experiment, agents have unsupervised learning capabilities and are rewarded for collecting tokens. One type of agent is allowed to pick up tokens freely, while the other can only steal them from agents already carrying some. From this setup, we expect to observe aggressive behaviors emerging in the thieving agents, while the normal collecting agents will most likely develop pick-up-and-run strategies. The control algorithm for these agents will be modeled on learning classifier systems (LCS) [10][24], giving them both a symbolic condition-action rule mechanism and learning through reinforcement and rule creation. The interaction of the two learning classifier systems will give an indication of how co-evolved rules behave when confronted in a dynamic environment.

Fig. 5. Schema of an LCS agent: detectors and effectors link the rule base and decision system, the learning algorithm and the rule discovery system to the EMud world

As drawn in Figure 5, the LCS agent architecture consists of a rule base, a reinforcement learning algorithm and a rule discovery system. Environment messages reach the detectors of the agent, and a selection over the rules whose condition part matches is performed; all matching classifiers then bid for their application, participating in an auction whose winner gains the right to apply its action part. The consequences of actions are continually evaluated by the reinforcement learning algorithm, which rewards or punishes the classifiers responsible for the latest actions, strengthening (resp. weakening) them for the next time they take part in an auction. After a predetermined number of time steps, the rule discovery system generates new rules from the fittest old rules and replaces unfit ones in the rule base.
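A compressed sketch of such a classifier system is given below, under the usual conventions of Goldberg [10] and Wilson [24]; the message encoding, the bidding noise and the coupling to the EMud are simplifying assumptions, not the authors' implementation.

import random

class Classifier:
    def __init__(self, condition, action, strength=10.0):
        self.condition, self.action, self.strength = condition, action, strength

    def matches(self, message):          # '#' acts as a wildcard position
        return all(c in ('#', m) for c, m in zip(self.condition, message))

class LCSAgent:
    def __init__(self, rules, bid_rate=0.1):
        self.rules, self.bid_rate, self.winner = rules, bid_rate, None

    def act(self, message):
        bidders = [r for r in self.rules if r.matches(message)]
        if not bidders:
            return None
        # auction: each matching classifier bids in proportion to its strength
        self.winner = max(bidders,
                          key=lambda r: r.strength * random.uniform(0.9, 1.1))
        self.winner.strength -= self.bid_rate * self.winner.strength  # pay bid
        return self.winner.action

    def reinforce(self, reward):
        if self.winner:                  # reward or punish the last winner
            self.winner.strength += reward

    def discover(self):
        # rule discovery: replace the weakest rule by a mutated copy
        # of the strongest one (a stand-in for a full genetic step)
        self.rules.sort(key=lambda r: r.strength)
        best = self.rules[-1]
        cond = ''.join(c if random.random() > 0.1 else '#'
                       for c in best.condition)
        self.rules[0] = Classifier(cond, best.action, best.strength / 2)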

5 Conclusion

The innovative aspect of this work is to reconsider the symbolic approach to AI by integrating characteristics of both top-down and bottom-up systems into a single system, each family being used to compensate for the weaknesses of both kinds of


systems. On the one hand, top-down systems are non-embodied systems whose functioning is based on symbol manipulation; on the other hand, bottom-up systems are embodied systems whose functioning is based on emergence. EMuds incorporate both embodiment and symbol-manipulation features. The advantages of this hybrid system lie in its propensity to evolve reasoning agents and in the accessibility of the internal structures of the agents to their designer. We have taken the view that physical grounding is not a necessary condition for a system to be grounded, but rather that symbolic grounding could be realized as a first step along the way to building fully grounded systems. Under the assumption that a system is able to give intrinsic meaning to the symbols it manipulates in its own symbolic universe, nothing will prevent it from giving meaning to objects of an outer world (e.g. the physical world) when these are perceived through sensors. For the agents, the virtual world acts as a filter through which the physical world reaches them symbolically, in a manner symmetric to that with which humans reach into abstract worlds through high-level cognitive functions based on physical processes. With this in mind, we have proceeded to build the EMud application, providing rich virtual world environments for artificial agents to evolve in. Our main goal is now to design full-scale experiments taking advantage of the rich set of symbolic tools and elements in these worlds, as well as including human-controlled agents sharing the same environment. Of interest would also be to extend the EMud world's possibilities by implementing a distributed version using a coordination language such as STL [15], opening the EMud application to allow agents to move between computers and thus reflecting a symbolic ecological niche.

References
[1] J.F. Balaguer. Virtual Studio: Un système d'animation en environnement virtuel. PhD thesis, EPF, DI-LIG, Lausanne, Switzerland, 1993. (In French.)
[2] R.A. Brooks. Intelligence without Reason. In Proceedings of IJCAI-91, Sydney, Australia, 1991.
[3] R.A. Brooks. Intelligence without Representation. Artificial Intelligence, Special Volume: Foundations of Artificial Intelligence, 47(1-3), 1991.
[4] R. Collins and D. Jefferson. Representations for artificial organisms. In J.-A. Meyer and S.W. Wilson, editors, Proceedings of the First International Conference on Simulation of Adaptive Behavior, From Animals to Animats, Cambridge, MA, 1991. MIT Press.
[5] Pavel Curtis. Mudding: Social Phenomena in Text-Based Virtual Realities. In Proceedings of DIAC'92, Berkeley, 1992.
[6] T. Dagaeff, F. Chantemargue, and B. Hirsbrunner. Emergence-based Cooperation in a Multi-Agent System. In Proceedings of the Second European Conference on Cognitive Science (ECCS'97), pages 91–96, Manchester, U.K., April 9-11, 1997.
[7] T. Duval, S. Morgan, P. Reignier, F. Harrouet, and J. Tisseau. ARéVi: une boîte à outils 3D pour des applications coopératives. Calculateurs Parallèles, 9(2), 1997.
[8] Leonard N. Foner. Entertaining agents: A sociological case study. In Proceedings of Autonomous Agents '97, 1997.
[9] S. Franklin. Artificial Minds. Bradford Books/MIT Press, Cambridge, MA, 1995.


[10] D.E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, Massachusetts, 1989.
[11] S. Harnad. The Symbol Grounding Problem. Physica D, 42, 1990.
[12] Jens Herder and Michael Cohen. Sound Spatialization Resource Management in Virtual Reality Environments. In ASVA'97 – International Symposium on Simulation, Visualization and Auralization for Acoustic Research and Education, pages 407–414, Tokyo, Japan, April 1997.
[13] J.H. Holland. Escaping brittleness: the possibilities of general purpose learning algorithms applied to parallel rule-based systems. In Michalski, Carbonell, and Mitchell, editors, Machine Learning: An Artificial Intelligence Approach, volume II. Morgan Kaufmann, Los Altos, CA, 1986.
[14] J. Hartman and J. Wernecke. The VRML 2.0 Handbook: Building Moving Worlds on the Web. Addison-Wesley Publishing Company, 1996.
[15] O. Krone, F. Chantemargue, T. Dagaeff, M. Schumacher, and B. Hirsbrunner. Coordinating Autonomous Agents. In ACM Symposium on Applied Computing (SAC'98), Special Track on Coordination, Languages and Applications, Atlanta, Georgia, USA, February 27 – March 1, 1998.
[16] H. Maturana and F.J. Varela. Autopoiesis and Cognition: the realization of the living. Reidel, Boston, MA, 1980.
[17] Michael Mauldin. Chatterbots, tinymuds and the Turing test: Entering the Loebner prize competition. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI'94), pages 16–19, Cambridge, 1994. MIT Press.
[18] F. Mondada, E. Franzi, and P. Ienne. Mobile Robot Miniaturization: a Tool For Investigation in Control Algorithms. In ISER'93, Kyoto, October 1993.
[19] T. Nagel. What is it like to be a bat? Philosophical Review, 83:435–450, 1974.
[20] H.S. Nwana. Software Agents: an Overview. Knowledge Engineering Review, 11(3):205–244, 1996.
[21] Z.W. Pylyshyn. The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Ablex, Norwood, NJ, 1988.
[22] L. Steels. When are robots intelligent autonomous agents? Robotics and Autonomous Systems, 1995.
[23] F.J. Varela, E. Thompson, and E. Rosch. The Embodied Mind: Cognitive Science and Human Experience. MIT Press, Cambridge, MA, 1991.
[24] S.W. Wilson. Classifier Fitness Based on Accuracy. Evolutionary Computation, 3(2), 1994.
[25] S.W. Wilson. Knowledge growth in an artificial animal. In Proceedings of the First International Conference on Genetic Algorithms and Their Applications, pages 16–23, Hillsdale, NJ, 1985. Lawrence Erlbaum Associates.
[26] T. Ziemke. Adaptive Behavior in autonomous agents. To appear in Autonomous Agents, Adaptive Behaviors and Distributed Simulations journal, 1997.
[27] T. Ziemke. Rethinking Grounding. In Austrian Society for Cognitive Science, editor, Proceedings of New Trends in Cognitive Science – Does Representation need Reality?, Vienna, 1997.

Towards Virtual Experiment Laboratories: How Multi-Agent Simulations Can Cope with Multiple Scales of Analysis and Viewpoints

David Servat(1,2), Edith Perrier(1), Jean-Pierre Treuil(1), and Alexis Drogoul(2)

(1) Laboratoire d'Informatique Appliquée, Orstom, 32 av. Henri Varagnat, 93143 Bondy, France, [servat, perrier, treuil]@bondy.orstom.fr
(2) Laboratoire d'Informatique de l'Université Paris 6, 4 place Jussieu, 75252 Paris Cedex 05, France, [David.Servat, Alexis.Drogoul]@lip6.fr

Abstract. In studying complex phenomena, one faces the difficulty of conceiving and understanding nonlinear dynamic processes, in which many interacting events produce emergent, recognizable patterns and eventually lead to macroscopic events. Such topics call for specific tools, among which the development of multi-agent simulations proves a promising approach. However, current multi-agent simulations provide no means of dynamically manipulating the groups of entities which emerge at different granularity levels. To our mind, giving full sense to multi-agent simulations would consist in making use of such potential groups, by granting them an existence and specific behaviors of their own, thus providing means of apprehending micro-macro links within simulations. We present our conceptual reflexion on such an organization, in the light of our own experience in the development of a project at Orstom which aims at simulating runoff and infiltration processes. We believe the development of our model in the field of physical processes will provide a new and powerful tool, useful for many multi-agent issues and modelling purposes, and a first step towards the conception of virtual experiment laboratories.
Keywords: multi-agent simulation, multiple levels of abstraction and analysis, emergent phenomena, micro-macro link.
This research is supported by a grant from Orstom.

1 Introduction

In studying complex phenomena, researchers face the difficulty of conceiving and understanding nonlinear dynamic processes: many interacting events produce emergent, recognizable patterns which eventually lead to macroscopic events. Multi-agent simulation offers a promising way of exploring such processes, but it raises the question of how the entities and structures appearing at different scales should be represented and handled.

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 205–217, 1998. c Springer-Verlag Berlin Heidelberg 1998 

Multi-agent simulations have mostly been applied to systems that naturally decompose into populations of individuals, for instance ecosystems such as the ant societies of MANTA (Drogoul et al. 1993). In such simulations, however, the emergent structures can only be observed a posteriori by the experimenter: the agents themselves have no way of perceiving them or making use of them, and the simulation provides no tools to manipulate, as entities in their own right, the groups that form at coarser granularity levels.

To our mind, giving full sense to multi-agent simulations requires making use of such potential groups, by granting them an existence and specific behaviors of their own, thus providing means of apprehending the micro-macro link within simulations. In the following, we first discuss the need for multi-scale viewpoints in simulations; we then propose regrouping rules, discuss the adhesion of individual agents to a group, the control issue within the group agent and the coexistence of recursive regroupings, before concluding.

2 The Need for Multi-Scale Viewpoints in Simulations

A first motivation comes from the simulation of physical processes. In fluid mechanics, for instance, models based on balls of fluid or on lattice-gas cellular automata (Fredkin 1990, Toffoli and Margolus 1990) describe a flow through the interactions of large numbers of simple entities, whereas the classical macroscopic description relies on the Navier-Stokes equations. In practice it is not possible to simulate the full complexity of such systems at the lowest level of granularity, and one would like the simulation to be able to move between the microscopic description and the macroscopic one. Our own work takes place within a project at Orstom which aims at simulating runoff and infiltration processes in hydrology: waterball agents run down a slope, and structures such as pools emerge from their interactions (see Figs. 1, 2 and 3).

Fig. 1. Waterball agents on the terrain.

Fig. 2. Pools of balls forming on a slope.

Fig. 3. Pools of balls.

At the finest level, each waterball moves over the cells of a discretized terrain, and pools appear where balls accumulate; pools may also dissolve themselves back into individual balls, for instance through infiltration into the soil. The hydrologist's viewpoint, however, is concerned with entities such as pools and flows, not with individual balls. Hence the need for simulations in which several scales of analysis, and the viewpoints attached to them, can coexist.

3 Regrouping Rules

The constitution of group agents rests on regrouping rules (see Fig. 4):
1. Each waterball agent carries a state: 0 when moving, 1 when stopped, 2 when it belongs to a pool. A stopped ball around which other balls accumulate may crystallize a new pool agent, and balls that come close to an existing pool are absorbed by it.
2. To avoid an explosion in the number of active entities, balls belonging to a pool lose their individual dynamics and delegate them to the pool agent, a mechanism reminiscent of the grouping of evolving agents into ecosystems in Moukas' Amalthaea system (Moukas 1996).

More formally, two group agents a and b regroup when |Ea − Eb| < θ, where Ea and Eb denote values characterizing the two agents' objectives or states, and θ is a threshold.
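A minimal sketch of this merge criterion follows; only the threshold test |Ea − Eb| < θ comes from the text, while the attribute used for E, the value of θ and the absorb bookkeeping are assumptions.

THETA = 0.5     # regrouping threshold (the paper does not give a value)

def should_merge(a, b):
    """Two group agents regroup when their characteristic values are close."""
    return abs(a.energy - b.energy) < THETA

def regroup(pools):
    merged = True
    while merged:                        # keep merging until stable
        merged = False
        for a in list(pools):
            for b in list(pools):
                if a is not b and should_merge(a, b):
                    a.absorb(b)          # a takes over b's member balls
                    pools.remove(b)
                    merged = True
                    break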

Tow

i u l

xp im n L

o

oi

211

v v o o s llow w wo s o s s los o o . v lly s y v o o l w so o o s o o ss s o s volv o o ow o l o o o v so s o l w s s. o o s ly l k o o o o s. s s o s l o o s o s l l s so w lls. o o ss y o s sso so o y lo l o o o o so s l y l l1997 s o o o o l o s w s o s lly o lly o s o lz o o P y s s. o oo o o o so l s l o s s o o o y s slv s o l v sy s l v lo o l x y q os l so s“ s l o s y v ov s l o s so s lly so s s o s( s) s so s l o so o so o ly o o so ow so s” ( l 199 ).

Fig. 4.

4 Adhesion to a Group

o l l s so o v o so

o

s s w ow v o o o o s sl. lls v llsw s so o s s v o . ow v ow o s o wo s ow o s ls v

k o

l s s o o ? s o ll w w o l o w o y s. y lly so l o o s o l ly l. ow v o x o o o j vs v l s o o o so o o y o y s. wo so s o o o o

o

y o s

s o

o

o o l

q o s so

o y o o

212

vi

v

l.

1.

o o o y s s w ll k ow o j so y o y olo ss s w y o v o o y o sly oj s l o k owl s v o so o y s. s sl y ov w x o o y os o s l o y xs o y o o y so s o o x s w ll o s v v o s o y o o o v o sly s lly oo x ss ow .1. s l o w ll o o vl xs o s o s. y o s ly o o o w lls os ss s vo o v o s y k ow lo ll v l flow w o o k ow ow o l s o w lls wo l o o . y o ly k ow o o vol ow y w lls w ll o o v l os o s s s. 2. s s v w w o o v o o (o o o w o s os o o )o o sw o v y o o o o o o sw lls water o . sw ll s x l o ’s x so y ll wo k w s s ( x l o 199 ) o s o l o o v w ll wo k s s o v s o s s s(s s s o olo s) o x lv w o w o v w o ( )w s o ly s o sow ( o x l w o s sw s). o v o o l w w o o s y o s l o . s

ll w y w oo o s o sw o y o o o y s. s s s sw l o sov w o y s w o o k ow o w v s so s so s o s w w y. ow v w v o wo k so o s o o ly sw ll l ll l o l w o ow lls y o so o o s . s v l s o sw os y s o os y s o wo s y o l y o o s. so o o y o s s l o o ss o s. y v so so o o s s s v o o vo o o o o o . o s l o ’s s slo s o o s s s y s o v l s. o o oosso w s o o s o w o s. s v so o s l o w v o o o y l v l wo l o . sv y o s sl o y l o o v ooss o w s los o sl. l o o s s s os sy s w y k o w s o sy s so vo so s o solv o l o fl s w sv l oss l o s.

In both conceptions, the dissolution of a group must also be defined: a group agent may vanish when the conditions that made it emerge disappear, returning its members to their individual existence, or when all of its members have left it.

5 Control Issue Within the Group Agent

Once a group agent exists, who controls the behavior of its members? Two options can be considered.
1. The members keep their autonomy: the group is then little more than an observer, which keeps track of its members and of emergent properties, while the individual agents continue to behave by themselves.
2. The group takes control over its members: the individual dynamics are suspended, and the group agent computes the collective behavior at its own level, with its own laws. In our case, a pool agent, instead of letting each waterball move, directly handles the level of water it contains, its outflows and its exchanges with neighboring cells and pools, and it dissolves itself back into individual balls when this macroscopic description is no longer appropriate.
The second option is the one that brings a real change of scale, since it replaces a large number of microscopic computations by a single macroscopic one; but it requires a means of restoring the individual dynamics whenever the group dissolves itself. This is also what makes it delicate: the behavioral laws at the group level must remain consistent with those of the lower levels they summarize.

6 Coexistence with Recursive Regroupings

The regrouping mechanism naturally iterates: group agents may themselves regroup into higher-level agents, so that a whole hierarchy of levels of analysis can emerge during a simulation. In our application, pools merge into larger pools as the water level rises, and a network of pools and flows can itself be considered at a coarser scale.

Fig. 5. Coexistence of recursive regroupings across all levels of analysis.

An important point is that lower levels of analysis always remain present: higher-level agents do not replace their components but coexist with them, and a group agent may, at any time, dissolve itself back into the agents it was built from.

7 Conclusion

We have presented a conceptual reflexion on the coexistence of several scales of analysis and viewpoints within multi-agent simulations, in the light of our experience with the simulation of runoff and infiltration processes. Granting emergent groups an existence and behaviors of their own provides a way of apprehending the micro-macro link within simulations, a question also raised by object-oriented approaches to hierarchically organized systems and by platforms such as Swarm (Swarm 1994). Much work remains to be done, notably on the way group agents should share control with their members and on the synchronization between levels; but we believe the development of our model in the field of physical processes will provide a new and powerful tool for many multi-agent modelling purposes, and a first step towards the conception of virtual experiment laboratories.

References
Axelrod, R.: A model of the emergence of new political actors. In: Artificial Societies. UCL Press, London (1995)
Balian, R.: Le temps mésoscopique. (In French.)
Drogoul, A., Corbara, B., Fresneau, D.: MANTA: new experimental results on the emergence of (artificial) ant societies. In: Simulating Societies Symposium, Siena (1993)
Ferber, J.: Les systèmes multi-agents, vers une intelligence collective. InterEditions (1995) (In French.)
Fredkin, E.: Digital Mechanics: an informational process based on reversible universal cellular automata. In: Gutowitz, H. (ed.): Cellular Automata: Theory and Experiment. MIT Press/North-Holland, pp. 254–270 (1990)
Gilbert, N.: Emergence in social simulation. In: Artificial Societies. UCL Press, London (1995)
Marcenac, P., Calderoni, S.: Self Organisation in agent simulation. In: Proceedings of MAAMAW'97, Ronneby, Sweden, May 1997
Marcenac, P., Calderoni, S., Courdier, R., Leman, S.: Construction expérimentale d'un modèle multi-agents (1996) (In French.)
Moukas, A.: Amalthaea: Information Discovery and Filtering Using Multi-agent Evolving Ecosystems. In: Proceedings of the 1st International Conference and Exhibition on the Practical Application of Intelligent Agents and Multi-Agent Technology (1996)
Perrier, E.: Modélisation du fonctionnement hydrique des sols. Passage de l'échelle microscopique à l'échelle macroscopique. Orstom éditions (1990) (In French.)
Perrier, E., Cambier, C.: Une approche multi-agents pour simuler les interactions entre infiltration et ruissellement d'eau sur une surface de sol. In: Tendances nouvelles en modélisation pour l'environnement (1996) (In French.)
Servat, D.: Emergence et existence de groupes dans les systèmes multi-agents. DEA thesis, University of Paris 6 (1997) (In French.)
Projet de simulation du ruissellement et de l'infiltration d'eau par agents. DEA thesis, University of Paris 6 (In French.)
Swarm Team: An Overview of the Swarm Simulation System. Santa Fe Institute (1994)
Toffoli, T., Margolus, N.: Invertible cellular automata: a review. In: Gutowitz, H. (ed.): Cellular Automata: Theory and Experiment. MIT Press/North-Holland, pp. 229–253 (1990)
Treuil, J.-P., Perrier, E., Cambier, C.: Directions pour une approche multi-agents de la simulation de processus physiques (1996) (In French.)

A Model for the Evolution of Environments

Claude Lattaud* & Cristina Cuenca**

* Laboratoire d'Intelligence Artificielle de Paris 5 – LIAP5, UFR Mathématiques – Informatique, Université René Descartes, 45, rue des Saints-Pères, 75006 Paris. E-mail: [email protected]
** Institut International du Multimédia, Pôle Universitaire Léonard de Vinci, 92916 Paris La Défense Cedex. E-mail: [email protected]

Abstract. In artificial world research, environments as such are rarely studied. Both in multi-agent systems and in artificial life, the dominant work revolves around the active entities, the agents. The aim of this paper is to propose a theoretical approach to an environment model. It must make it possible to deal with the different existing environment types, but it also adds a creative touch by introducing the concepts of meta-environment and multi-environments. This model defines evolutionary schemes for an environment, and the different ways environments can interact.

Keywords: adaptive environments, environment model, environment interactions, multi-environments systems.

Introduction

John Holland writes in one of his first papers [1]: « The study of adaptation involves the study of both the adaptive systems and its environment. In general terms, it is a study of how systems can generate procedures enabling them to adjust efficiently to their environments. » However, since 1962, most research has focused on the adaptive systems rather than on their environment. This paper is a first step towards filling this gap, and proposes a theoretical study of an environment model. Even when work on adaptive systems integrates the idea of a dynamic environment, the environment generally has only a weak level of dynamism and no truly evolutionary process. That is why it seems important to consider the

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 218-228, 1998 © Springer-Verlag Berlin Heidelberg 1998


environment not as a dynamic support where agent populations [2] or animats [3] evolve, but as an entity by itself. This entity will be able to develop particular behavior models, but also a non-fixed structure. These environment behaviors should be able to evolve following the different interactions with the environment's objects and agents. Nowadays, most systems include a single environment containing agents and objects. But couldn't a system integrate more than one environment? In such a case, it is necessary to define, on one side, the different interaction laws that can exist between the environments and, on the other, a meta-environment able to manage these laws, considering each environment as an agent. The purpose of this paper is then to define an environment model which integrates all of these ideas. This model must allow environments to be classified and ordered, thereby giving the possibility to determine transition and evolution stages between the different environment classes. The task of the first part is to establish this classification in a hierarchical way. The second part then presents the concept of meta-environment and the different interaction types that a multi-environments system can have. The next part concerns the definition of a protocol for environment evolution and, finally, the last part shows an example of a multi-environments system using the model previously defined in this paper.

1. Environment Model

This model relies, on one side, on the definition of a taxonomy of environment classes and, on the other, on the use of the characteristics and parameters of each of these classes. From an object-class point of view, and also for simplicity of use, the chosen hierarchy is a simple one, i.e. each class has one and only one parent. Taking into account the important use of artificial life techniques such as genetic algorithms [4] and classifier systems [5], a genotypic description of the model classes is presented to conclude this part.

1.1. Taxonomy and Classes Characteristics

This taxonomy defines five main classes. Two of them are abstract ones and the other three are concrete ones. Fig. 1 shows this hierarchy. The abstract environment class is the root class of the hierarchy. It has some parameters which are common to all its sub-classes:
– The environment size, denoted S. This parameter determines if the environment is infinite or finite; in the latter case, S contains a positive integer value whose unit depends on the environment type.
– The set of the environment objects, O. This set can be decomposed into several parts: the active objects set, i.e. the agents, the inactive objects set, and the sub-environments.


– The set of interaction laws between the different environment elements, I. These laws concern the interactions between agents and objects, agents and agents, but also the interactions between the environment and its components.

Fig. 1. Environments taxonomy: the abstract environment divides into situated and non-situated environments, the situated environment into discrete and continuous environments.

– The set of evolution laws of the environment, E. This set determines the rules followed by the environment to evolve in time, and also the transition functions which allow it to pass from one stage to another.
The quadruplet (S, O, I, E) then defines an abstract environment. The situated environment class is generally used in simulations concerning multi-agent systems and in various artificial life problems. An environment is called situated if it is constructed from a space having a metric, and if its objects and agents have a position in relation to this space. The parameters of this environment type are:
– The structure, or architecture, of the environment, A. Different structure types exist, such as a toroidal, square, cylindrical or spherical space.
– The environment dimension, D. The environment can have one, two, three or n dimensions.
– The environment metric, M. This metric defines the measure units and the scale within which the objects' coordinates are defined.
– The set of space management properties, P. This set is formed by properties such as the projection properties, from a dimension n to a dimension n-1, for


example, or the rotation properties, useful for visualization processes. In this kind of environment, the agents may have particular properties related to their situated aspect: their coordinates, and their movement coefficient – or speed – representing their capacity to move. A situated environment is then defined by the set (S, O, I, E, A, D, M, P). The non-situated environment, N, is the class that can be used to define an environment in a rule-based system. An environment is called non-situated if the concepts of space and of position in space do not exist in it. In this kind of environment, rules can be considered as the objects and meta-rules as the agents. This class doesn't have any specific parameter, as it is defined to be distinguished from the situated environment class. The discrete environment, D, follows the principle enounced by Bergson [6]: « Intelligence only represents clearly to itself the discontinuous. » An environment is called discrete if it is cut up into a finite number of elementary parts, the cells, each one having a fixed number of neighbors. The parameters of this type of environment are:
– The cells structure, L. Cells may have different forms: square, hexagonal or triangular in a two-dimensional environment.
– The integer environment metric, Mi. The environment being cut up into elementary parts, the computation of coordinates and moves is done in the set of integers, or in one of its subsets.
The continuous environment class, C, is the complementary class to class D. An environment is called continuous if, for every two points of it, there is at least one point between them. Even if, from a computing point of view, strict continuity is impossible nowadays, a continuous environment can be built using a real metric, Mr, based on a real vector space. This metric is the only parameter of this class.

1.2. Genotypic Approach

All the previous parameters can be coded to form a string: a genome, or chromosome. The elements of this genome, the genes, then define the different environment properties. Once this coding has been done, it is possible to apply a particular version of Genetic Algorithms (GA) used in the artificial life area. This kind of algorithm is close to the BEA, Bacterial Evolution Algorithm, presented by Chisato Numaoka in [7]. As opposed to most GAs, which handle a large number of strings, this one considers entities, the bacteroids, owning only one chromosome. The bacteroid's behavior is then completely determined by this unique chromosome. The evolution phenomenon occurs when the bacteroid, weakened¹, inserts into its organism the chromosomes of its neighbors: it chooses a chromosome in this set and, eventually, makes a mutation on this new chromosome. In the same way, environments following this approach have only one active string. This chromosome, which alone determines the environment genotype, defines

¹ The bacteroid has an energy level.


the environment itself, its phenotypic representation, and its behavior. The chromosome fitness is then defined according to parameters like:
– The number of objects and agents in the environment.
– The agents' efficiency, which can itself be measured as a fitness.
– The utility of the environment laws.
However, as a difference with the bacteroid case, the environment keeps its old chromosomes in a kind of memory base. This memory is then used in the evolution process to eventually return to an older, but better adapted, state. After this description of an isolated environment, a description of the possible relationships between different environments is presented in the next part.
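A minimal sketch of this genotypic approach is given below; the encoding of the section 1.1 parameters as named genes, the exact fitness combination and the mutation scheme are assumptions, while the single active chromosome, the fitness ingredients and the memory base follow the text.

import copy

class EnvironmentGenome:
    def __init__(self, genes):
        self.genes = genes                 # e.g. {"size": 10, "renew_rate": 3}
        self.memory = []                   # old chromosomes are kept here

    def fitness(self, env):
        # combines population counts, agent efficiency and law utility
        return (len(env.objects) + len(env.agents)
                + sum(a.fitness for a in env.agents)   # hypothetical attrs
                + env.law_utility())

    def mutate(self, gene, delta):
        self.memory.append(copy.deepcopy(self.genes))  # remember old state
        self.genes[gene] += delta

    def rollback(self):
        # return to an older, better adapted state, if one was memorized
        if self.memory:
            self.genes = self.memory.pop()

The memory base is what distinguishes this scheme from the bacteroid: an environment whose last mutations lowered its fitness can roll them back instead of dying out.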

2. Meta-Environment Notion

When the different elements of a system do not act and interact on the same time scales, it can be useful to decompose it into a set of sub-systems. Effectively, it is not always coherent to handle in the same way events produced on a micro-second scale and others produced on a yearly scale: the temporal windows of the evolution are completely different and cannot be managed with the same rules. At the environment level, this decomposition leads to the constitution of a multi-environments system. However, to take into account the different interactions that can occur between the environments, it is necessary to establish the concept of meta-environment. It does not act as a supervisor but as a service manager, offering its services to its sub-environments and relieving them of some tasks. Moreover, this approach allows a modular development, creating a hierarchy of environments and sub-environments managed by a unique meta-environment.

2.1. Characteristics

A meta-environment is a kind of specialized abstract environment, having the following properties:
– An infinite size.
– A set of objects, exclusively environments.
– A set of laws for the interaction between its sub-environments.
– A set of internal evolution rules.

2.2. Global and Local Meta-Environments

It is important to point out that a parallel can be established between the environment / agent relation and the meta-environment / environment relation. In the second case, the environment acts as an agent in relation to its meta-environment. Following this idea, an environment can contain other environments as components. In this case, the environment containing some sub-environments can


be considered also as a meta-environment. So, two types of meta-environment exist in this model:
– Global meta-environment: the root of the multi-environments system hierarchy. Its properties are those defined in the previous section.
– Local meta-environment: a node in the multi-environments system hierarchy. It is a component of another meta-environment, a global or a local one. It contains, as a part of its objects set, a set of sub-environments, which can be considered as agents in relation to it.

2.3. Inter-Environments Interaction Types

A large number of interactions between environments can appear. This section describes some possible ones:
– Environment dependence. An environment is completely dependent on another and can only perform its own actions according to its master environment.
– Transfer. An environment can transfer agents, objects or laws to another environment.
– Communication. An environment can establish a communication with another environment in order to ask for, or to offer, either some information or a transfer.
– Union. Two environments, both having almost identical elements, can join in order to form a unique environment, composed of the set of all the elements of both original environments².
The environment evolution can be caused, as in the next example, by its components, but also as a consequence of the evolution process of the other environments in its system. In this case, the environment evolution will be caused by a state change, controlled by its transition functions. Its corresponding meta-environment will detect this change and inform the other environments. For example, if some environments share some resources and one of them disappears, the meta-environment will transmit this information to the others to allow them to adapt to the new conditions. The next part describes an example of an environment evolution caused by its agents.
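The following is a minimal sketch of a meta-environment acting as a service manager for its sub-environments; the method names and signatures are assumptions, while the "objects are exclusively environments" property, the state-change notification and the Transfer interaction law follow sections 2.1–2.3.

class MetaEnvironment:
    def __init__(self):
        self.environments = []        # objects are exclusively environments

    def add(self, env):
        self.environments.append(env)
        env.meta = self               # each sub-environment knows its manager

    def notify_state_change(self, changed):
        # a transition detected in one environment is broadcast to the
        # others so they can adapt (e.g. a shared resource disappeared)
        for env in self.environments:
            if env is not changed:
                env.on_peer_change(changed)

    def transfer(self, source, target, item):
        # interaction law "Transfer": move an agent, object or law
        source.remove(item)
        target.insert(item)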

3. Environment Evolution

In the evolution process, an environment can have different states. An environment has an initial state just after its creation, a set of transitory states during its life period and a final state, the one before its death. All these states are linked together by transition functions. This environment evolution concept is important for the meta-environment, which needs to know the situation of its sub-environments in order to make the necessary changes for all of them, but also to create a new one or to destroy an existing one.

² An environment could also have the possibility to be cut, to give two distinct environments.


3.1. Evolution Types

In the frame of situated environments, these can be confronted with different kinds of evolution. This section presents the laws that allow them to evolve in different ways:
– Structural evolution: the environment deeply modifies its structure. For example, let E be a discrete situated environment. If E is a two-dimensional environment defined by a 15*15 grid then, after a structural evolution, E could increase the grid size to 16*16, or decrease it to 14*14. This type of evolution can also affect the dimension parameter, so a two-dimensional environment can evolve into a three-dimensional one.
– Behavioral evolution: the environment's internal laws are modified. Considering the previous example environment E, if it has some resources as a part of its objects set, an internal law can allow it to control the quantity of renewed resources at each step. A behavioral mutation will change this kind of criterion, increasing or decreasing it, in order to encourage or discourage the expansion of the agents feeding on this resource.
– Component evolution: the environment acquires or destroys types of objects that may compose it. In this way, the environment E, evolving by this principle, could define a new kind of resource, better suited to one type of agents.
– Geographic evolution: the environment creates or destroys instances of its composing objects. After a geographic evolution, E would be able to create, on some cells³, new objects, new resources or new obstacles. This evolution type, where instances of objects are created or destroyed, must be distinguished from the previous one, where types of objects are created or destroyed.

3.2. Evolution Example

This evolution example uses a two-layer neural network to determine the mutation of the environment genome. This kind of evolution fits the interaction between the environment and its agents. Special environment interaction effectors, represented in the network by input neurons, are assigned to each agent. To each gene of the environment genome correspond threshold functions, causing a gene mutation upon their activation; these threshold functions are defined by the output neurons of the network. As shown in Fig. 2, each output neuron is connected to each input neuron. When some agents activate the same environment interaction effector, the signals are transmitted to the corresponding output neuron; if the function threshold is reached, a mutation of the corresponding gene occurs. This mutation may be interpreted as a demand for an environment modification coming from its agents: if a sufficient number of agents make the demand then, as an answer, the environment will adapt and transform itself, following the agents' exigency.

³ If the environment is discrete, or at a position in space if it is continuous.


It is necessary to make a clear distinction between the output neurons, which represent the activation functions, and the genes of the environment genome. In the following example, the environment E is a discrete one, defined by a 10*10 cells grid. A single gene codes the environment size; however, two output neurons will be associated with this gene: the first one represents an increase of the environment size if it is activated, while the second one represents a decrease of this value. Then, if a sufficiently important group of agents demands of the environment, by means of the neural network, that it increase its size, the environment can undergo a mutation of its corresponding size gene: it will then mutate to a size of 11*11 cells.

Fig. 2. Two-layer neural network: agents activate the input neurons (environment interaction effectors); the output neurons carry the set of functions associated with the environment genome.
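A minimal sketch of this mechanism follows; the threshold values, the "+"/"−" naming of the output neurons and the reset policy are assumptions, while the agent-driven activation and the per-gene threshold functions follow the description above.

class MutationNetwork:
    def __init__(self, genome, thresholds):
        self.genome = genome                    # e.g. {"size": 10}
        self.thresholds = thresholds            # output neuron -> firing level
        self.activation = {k: 0 for k in thresholds}

    def signal(self, output_neuron):
        # an agent activates the effector wired to this output neuron
        self.activation[output_neuron] += 1

    def step(self):
        for neuron, level in self.activation.items():
            if level >= self.thresholds[neuron]:   # enough agents demand it
                gene, direction = neuron[:-1], neuron[-1]
                self.genome[gene] += 1 if direction == "+" else -1
            self.activation[neuron] = 0            # reset each cycle

# e.g. net = MutationNetwork({"size": 10}, {"size+": 5, "size-": 5});
# five agents signalling "size+" in one cycle grow the 10*10 grid to 11*11.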

4. Multi-Environments System Example

In [8], Nils Ferrand quotes some possible characteristics of a multi-agent system applied to spatial planning:
« – Multiple scales: in space (from close neighborhood, to the nation or even continent), in time (from an instant for the perceived feelings of negotiators, to the centuries for the evolution of large scale ecosystems), and in organizations (from one-to-one interaction, to the transnational organizations);
– Multiple actors: except in some very local and specific situations (like a landowner planning some limited works on its own territory, with no visual or ecological impact), spatial planning implies many actors, with different interests, social and organizational position, spatial attachment, and personal qualities;
– Multiple objectives: environmental, political, personal, economical, etc.;


– Multiple modes: integrative (toward consensus through cooperation) or distributive (toward compromise through negotiation);
– Multiple criteria: ecology, economy, landscape, agriculture, laws and regulations, culture, society, ... »
The number of these characteristics, non-exhaustive, denotes the difficulty of conceiving a complex multi-agent system using a unique environment and controlling a unique time unit. Hence the necessity of building a multi-environments system, each environment having its own time and evolution criteria. Consider a more precise example, whose aim is to simulate a ground area with its different components: the evolution time scales are very variable. If the simulation takes into account the underground level, the ground level and the atmosphere, it is clear that these elements and their internal elements do not evolve at the same speed. In this way, the lifetime of an ant is shorter than the anthill's lifetime, which is shorter than a tree's lifetime. However, the relationships between them are non-negligible and must lead to a co-evolution of each element. These distinctions induce the creation of many environments having interaction properties that allow this co-evolution. Fig. 3 shows an environment taxonomy allowing to model the anthill evolution, according to some possible sub-environments.

Planet Earth : Rotation cycles, Seasons ...

Earth's crust : Tectonic, Volcanos ...

Ground : Vegetables, Animals ...

Underground : Minerals, Ground water ...

Anthill : Queen, Worker ants ...

Forest : Trees, Flowers ...

Atmosphere : Weather, Aerian animals ...

Fig. 3. : Example of multi – environments system.

A Model for the Evolution of Environments

227

Focusing only on the development of two sub – environments, one corresponding to the ground definition and its properties [9] and the other, modeling an anthill [10], the following characteristics are considered :

 Ground type (prairies, agricultural lands, forests, ...), water exchange (rain, evaporation, humidity level), carbon and nitrogen flow, temperature, plants decomposition (divided itself in some parts, following a chemical and physical protection index of the organic mater), plants production, ground textures (sand, clay, granite), ...  Ants types (queens, workers, soldiers), cocoons and larvae with different states, ants cadavers, multiple ants nutrition forms, ... Then, some interactions laws between these environments are indispensable to cause a co – evolution of the different components. This laws concerns :  The substances transfers, an environment creates food for another environment agents, and these agents produce some rejects that allows the evolution of the first one.  The geographic aspect, the anthill being an environment geographically placed at the ground level, as shown in the Fig. 3. According to this decomposition principles it is possible to integrate, in a modular and progressive way, the other environment types to a developed system.

Conclusions As it is shown in the example described in section 4, this environment model allows, from a practical point of view, to develop progressively a complex multi – agents system, defining and conceiving the sub – environments one by one. This model proposes different kinds of evolution for an environment. In this way, the agents evolution and the environment evolution leads to a co – evolving system. In order to simulate artificial ecologies, the environment model described in this paper, can be combined to a food – web model, like this presented by Lindgren and Nordahl in [11]. These authors show different existing food links between many types of animals and vegetables, but they also put forward some special links : « Not only do plants and trees provide food for the herbivore, they also provide shelter from sun and wind, escape routes and places to hide from predators, and branches and twigs to make nests and squats from ». These types of interactions prove the difficulty to conceive a realistic model of a natural ecosystem, but the concepts brought in this paper should help to develop modular platforms. Practically, the componential model, developed by Marc Lhuillier in [12], combined to this environment model would offer the possibility to construct multi – agents generic platforms. Those will tie together the notions of flexibility and dynamism of the componential model to the environmental evolution idea, described in this paper. In the future, a first platform of this kind will be developed, integrating the concepts of environmental evolution and meta – environment. Agents of this platform will be built with the macro – mutation concept, described in [13], allowing

228

Claude Lattaud and Cristina Cuenca

an agent to undergo a deep modification of its genotypic structure, according to environmental conditions.

References 1. 2. 3. 4. 5. 6. 7. 8.

9.

10. 11.

12. 13.

[Holland, 62] Holland John, Outline for a Logical Theory of Adaptive Systems, in Journal of the Association for Computing Machinery, volume 9 : 3, July 1962, pages 297-314. [Ferber, 95] Ferber Jacques, Les Systèmes Multi-Agents : vers une Intelligence Collective, Inter-Editions, 1995. [Wilson, 85] Wilson Stewart, Knowledge Growth in an Artificial Animal, in Proceedings st of the 1 International Conference on Genetic Algorithms, 1985, pages 16-23. [Goldberg, 89] Goldberg David, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, 1989. nde [Holland, 92] Holland John, Adaptation in Natural and Artificial Systems, MIT Press, 2 Ed., 1992. [Bergson, 1907] Bergson Henri, L’évolution créatrice, PUF, 1907. [Numaoka, 96] Numaoka Chisato, Bacterial Evolution Algorithm for Rapid Adaptation, in Lecture Notes in Artificial Intelligence 1038, Springer-Verlag, pages 139-148. [Ferrand, 96] Ferrand Nils, Modelling and Supporting Multi-Actor Spatial Planning Using rd Multi-Agents Systems, in Proceedings of the 3 International Conference on Integrating GIS and Environment Modelling, January 21-25, Santa Fe, New Mexico, USA, 1996. [McKeown & al., 96] McKeown R. & al., Ecosystem Modelling of Spatially Explicit Land rd surface Changes for Climate and Global Change Analysis, in Proceedings of the 3 International Conference on Integrating GIS and Environment Modelling, January 21-25, Santa Fe, New Mexico, USA, 1996. [Drogoul, 93] Drogoul Alexis, De la simulation multi-agents à la résolution collective de problèmes, Thèse de Doctorat, Laforia, Université Paris VI, 1993. [Lindgren & Nordahl, 95] Lindgren Kristian & Nordahl Mats G., Cooperation and Community Structure in Artificial Ecosystems, in Artificial Life : An Overview, C. Langton Eds., MIT Press / Bradford Books, 1995, pages 15-37. [Lhuillier, 98] Lhuillier Marc, Une approche à base de composants logiciels pour la conception d’agents, Thèse de Doctorat, LIP6, Université Paris VI, 1998. [Lattaud, 98] Lattaud Claude, A Macro – Mutation Operator in Genetic Algorithms, in th Proceedings of the 13 European Conference on Artificial Intelligence, August 23-28, Brighton, England, 1998, pages 323-324.

AR´ eVi: A Virtual Reality Multiagent Platform P R ig ni r1 1

rrou t1 S

orv n 1

iss u1

nd

uv l2

LI2, ENIB, Technopole Brest Iroise, CP 15, 29608 BREST cedex, France [reignier, harrouet, tisseau, morvan]@enib.fr 2 ENSAI, rue Blaise Pascal, Campus de Ker Lann, 35170 BRUZ, France [email protected]

Abstract. AR´eVi (in French, Atelier de R´ ealit´e Virtuelle) is a distributed virtual reality toolkit. Its kernel (a group of C++ classes) makes it possible to create cooperative and distributed virtual reality applications by minimizing the programming effort. AR´eVi is built around a dynamic multiagent language: oRis. At any time, this language allows to stop the ongoing session, to add new entities (known or not when starting the session), to modify an entity or an entire entity family behavior. More generally, oRis gives the user the ability to interact with the agents by directly using their language, thus offering a way of immersion through the language. Keywords: Virtual Reality, Software platform, 3D simulation, Multiagent Systems, Dynamic Languages

1

Introduction

ur study is rtofr s r h work sin virtu lr lity nddistri ut dsim ul tion toolk its( R oolk it 12 R K 9 6 N PSN / S 5 15 nd 3 nim tion ndint r tion ( 23 - 11 irtu l Studio 1 n th l stf w y rs ig fforth s n m d y uild rs ndd sig n rs to im rov th r nd ring lg orith m s ndto l or t i ntg r h i ng in s rious om m r i l rodu ts r now v il l orld oolk it(S ns8 1991 d S ( ivision or lovis( L 1995 r now lso witn ssing th m rg n of d sri tion st nd rdofvirtu lr lity li tions R L 20 S / 14 772 14 v n th oug h th h vior lor distri ut d s ts r notr lly t k n into ounty t n th om m uni tion rt th n xtv rsion of th P roto ol ( P v6 R 1883 ( m r 1995 introdu sth notion of flow ndin lud sth m ulti st h is s tis ssnti lin distri ut dvirtu l r lity h r for ro os lfor n w toolk it r h it tur for virtu lr lity h sto wid ly o n d ndm d sm u h ind nd nt s ossi l from th r nd ring li r ri s itm ust l to us th m osto tim l solutions 19 20 tsh ould lso l to d sri l rg r nd l rg r nvironm nts in luding g ntswith m or ndm or om l x h vior R i is l tform for th d v lo m ntof istri ut d irtu l R lity li tions tiswritt n in ++ nditsk rn lisind nd ntfrom 3 li r ri s J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 229–240, 1998. c Springer-Verlag Berlin Heidelberg 1998 

230

P. Reignier et al.

tm y usdfor th r idd v lo m nt(sri t rog r m m ing of li tions using “ r ltim ” 3 visu liztion with or with outth im m rsion ofh um n o r tors h s li tionsm y distri ut dfor oo r tiv work h y n int g r t univ rssinh it d y g ntswh os h vior ism or or l ss om l x n th is rti l w willm inly d t ilth h vior s tsofour l tform ft r indi ting th rti ul r onstr intsofvirtu lr lity w will r sntAR´eVi in d t il itssoftw r r h it tur sw ll sth oRis m ulti g ntl ng u g th h rtof th sy st m h n w will r snt n x m l d sri ing th sy st m ’s ossi iliti s in lly w will on lud on th is l tform volution ros ts

2

Virtual Reality System Needs

h o j tiv of distri ut dvirtu l r lity li tion isto llow g rou of dist nt o l to work tog th r on om m on roj t(for x m l int r tiv rototy ing n ord r to l tth usrs on ntr t on th ro l m th y r try ing to solv th sy st m th y r using h sto sfl xi l s ossi l srsdon’t h v to d tto th tool th tool m ust l to t v ry th ing th y n d to do 2.1

Modularity

h int r tiv rototy ing notion involv s t l st th o ortunity to dd r m ov or m odify l m ntsin th sy st m h us if w w ntth sy st m to t su st nti l h ng sin itsstru tur itm usth v rt in m ountof fl xi ility h m ulti g nt ro h s m sto th m ost ro ri t to rry outth is g o l h sy st m d sig n r’s tt ntion isfo usdon th individu lfun tioning of l to dis rn nd ton h ofits om on nts h s om on ntsh v to th ir nvironm nt(sy st m ’soth r om on nts f w h v w y to d sri om on ntint rn l fun tioning sw ll sitsint r tionswith its nvironm nt th is om on ntwill l to t k rtin th g n r lwork ing of sy st m m d of l m ntsr lizdon th s m w y h r won’t ny g lo l ontroll r nd th sy st m will v ry m odul r h g n r l fun tioning will th r sultof om in d tionsfrom h of th om on nts ndwill h ng ording to th usr’sint rv ntion on h of th m 2.2

Immersion Through the Language

h m in diff r n tw n sim ul tion nd virtu l r lity isth t v ry th ing nnot l nn dwh n l un h ing sssion n f t rson initi ting virtu l world h s r viously l nn d to fi ll it u with v rious t g ori sof ntiti s sw ll sth ossi l int r tionswith th m h n onn ting usr h s to dis rn it ut do snot h v to lim it d to th is h sto g iv n th o ortunity to ring into th sy st m h isown ntiti s v n ifth y w r unk nown don with out int rru ting th to th d sig n r h is ntity su ly h s to sy st m nd r onfi g uring v ry th ing n th s m w y to r liz h iswork

AR´eVi: A Virtual Reality Multiagent Platform

231

usr m y n d to int r tin th univ rs diff r ntly from th d sig n r’s l n m usth v sm u h fr dom of tion ndint r tion s ossi l h tv r virtu l g ntis l to do h um n o r tor h sto l to do ittoo in ord r to on th s m l v l ndto h v th s m fr dom h o r tor h s to s k th g nt’sl ng u g in th is s w n t lk of im m rsion th roug h th m nsof th l ng u g tfollowsth tth isl ng u g h sto dy n m i it h sto tr qu sts ndm odifi tionswh n x uting 2.3

Generics

h li tionsof virtu l r lity r num rous ndinvolv tivity fi ldsv ry f r from h oth r (for x m l t h ni l uilding rg onom i s ins t o ul tion study rowdm ov m nts n vig tion in r onstru t d r h olog i l sit om ut r- id d surg ry t tisim ort ntfor th on rn d sy st m notto int g r t r str intson th ty of univ rs it n r r snt o sum u int g r ting g n ri nd dy n m i m ulti g nt l ng u g in virtu l r lity l tform llowsto o t in tool wh i h g iv sth usrsfr dom on rning th k indof ntiti sth y wish to im l m nt ndon th int r tions th y wish to r liz

3

AR´ eVi

n th iss tion w r sntth AR´eVi (in r n h At li r d R´ e lit Virtu ll l tform h is l tform isth r sultof our r s r h work 3.1

General Presentation

AR´eVi li sin th entity scene nd viewer on ts h ntiti s r lo t d g ntswith 3 r r snt tion h isr r snt tion in lud s nim tions(su ssion of sv r l r r snt tions sw ll sl v lsof d t ils(r du tion of th f tnum r ording to th dist n from th m r h ntiti sm y d riv d (in th sns of o j tori nt d rog r m m ing into m r s lig h tsor wh t v r th usr wish sto o r du th dis l y osts th ntiti s r roug h ttog th r in s n s wh n in rti ul r s n th only ntiti sto visu lizd r th os wh i h r rt of it h ntiti sin s n r d t t d y m r lo t din it( ith r rovid d y th usr or d f ult r t d h im g ssh ot y th is m r r sntto vi w r (2 window to dis l y d ll th s notions r illustr t don ig 1 3.2

Software Architecture

h AR´eVi k rn l is g rou of ++ l sss ig ur 2r r snts sso i t d m od l h wh ol fun tion liti sof th sy st m x

rtof th tfor th

232

P. Reignier et al.

Fig. 1. AR´eVi rin i l

dis l y r r lizd y l sss( str tor not n rti ul r w h v l sss for th ntity notion nditsd riv tiv s(ArEntity ArLight ArCamera s n (ArScene ndvi w r (ArViewer h om m on d riv tion from th Agent l ss is x l in din S t33 n ArEntity h s g om try (ArGeometry r s onsi l for m n g ing its osition nd in m ti h link with rti ul r r nd ring li r ry isr lizd y d riv tion of th orr s onding l sss or x m l n ArCamera isd riv d into ArXXCamera wh r XX r r sntson of th th r li r ri s lr dy su ort d n nv ntor (S ision ( log nd n L

Agent

ArEntity

ArViewer

ArCamera

ArLight

ArXXLight

ArScene

ArXXViewer

ArUnivers

ArXXScene

ArXXCamera

Fig. 2.

xtr tof th

AR´eVi k rn l

di g r m

ArEvent

AR´eVi: A Virtual Reality Multiagent Platform

3.3

233

oRis Language

snot din S t2 th m ulti g ntform lism sw ll sth dy n m i ndg n ri h r t risti sof l ng u g r v ry int r sting ro rti sin virtu l r lity sy st m ont xt m ong th xisting int r r t dl ng u g s(for th dy n m i s ts w onsid r two t g ori s 1 ulti g ntl ng u g s 2 “ n r l”l ng u g s(m or

r isly o j tori nt dl ng u g s

Multiagent Languages. ostof th xisting m ulti g ntsy st m s r d v lo dfor v ry rti ul r li tion ow v r m or g n r lsy st m s xisttoo ri fly d sri two of th m St rlog o 18 is sim ul tor nd l ng u g llowing m ulti g ntuniv rs d sri tion h g ntfrom th isuniv rs is quiv l ntto Log o turtl wh os h vior is sily s ifi d y th usr th nk sto th l ng u g h s g nts m ov in 2 univ rs m t ri lizd y squ r swh i h nh v h vior sim il r to th turtl ’s( x tfor th ir ility to m ov or x m l th istoolg iv s th ossi ility to sim ul t ro dtr t rm it oloni s ndfood oll ting y nts ll g g r g tion r y / r d tor osy st m s h im ofsu h toolisto und rlin diff r ntw y ofth ink ing th d t ro ssing ro l m (strong ly r ll l ro h Sim ul tionswith St rlog o im l m nth undr dsor v n th ous ndsof g nts h nvironm nt 17 m k sit ossi l to d sri m ulti g nt sy st m s y im l m nting diff r ntl ng u g s d t dto h l v l of d sri tion or d ision m k ing th LL l ng u g sdon rul sg iv sth ility to d sri g nts wh r sl ng u g ssu h s ++ om m onLis nd v g iv th ility to d sri th low l v l rog r ssof h h vior l m nt ry ross om m uni tion tw n g ntsm y sy n h ronousor sy n h ronous ut n g nt nnottot lly ontrol noth r g nt(for x m l d stroy it nfortun t ly k nown li tionsof th isl ng u g only im l m nt sm ll num r of g nts General Languages. n th r stof th r ov rd fi nition m nsdir tly h ng ing th od of n xisting m th od n r l l ng u g s r o n d noug h to llow th r r snt tion of ny univ rs f th l ng u g in lud so j t on ts th n itisr l tiv ly sim l to r h th g nt l v l y r - rog r m m ing “ sh dul r” h is sh dul r is in h rg of w k ing u in y li l w y h o j t wh i h th n om s n g nt 7 v h v som int r sting dy n m i ro rSom l ng u g s( rti ul rly ti s t ny tim itis ossi l with outsto ing th int r r t r to r iv som od th roug h th n twork nd to x ut it h is od m y ont in tions (m th od tiv tions sw ll sth d fi nition of n w l sssor th ov rd fi nition ofm th odsof lr dy xisting l sss h isdy n m i s tof th l ng u g

234

P. Reignier et al.

would rtly llow usto r liz wh tw w ntto do m odify th ntity ’s h vior or ddn w ntiti swh n x uting utth g r nul rity of th s tionsissitu t don th l ssl v l or m or fr dom w wouldlik to h v g r nul rity on th inst n l v l w ntto l to r -d fi n m th od not only on th l ssl v l th usfor ll th inst n s tth s m tim ut lso for sing l inst n h issing l inst n will in w y slig h tly om d t h dfrom its orig in l l ss or x m l l t’s onsid r virtu l univ rs m d u of ro d with rsr ing on it h rsm y inst n sfrom th s m l ss rovid d y th r tor of th isuniv rs noth r rson onn ting to th sssion m y o srv th volution of th r nd fi nd it notr lly s tisf tory h n h h sth o ortunity with outsto ing th ro ss to h oos on r to sug g st only for th is r n w lg orith m of driving ndto o srv th w y th isv h i l ro h is onsid r dto tt r is ting om r dto th oth rs f th n w th lg orith m of iloting m y th n h ng d from st tusof m th odsfor on inst n to st tusof m th odsof l ssso sto l t llth inst n s n fi t from it noth r x m l for th us of inst n m th odov rd fi nition isg iv n S t4 3 Sin th r viously quot dl ng u g sdo nottot lly fulfi llour r str ints w h v d v lo dour own l ng u g th oRis l ng u g 13 oRis Main Characteristics. oRis is m ulti g ntl ng u g tiso j torint d( l ssswith ttri ut s ndm th ods m ulti l inh rit n h sy nt xis los to ++ tis lso n g ntl ng u g v ry o j twith void main(void) m th od om s n g nt h ism th od is y li lly x ut d y th sy st m sh dul r ndth us ont insth ntity h vior oRis is dy n m i l ng u g itis l wh n x uting to t n w od dfi n n w l ss h ng th m th od’s od on th l ssl v l sw ll son th inst n l v l tis lso ossi l to ddon th inst n l v l th d fi nition of n w m th odwh i h do snot xistin th orig in l l ss oRis llows d ou ling with ++ th rog r m m r n s ify onn tion tw n n oRis l ss nd ++ l ss h ++ o j t wh i h r r snts th oRis g ntin th int r r t r isth o j t ro osd y th usr (d riving from th ++ g nt l ss sis l ssfor th oRis g nts h llof n tiv oRis m th odtrig g rsth sso i t d ++ m th od oRis iso n dto oth r l ng u g s r lr dy l to ll Prolog r dit s 22 or uzzy rul sinsid n oRis rog r m to r t r tiv nd og nitiv g nts oRis w ssu ssfully usdin li tions sv ri d sim g ro ssing 2 im m unolog y 3 d t ro ssing n twork s 4 or ins t o ul tions 16 Integration to AR´ eVi. h nk sto th oRis ↔ ++ ou ling AR´eVi off rs th usr r d fi n doRis l sss(n tiv m th ods wh i h orr s ondto th rviously r snt d ++ l sss(S t 32 ArEntity ArScene ArViewer t ig ur 3 r r sntsth di g r m of th m in r d fi n d l sss h isdi g r m isth x tr fl tion of th ++ l sss( ig 2

AR´eVi: A Virtual Reality Multiagent Platform

235

Agent

ArEntity

ArLight

Fig. 3.

ArViewer

ArScene

ArUnivers

ArEvent

ArCamera

di g r m of th

oRis r d fi n d l sssfor

R

i

N ot llth l m ntsm ni ul t d y AR´eVi r g nts notonly th “ h y si l” ArEntity o j tsfrom th univ rs ut lso th stru tur sn d dfor th sim ul tion stting h m in dv nt g of th is ro h isth titis sy y d riv tion to g iv th m h vior or x m l orig in lly th ArScene g ntis only listof g ntswh i h r visu lizdwh n w t h ing th s n ith oRis it is ossi l to r t ProxyScene d riv dfrom n ArScene roviding h vior th roug h main() m th od or x m l th is h vior m y th utom ti int g r tion to th s n of ny virtu lo j t( ny g ntd riv dfrom ArEntity nt ring th on rn dzon n th s m w y n AR´eVi vi w r (ArViewer h s ssto th fr m r t n v ry sily r t vi w r wh os h vior is to r du th g n r t dim g qu lity (wir t if th r t om stoo low ow r of th g r h i ng in n th n d tth r nd ring l v l to th

4

Examples

n th is s tion w illustr t th R i m ulti g nt nd dy n m i iti s irstly w r sntth m od ling of sim l m h nism ontinu with th r snt tion of m ulti g ntv rsion of th P tri n twork s in lly w sh ow v ry sim l w y to ou l oth r viousuniv rssth nk sto th ov rd fi nition of m th odsof inst n 4.1

Trains of Gear Wheels

wish to m od l tr in of g r wh ls h ro l m h r isnotto r liz h y si l m od ling ( ont t tw n th og s ut fun tion l m od ling two g rsin ont trot t in th o osit dir tion th ng ul r s dr tio ing id nti l to th r diusr tio h ro l m m y t k l don g n r l ointof vi w th tr in h sfour g rs g r 1 rot t s lo k wis g r 2 nti- lo k wis t tm y t k l don lo lw y g r E is n o j t( ssiv tk nowsits su ssor ndits r d ssor in th m h nism t n ll d( llofm th ods y its r d ssor P P indi t sto E h ow m u h ith sjustrot t d sw ll sits

236

P. Reignier et al.

h r t risti s h us E issu osdto d du itsown rot tion ndtr nsm ititto itssu ssor th m ov m nt ro g t sst y st in lly w r t n ng in g nt(itisth only g ntof th is x m l wh os h vior (main m th od isto rot t tonly r m insto “ onn t” th ng in to th fi rstg r to st rtth lin n th d stru tor od from th r l ss w sk th k ill dg r to disonn t from its r d ssor h us wh n x uting if g r isk ill d y th usr th r will no m ist k indu d(th r viousg r no long r tr nsm itsm ov m nts ndth following g rswill utom ti lly sto sin no ody sk for th m ff tiv ly g t sy st m wh os h vior is onsist ntwith itsr l-lif quiv l nt h r vious x m l im l m nts ssiv g rs wh i h r o j ts utnot g nts n ord r to g t m or r listi univ rs w n rovid th m with h vior – S rutiniz th univ rs h n g r E isd t t d los y (th ddition of itsr dius nd th – r dius= th dist n tw n th ntiti s • onn tion to E ⇒

utom ti lly st rtrot ting if E w srot ting

– f w r onn t dto t nt • isonn tion ⇒

E

g

r ndif th dist n

from it

om stoo im or-

utom ti lly sto rot ting

h us now y m oving th g rs with n int rf ri h r l(m ous or ls w r l to m odify th m h nism wh n th ison ism oving (s ig 4

Fig. 4.

rs n

dy n m i lly

utin or t k n off th

m

h nism

AR´eVi: A Virtual Reality Multiagent Platform

4.2

237

Petri Networks

h s onduniv rs x m l w r snt on rnsth P tri n twork s su lly th isk ind of sy st m do snoth v g r h i r r snt tion ow v r w g v it on in ord r to visu liz its h vior (s ig 5 N ot S v r l ro h s

Fig. 5.

r h i r r snt tion of P tri n twork

xistfor th rog r m m ing of P tri n twork s h n using oRis w h v sl t d n g nt’s ro h l s onn tions ndm rk s r o j ts tr nsitions r g nts h y d id th m slv swh n v r th y g t tiv t d it olvo t97 21 4.3

Gears and Petri Network Coupling

N ow th n xtst onsistsin r g rou ing oth r vious x m l sin ord r for th P tri n twork to t s ontroll r for th g rswh i h th uswill t s n o r tiv rt oth univ rssw r uilts r t ly h r for th y don’tk now h oth r h ou ling isr lizdin th r st s 1 Lo ding of th g rs 2 h il th g rs r fun tioning lo ding ofth P tri n twork s oth univ rss o xistin m m ory ndh v th ir own dis l y windows 3 rig g ring ofth om m ndon on ofth n twork l s on m rk ntr n w st rton th ng in ou l dto th g rs n n xitm rk w swit h off th is ng in Point3 isr lizd y dy n m i lly nt ring (with out sto ing th sy st m od on ( ig 6 h is od ov rd fi n sth ddition ndsu r ssion m th odsof Pl 2 h m th ods r notov rd fi n d on th l ssl v l (th usfor ll th inst n s ut only on rti ul r inst n l v lwh i h th n do ts h vior slig h tly diff r nt from th oth rs

th

238

P. Reignier et al. Overdefinition of a method "add" of the instance 2 from the "Place" class

void Place.2::add(agent mark) { if (!_marks) { Engine.1->wakeUp() ;

We start on the engine when the place contains at least one mark

} Place::add(mark) ; }

Call of the method "add" ftom the "Place" class (common to all instances)

int Place.2::del(void) { agent mark = Place::del() ; if (!_marks) { Engine.1->sleep() ;

We switch off the engine when place 2 is empty

return (mark) ; }

Fig. 6.

sso i tion of

N ot during th ll m th od Place.2::add() h vior ro osd

5

h vior to

l

of th

P tri n twork

ov rd fi nition of n inst n m th od itis lw y s ossi l to of l ss (for xm l ll of Place::add() in h isv ry sim ly llowsto dd noth r tion to th d f ult y th l ss

Conclusion and Perspectives

h AR´eVi l tform uilt roundth oRis l ng u g is d v lo ing l tform for virtu lr lity li tions th s n su ssfully usdin sv r lindustri l roj ts( rg onom i sstudy of t h ni l uilding s s d s3 visu liztion 3 int r tiv r r snt tion of distri t rototy ing of m nuf turing sy stm 8 h xist n of dy n m i m ulti g ntl ng u g in th sy st m m k s it ossi l for usto – r t m odul r univ rss;th m odul rity isr lizd y th sn ofg n r l ontroll r ndth us y th us of l m nt ry “ ri k s” with th ir own g o ls th g nts – int r t fr ly in th s univ rss;th dy n m i h r t risti sof th oRis l ng u g llow us susrs to ton th low stl v lon th ntiti s( v ry th ing n g ntis l to do usr is l to do ittoo th n r lizing n im m rsion th roug h th l ng u g h r viousv rsion of th l tform didnotin lud th oRis l ng u g llow d th r tion of univ rs distri ut d th roug h th n twork 10

ut

AR´eVi: A Virtual Reality Multiagent Platform

239

now work on distri ut dv rsion of AR´eVi / oRis llowing th r rtition of g nts ndso of univ rssth roug h th n twork h low l v l om m uni tions r m d th nk sto v ndsu ortth diff r ntm od s( ni st ro d st ulti st n our roj t th h oi of th isl ng u g is n lizing wh n t lk ing out rform n s(-20 utitoff rssom v ry int r sting d v lo m nt f iliti s in t rm s of sim li ity (m ultith r d rog r m m ing su ort d y th is l ng u g s f ty ( r - xisting g rou of s f om m uni tion l sss ( vns int g r tion of m ultim di ro ssing ( v di Pl y rs h ig h l v l n twork xt nsion (srvl ts R g l ts int ro r ility ( R nd onn tion to th d t ss(

References 1 2

3

4

5 6 7 8

9

10

11 12

l gu r Virtual Studio : Un syst`eme d’animation en environnement virtuel Ph th sis P L L 1993 P ll t R odin nd iss u dg t tion using ulti g nt Sy st m n SCIA’97, IAPR Scandinavian Conference on Image Analysis g s621 629 L nr nt ( inl nd un 1997 P ll t iss u nd rrou t ulti g ntSy st m to od l n um n S ond ry m m un R s ons n IEEE Transactions on Systems, Man, and Cybernetics rl ndo ( S to r 1997 ourdon h nt r tion l S m nti sof K nowl dg n International Joint Conference on Artificial Intelligence, Poster Session g 15 N g oy ( n ug ust1997 rutzm n P A Virtual World For an Autonomous Underwater Vehicle Ph th sis N v l Postg r du t S h ool ont r y liforni 1994 rlsson nd g s ng Pl tform or ulti usr irtu l nvironn m nt Computer and Graphics g s663 669 1993 rroll tiv j ts d sy Software, Practice and Experience 28(1 1 21 nu ry 1998 h v illi r P iss u rrou t nd u rr R Prototy ing nuf turing Sy st m s ontri ution of irtu l R lity g nts ndP tri N ts n INCOM’98 N n y t tz( r n un 1998 od ll lili R K ov dL ndL wis oolk itfor v lo ing ulti- sr istri ut d irtu l nvironn m nts n IEEE Virtual Reality Annual International Symposium 1993 uv l S orv n P R ig ni r rrou t nd iss u R i un o t ` outils3 our d s li tions oo r tiv s Calculateurs Parall`eles 9(2 239 25 0 1997 o tti Virtuality Builder II, vers une architecture pour l’interaction avec des mondes synth´etiques Ph th sis P L L 1993 r n Sh w nd h it L inim l R lity oolk it v rsion 1 3 h ni lr ort rtm ntof om uting S i n niv rsity of l rt 1993

240

13

14 15

16

17

18 19

20 21

22 23

P. Reignier et al.

rrou t R ozi n P R ig ni r nd iss u oR is un l ng g our sim ul tionsm ulti- g nts n Journ´ees Francophones de l’Intelligence Artificielle Distribu´ee et des Syst`emes Multi-Agents L oll -sur-Lou ril 1997 rtm n nd rn k The VRML 2.0 Handbook : Building Moving Worlds on the Web ddison- sl y Pu lish ing om ny 1996 doni R A Network Software Architecture For Large Scale Virtual Environnements Ph th sis N v l Postg r du t S h ool ont r y liforni 1995 rtin li tion d’un sim ul t ur m ulti- g nts` l m od lis tion d o ul tionsd’inv rt r s n m ili u o g r st r’sth sis ol N tion l Su ri ur g ronom iqu d R nn s S t m r 1997 Pog g i nd dorni ulti-l ng u g nvironm ntto v lo ultig nts li tions n ECAI g s325 34 0 ud st( ung ry ug ust 1996 R sni k St rLog o n nvironm ntfor ntr lizd od ling nd ntr lizd h ink ing n CHI g s11 12 ril 96 R ooh lf nd lm n R SP rform r ig h P rform n ulti ror h i s n ACM SIGGRAPH g s ssing oolk itfor R l im 3 381 393 1994 Str ussPS nd r y R n j t- ri nt d 3 r h i s oolk it n ACM SIGGRAPH g s34 1 34 7 1992 iss u h v illi r P rrou t ndN d l s ro dur s ux g nts li tion n oRis h ni l R ort L 2 98R P 01 N L or toir d’ nform tiqu ndustri ll ru ry 1998 n i l m k r S -Prolog 29 ft //swi sy uv nl/ u /S -Prolog 1997 l znik R onn r lo k li g ng N u rd P Kn K ufm n ug u s nd v n m n j t- ri nt d r m work for th nt g r tion of nt r tiv nim tion h niqu s n ACM SIGGRAPH g s105 112 1991

Investigating the Complex with Virtual Soccer tsuk i N o

n

n

nk

Complex Games Lab Electrotechnical Laboratory Umezono 1-1-4, Tsukuba Ibaraki, Japan 305 {noda,ianf}@etl.go.jp

Abstract. We describe Soccer Server, a network-based simulator of soccer that provides a virtual world enabling researchers to investigate the complex system of soccer play. We identify why soccer is such a suitable domain for the creation of this kind of virtual world, and assess how well Soccer Server performs its task. Soccer Server was used in August 1997 to stage the simulation league of the first Robotic Soccer World Cup (RoboCup), held in Nagoya, Japan. This contest attracted 29 software soccer teams, designed by researchers from ten different countries. In 1998, an updated version of Soccer Server will be used to stage the second RoboCup in France, coinciding with the real World Cup of football.

1

Introduction

ul sofso o sso i tion foot ll w fo m ul t in 1 63in t K y t oot ll sso i tion n t g m ssin om on oft m ostwi ly pl y in t wo l . popul ity of t g m isillust t y t following p ss g : ont y to popul li f t wo l ’sg t stspo ting v ntin t m s of p olong wo l wi int stisnott ly m pi m s. t it ist o l up of foot ll w i lik t ly m pi s is l juston v y fou y s n ispl y outov p io of two w k so m o . nit t t s ost t is g lo l sp t l in 1994 n w t sp t l ittu n outto . m pions ip g m tw n zil n t ly w switn ss liv y ow of ov 100000 p opl in t os owl in s n lifo ni n y ow of tl st illion on t l vision t wo l ov . quot t is s iption in ou into u tion notsim ply us itillust t s popul ity of so o us t pu li tion of t isp p oin i swit n 9 t su sso to t m i n o l up. t w just s int st in t sou of t x pt not spo ts ook o n wsp p ut t op ning p g p of t op ning pt of ook on si n : Would-be Worlds y o n sti sti 97 . t

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 241–253, 1998. c Springer-Verlag Berlin Heidelberg 1998 

242

Itsuki Noda and Ian Frank

sti’spu pos in s i ing t foot ll o l up isto into u t no tion ofcomplex systems. oot llis n x ll nt x m pl in t is ont xt us itisnotpossi l to un st n t g m y n ly sing justin ivi u l pl y s. t itist int tionsin t g m ot tw n t pl y st m slv s n tw n t pl y s n t nvi onm nt t t t m in t out om . is ist ssn of om pl x sy st m s: t y sist n ly sis y om position. s sti’s li f t tt iv l of p pow ful wi sp om puting p ility ov t p st o so sp ovi uswit tool fo stu y ing om pl xsy st m s s om pl t ntiti s. sti im slf sug g stst tsu om put sim ul tions(vi tu l wo l s woul wo l s) pl y t ol of l o to i sfo om pl x sy st m s w i in ont stto m o onv ntion l si n l o to i s llow usto xplo info m tion inst ofm tt sti 97 . tis to p ovi t isk in of l o to y fo s on om pl xsy st m st tw v v lop sim ul tion oft g m ofso . issy st m o v n ls so m t to pl y tw n two t m sof pl y p og m s n s l y n us to st g 29 t m ont st tt i st o oti o ol up ( o o up) in N g oy p n. n t isp p w s i o v issu suit l oi s sim ul tion o i ntify ing in p ti ul w y so m in. lso t k to v lu t t su ssof o v m onst ting t tt p op ti sof fidelity simplicity clarity bias-free n tractability tiv m su sfo ssssing t isk in of sim ul tion.

2

Soccer as a Domain for Modelling

w m ny sons in t oi ofso fo ou sim ul to . niti lly of ou s t w st pp l t t so is fun n sily un stoo y l g num sofp opl . o im po t ntt n t is ow v w st sig ni nt ll ng o y ty ing to fo m lis n un st n t om in. 2.1

Soccer as a Complex System

o illust t t i ulty oft so om in l tus tu n to t isussion of om pl xsy st m st tw st t in t nto u tion n to sti’s ook Wouldbe Worlds. n isussing ow t o y oft om pl xm ig t fo m lis sti into u s num of t isti s s i s st k y om pon nts of om pl x ptiv sy st m s. i fly t s (1) m ium siz num of g nts t t(2) int llig nt n ptiv n (3) only v lo linfo m tion. ts oul not too to onvin t t tt g m of so is v y g oo tfo t s p op ti s. o s22 g nts. isf lls om fo t ly tw n sti’s x m pl sof sy st m swit l g num sof g nts(g l xi s w i v noug g ntsto t t st tisti lly ) n sm ll num sof g nts( onfli ts tw n two sup pow s). g ntsin so lso in t llig nt n ptiv wit t pl y son t m stiving to p fo m w ll tog t n to outm nœ uv t oppon nts. in lly t info m tion in so is l ly lim it st pl y s n only s in t i tion t y f ing

Investigating the Complex with Virtual Soccer

Fig. 1.

in ow im g of o

243

v

n v of on

l om m uni tion is m p ot y ist n n y t im po t n ling int ntionsf om t oppon nts. us un st n ing so is sig ni nt ll ng : it st t isti s of om pl x sy st m fo w i s sti m p siss t xistsno nt m t m ti l t o y sti 97 . ow v sw ll s ing g oo x m pl of om pl xsy st m not im po t ntf to lso influ n ou oi ofso : t f si ility of onstu ting sim ul tion. x m in t isqu stion low. 2.2

The Nature of Play

n oft fo m ostt sk sin t sig n of sim ul tion ist tion of model of t om in in qu stion;itist im pl m nt tion of t ul s n p in ipl s of t m o lt tp o u st sim ul tion. n t is sp t w foun t tso w s v y m n l om in sim ply y vi tu of ing g m . n x ll ntstu y of t n tu of pl y isg iv n y uizing uizing 0. lt oug p im ily int st in t sig ni n of pl y s ultu l p nom non som of t m in t isti sof pl y i nti y uizing

244

Itsuki Noda and Ian Frank

l v ntto ou t m . o inst n t stp op ti sof pl y i nti y uizing t titisvolunt y (o s uizing s y s l y to o isno long pl y ) n t tpl y isnot o in y o l lif . s p op ti s l y sug g stsom k in of m o lling p o ss. uizing t n not st t ll pl y islim it in lo lity n u tion not f tu t tm k spl y p om ising n i t fo onv ni ntsim ul tion. u t g n lp op ti si nti y uizing t t llpl y s in ing ul s n t t pl y n p t n t nsm itt . p sn of ul sm nst tp tof t wo k of m o lling is l y on . n in im pl m nting o v w took t sim pl pp o of sing ou m o l p im ily on just t ul sof t g m . m in of t o v m o l w st n fo m ul t to llow m xim um sop fo t inv stig tion of ow t ul sof so n tu lly follow in t stw y ;t tis ow to t t m t t n pl y w ll. uizing ’s n l p op ty of p t ility n t nsm itt ility t n in i t s ow ou m o l n us s n i l t st fo llowing s sto inv stig t n isusst p op ti sof t om in. 2.3

The Type of Model Represented by Soccer Server

sw sug g st ov o v m o lst ul soft g m ofso in w y t t llowst sy st m to us s t st fo s on t n tu of t om in. int ntion ist t i ntso pl y ontol lg o it m s n t st g inst ot to isov w i stong . us o v n vi w s p i tiv m o loft l tiv st ng t soft s lg o it m s. g wit sti 97 g 2 t t t st n fo m ostt stt t m o l m ustp ssist titm ustp ovi onvin ing nsw sto t qu stionsw putto it . N ot t oug t tt qu stionsw w ntto nsw wit o v notjustw i lg o it m sp fo m tt ut lso why t tt lg o it m s sup io . isis onsqu n of vi wing t sy st m s t st fo s ; t ultim t g o listo v n t t o y of om pl x sy st m s. s oul lso not t t lt oug t isp p on nt t son t vi tu l wo l t y o v t o o up tou n m ntst m slv s lso in lu om p titionsfo t m sof l o ots. n ou vi w t islink isv y im po t nt sin t m ny sultst t s ow t t l wo l xp im nts np o u sultst t oul not xp t f om sim ul tion lon (e.g. s om pson 97 ook s 6). it t is v ts i t n l tusm ov on to t k t il look tou im pl m nt tion of m o l of so .

3

Soccer Server Itself

o v n l s so m t to pl y p og m s(possi ly im pl m nt in i ntp og using o v is ontoll using fo m of o v p ovi s vi tu l so l (su ig u 1) n sim ul t st m ov m ntsof pl y s

tw n two t m sof pl y m m ing sy st m s). m t li nts v om m uni tion. st on w p snt in n ll. li ntp og m

Investigating the Complex with Virtual Soccer

245

Soccer Server Socket

Client

Message Board

Client

Socket Socket

Socket

Client

Referee

Client

Field Simulator

Client

Socket

Network (UDP/IP)

Fig. 2.

X window

v vi w of o

Socket

Client

Network (UDP/IP)

v

n p ovi t in’ of pl y y onn ting to t o v vi om put n two k (using / so k t) n sp ify ing tionsfo t tpl y to y out. n tu n t li nt iv sinfo m tion f om t pl y ’ssnso s. ig u 2 g iv s n ov vi w of ow t o v om m uni t swit li nts. t m in m o ul sin t o v itslf : 1. A field simulator module. is t st si vi tu l wo l of t so l n l ul t st m ov m ntsofo j ts k ing fo ollisions. 2. A referee module. is nsu st tt ul sof t g m follow . 3. A message-board module. ism n g st om m uni tion tw n t li ntp og m s. low of t lt v 3.1

w g iv n ov vi w of oft s m o ul s. m o t il s iption o v n foun in N o et al 9 . lso l g oll tion of m t i l in lu ing sou s n fullm nu ls ism int in tt o om p g : http://ci.etl.go.jp/~noda/soccer/server.

Simulator Module: The Basic Virtual World

so l n llo j tson it 2 im nsion l. ism nst t unlik in lfoot ll t ll nnot k i k in t i n t pl y s nnotm k us of sk illssu s ing o voll y ing . siz of t l m su in t notion lint n l quiv l ntofm t s is10 m × 6 m . n t l t o j tst tm ov (t pl y s n t ll) n o j tst t st tion y (fl g s lin s n g o ls). st tion y o j tss v sl n m k st t li nts n m k us of to t m in t i pl y ’sposition o i nt tion o sp n i tion of m ov m nt. li nts notg iv n info m tion on t solut positionsof t i x own pl y s utonly on t l tiv lo tion of ot o j ts. us t o j ts im po t nt f n pointsfo t li ntp og m s’ l ul tions. ll om m uni tion tw n t s v n li ntisin t fo m of sting s. fo li nts n liz in ny p og m m ing nvi onm nton ny it tu t t st f iliti sof / so k ts n sting m nipul tion. p oto ol of t om m uni tion onsistsof:

246

Itsuki Noda and Ian Frank

– Control commands: ss g ssntf om li ntto t s v to ontol t li nt’spl y . si om m n s turn dash n kick. om m uni tion is on u t t oug t say om m n n p ivil g g o li li nt n lso tt m ptto catch t ll. sense body om m n p o vi sf k on li ntst tussu sst m in n sp n change view sl ts t o tw n qu lity of visu l t n f qu n y of up t . – Sensor information: ss g ssntf om t s v to li nt s i ing t u ntst t oft g m f om t vi wpointoft li nt’spl y . two ty p sof info m tion visu l(see) n u ito y (hear) info m tion. o v is is t sim ul tion of ontinuoustim . us ot t ontol om m n s n t snso info m tion p o ss wit in f m wo k of sim ul to st ps’. l ng t oft y l tw n t p o ssing st psfo t ontol om m n sis100m s w st l ng t of t st p y l fo t snso y info m tion is t m in y t m ost ntchange view om m n issu li nt(w isusst tim ing of t s st psin m o t il in §4 ). N ot t t ll pl y s v i nti l iliti s(st ng t n u y of k i k ing st m in snsing ) so t tt nti i n in p fo m n of t m s iv sf om t tiv us of t ontol om m n s n snso info m tion n sp i lly f om t ility to p o u oll o tiv viou tw n m ultipl li nts. s n lf tu w n invok wit t -coach option t s v p ovi s n xt so k tfo p ivil g li nt( ll o li nt) t t st ility to i t ll sp tsof t g m . o li nt n m ov ll o j ts i t t f m o ul to m k isions n nnoun m ss g sto ll li nts. is f ility is xt m ly usfulfo tuning n ug g ing li ntp og m s w i usu lly involv s p t t sting of t vio sof t li ntsin m ny situ tions. n xt nsion ing onsi fo fu t v sionsof o v is m o i v sion of t -coach option t t llowst m sto in lu tw lft li ntt t s g lo lvi w of t g m n n on u tsi lin o ing u ing pl y y s outing st t g i o t ti l vi to pl y s. 3.2

Referee Module: Playing by the Rules

f m o ul m onito st g m n sin lso g ul t st flow of pl y y m k ing num of isions s on t ul sof t g m ( n noun ing g o ls t ow ins o n k i k s n g o l k i k s n tim k ping ). n t st o o up tou n m nt t f m no isions outfouls. n w v sion of o v ow v into u s n o si ul ing to t possi l ju g m ntsoft f . m ining fouls lik o stu tion’ i ult to ju g utom ti lly st y on n pl y s’ int ntions. s v t fo lso in lu s n int f llowing um n us to instu tt f to w f k i k . in su isions outf i pl y tu lly llfo sig ni ntun st n ing of t g m futu v sionsof o v m y in lu i t so k tfo f li nts t us n ou g ing s on utom ti f ing .

Investigating the Complex with Virtual Soccer

3.3

247

Message Board Module: Communication Protocol

m ss g o m o ul is sponsi l fo m n g ing t om m uni tion tw n t pl y s. si p in ipl of o v ist t li nts oul ontol just on pl y n t t om m uni tion tw n in ivi u l li nts s oul only i outvi t say n hear p oto olsinto u in §3.1. n say om m n isissu t m ss g is o stto all li ntsim m i t ly s u ito y info m tion. n ly v sionsof o v t m s oul us t is om m uni tion to si st p t lo l n tu of info m tion in t g m y fo x m pl ving pl y o st isown lo tion n int p t tion of t g m t tim st p. not possi l st t g y w sto us on li nt in t m (e.g. pt in’) to i t n info m ll t ot s. n t u nt o v ow v not ll t ot li nts g u nt to ll t o stinfo m tion sin t is m xim um ng of om m uni tion of 0m n ny pl y n only on m ss g f om t m u ing two sim ul tion y l s. lso t l ng t of t m ss g itslf is sti t . us tiv n i ntus of t say om m n is n ou g im p ting info m tion only w n itisusful n tim ly. N ot t tt s v n onn twit up to 22 li nts utt tto f ilit t t sting itispossi l fo sing l p og m to ontolm ultipl pl y s( y st lis ing m ultipl so k t onn tions). n om p titiv situ tion su s o o up su p og m s only p m itt on t un st n ing t tt ontolof pl y issp t log i lly. n p ti it is lso not t ni lly i ult to tt s v n st lis int li nt om m uni tion outsi t s v so t on li nton pl y ’ ul is tiv ly g ntl m ns’ g m nt. 3.4

Uncertainty

n o to ty p sof un

fl t t n tu of t l wo l t t inty into t sim ul tion sfollows:

s v

into u sv ious

– N ois to t m ov m ntof o j ts. m ountof t isnois in ss wit t sp of t o j t. – N ois to om m n p m t s. m ll n om num s to t p m t sof om m n ssntf om li nts. – Lim it om m n x ution. s v x ut sonly on om m n fo pl y in sim ul tion y l . li ntp og m n g tf k on ow m ny om m n sitspl y s x ut vi us of t sense body om m n (successful x ution m ustof ou s m onito y t li nts t m slv s;fo inst n dash sno tif pl y ’sp t is lo k ). – n x tsnsing . fu t n o j t t l ss li l t info m tion tu n outitf om t s v .

io

p sn of t s un t inti s nfo st im po t n of o ust n of t tiv m onito ing of t out om sof pl y ’s tions.

v

248

4

Itsuki Noda and Ian Frank

Assessing the Soccer Server

ow n w m ning fully sssst o v ? n §2.3 w sug g st t t t m in t stof g oo m o lisw t itp ovi s nsw sto t qu stionsw sk of it. us sin w onstu t o v to n l t inv stig tion of so w oul sim ply x m in t qu lity of t on lusionsl n out g nt viou in t om in. ow v m o fun m nt l vi w is lso possi l . o inst n sti 97 g 17 176 sum m isssv l p op ti s t t n us to ssssm o ls n sim ul tions: – Fidelity. m o l’s ility to p snt lity to t g n ss y to nsw t qu stionsfo w i t m o l w s onstu t ; – Simplicity. l v lof om pl t n ssoft m o l in t m sof ow sm ll t m o lisin t ing slik num ofv i l s om pl xity ofint onn tions m ong su sy st m s n num of ad hoc y pot ss ssum ; – Clarity. s wit w i on n un st n t m o l n t p i tions/ xpl n tionsito sfo l wo l p nom n . – Bias-free. g to w i t m o lisf ofp ju i soft m o l ving not ing to o wit t pu po t fo uso pu pos of t m o l. – Tractability. l v l of om puting sou sn to o t in t p i tions n /o xpl n tionso y t m o l. N ot t t t s qu liti s m o su tl t n sim ply ssssing w t m o l f it fully ptu s ll sp tsof t p nom non it p snts. L t us x m in t sultsof v lu ting o v g inst of t it i . 4.1

Fidelity

n of t m ost im po t nt onsi tions u ing t v lopm nt of o v w st l v l of st tion fo p snting t li nt om m n s n t snso info m tion. n possi ility w s low l v l p y si l s iption fo x m pl llowing pow v lu sfo iv m oto sto sp i . ow v it w sf ltt tsu p snt tion woul on nt t us s’ tt ntion too m u on t tu l ontol of pl y s’ tions l g ting tu inv stig tion of t m ulti g ntn tu of t m pl y to t l v l of s on y o j tiv . u t itis i ultto sig n low l v l s iption t tisnotim pli itly s on sp i notion of o ot w ;fo x m pl ontolof sp y iv m oto s is i s tow s p y si l im pl m nt tion t t uss w ls. n t ot n mo st t p snt tion m y using t ti l om m n ssu s pass-ball-to n block-shoot-course woul p o u g m in w i t l wo l n tu of so om so su n in w i t v lopm ntof so t niqu snoty t lis y um n pl y s om sp o l m ti . us ou p snt tion using si ontol om m n ssu sturn dash n kick is om p om is. o m k g oo us of t v il l om m n s li nts will n to t k l ot t p o l m sof ontol in n in om pl t info m tion y n m i nvi onm nt n lso t stw y to om in t o tsof m ultipl

Investigating the Complex with Virtual Soccer

249

pl y s. us w li v t t o v i v sou g o l of p ovi ing sim pl t st n wit sig ni nt l wo l p op ti s. oi of st tion l v lis lso l v ntto fu t qu stion t toft n o upi us u ing t sig n of o v : w t t sim ul to s oul sig n to m o l um n so pl y o o ot so pl y. isqu stion sm ny f ts ut in g n l t solution opt in t o v is to f it ful to um n so w n v possi l . justi tionsfo t is in lu t soning t t l wo l so ism o im m i t ly un stoo (n mo tt tiv ) to t su l o s v n t t o ot t nolog y n ng w st n tu of um n so pl y sisl g ly onst nt. lso t los t sim ul tion to l wo l so t m o lik ly t t k nowl g quisition f om um n xp tis (e.g. s nk 97) will i tly ppli l. f ou s t i n s tw n lso n t m o l p snt y t s v m ostnot ly t 2 im nsion l n tu of t sim ul tion. lso t sim ul tion p m t s tun to m k t s v susful spossi l fo t v lu tion of om p ting li ntsy st m s t t n to i tly fl t lity (fo x m pl t wi t oft g o lsis14 .64 m ou l t siz of o in y g o ls us so ing ism o i ultw n t ll nnot k i k in t i ). is qu stion of w t um n so is ing i tly sim ul t is lso im po t nt in t ont xtoft ov ll o o up ll ng w i w isussfu t low. 4.2

Simplicity

ultim t g o lof o o up s s i in K it no et al 97 is to v lop o otso t m t t n tt zilwo l up t m . in t isg o lis w ll y on u ntt nolog i s t o o up initi tiv n s s i sofw ll i t su g o lst t i v l in t s o t n m i t m . m pl m nting li ntsfo o v ison of t s ll ng s. o j tiv of o o up is t t st l v lofsop isti tion of xisting t nolog y im p ov s t sim pli ity of t sim ul tion s oul lt pp op i t ly. n of t p im y on ns outsim pli ity ist tim ing of t sim ul tion y l st tg ov n t p o ssing of om m n s n snso info m tion. sn so info m tion iv y li ntsis i utt tions sim pl . isis n g um ntfo m k ing t tim tw n info m tion up t slong t n t tim tw n tion x ution (i.e. ving n tion p o ssing y l t tiss o t t n t info m tion snsing y l ). f t info m tion snsing y l om s too long t ility to tto t oppon nts’ tions om s m p . n t ot n if t info m tion snsing y l om stoo s o t t n ts of tt m pting to l n n p i tt oppon nts’ tions im inis . u ntl ng t soft s tim st ps om p om is int n to stik ln tw n t s two onfli ting g o ls. 4.3

Clarity

l g num of s s v l y us o v (w v l m ntion t tt st o o up ont stf tu 29 t m sf om t n i

y nt

250

Itsuki Noda and Ian Frank

ounti s). isis vi n t tt wo k ing sof o v n sily un stoo . n onsi ing t l ity ’ oft sy st m t oug w lso v to t k into ountt s wit w i on n un st n t l ssonsl n t oug t ou s ofsu s . o w int st in qu stionslik t sults v m g outof t t m sp o u y t st o o up ont sts? o som xt nt t nsw to t isty p of qu stion is p n nton t o tsof t o v us st m slv s. ow v t o o up initi tiv qui s ll t t m s nt ing ny tou n m ntto w it p p s i ing t i pp o . s p p ss ow t tin t o ig in l ont sts t p og m st tp fo m w llw sim pl o sy st m swit littl l ning ility. o x m pl t t o o up’96tou n m nt l in p n in N ov m 1996 t winning t m w st g l ts w i ssnti lly li on onst ining pl y to st y wit in sm ll p t m in of t pit n to oos tw n sm ll num of o p ssing i tionsw n t y oul t ll. n t o o up 97 tou n m nt l in ug ust 1997 on t ot n t winning t m sw m o sop isti t . winn s f om um ol t niv sity us s s soning n g nto i nt p og m m ing t unn sup us info m ntl ning n t t i pl t m us n xpli itm o l of t m wo k s on jointint ntions. sw ll s m onst ting t l ity of o v its oul point outt tsom tim st us sof t sy st m n s ow lm ostt opposit : t t sig n st m slv s. is t sy st m w snot tu lly fully un stoo y t pp nsw n ug s foun in t sim ul tion t t n xploit to t v nt g oft li ntp og m s. n x m pl ofsu ug w s n o in t im pl m nt tion of st m in . n lly t st m in of pl y ssw n t pl y issu s dash om m n s t us lim iting t pl y ’s ility to m k fu t m ov m nts. ow v itw sfoun y som us st t y using negative p m t sin t is om m n pl y ’sst m in oul in s ! fu t fl w w s foun wit t v sion of o v us fo t o o up 97 ont st in w i p o l m wit t n ling of sim ul tion y l s som tim s llow pl y sto k i k t ll t tim sin v y qui k su ssion t usim p ting t tim st no m l m xim um ’ sp . ow v itisin t spi itof t o o up ont stt tsu ug s po t to t sig n s so t t llus s wo k ing f om n v n footing . isspi itis lso n illust tion of t i sf n tu of o v . 4.4

Bias-Free

lt oug v lop sol ly t t l tot ni l L o to y o v s n t su st nti lly f om t opinions n sug g stionsof m ny us s. n p ti ul t is m iling list i t to isussing o o up on w i onsnsusisusu lly fo m o ify ing t m o l p snt y t sim ul tion (t ism iling list v g ov 200m ss g sp m ont tw n 1997 n 199 ). o g iv f l fo t sp of ng of t sim ul to o o up 97: t is p ti l list of som of t f tu s ng sin ility to see t i tion t t into u tion of t sense body om m n t

Investigating the Complex with Virtual Soccer

251

ot pl y s f ing t into u tion ofo si s t into u tion of g o li n m o lling of t im pl m nt tion of st m in . ll t s ng s sig n to in s p op ti ssu st sim il ity of t sim ul tion to t um n g m t im po t n ofm o lling t oppon nts’ pl y o t im po t n of m o lling t t m wo k ing p o u y li nt’sown t m m t s. ut vi n of t i sf n tu of o v is t p sn of pu li li y of o (t o o up im ul tion o iv lo t thttp://www.isi.edu/soar/galk/RoboCup/Libs/). s s n us t is o to lp u t v lopm nttim of o o up li nts. om o issp i to p ti ul situ tions w ssom isg n l noug to p ovi inf stu tu fo om pl t g nts(t o fo t winning li ntsf om o o up 97 is v il l ). n p ti ul t iv s ont in t li sli nt’ li y oll tion of low l v lfun tionsint n to st t w y m ny t ilsof t sk ssu sm int ining so k ts p sing t info m tion f om t s v n sn ing om m n sto t s v . 4.5

Tractability

o v sto sim ul t g m of so n om m uni t wit li ntp og m in ltim . sy st m n un on o wo k st tion wit only m o st m m o y n p o sso sou s( lt oug ty pi lly us will lso qui tl ston fu t om put to un t t m sof li nts). m o p o l m ti lim it tion on o v in p ti ist p ity oft n two k n l to onn ting t sy st m to t li nts. l g lo on t n two k ollisionst tp v nt li nt om m n s ing o v o info m tion ing tu n . n som t m s t o o up 97 foun it n ss y to li t t i softw to t p ti ul on itionsfoun t t tou n m nt sit . ow v it is i ult to p i tt x t tsof ng sin n two k on itions so li ntp og m swit som fl xi ility in t i int p t tion of t st t of pl y n ou g . n g n l t ism y not n v s p op ty of t sy st m ;t n to un st n pt tion n to op wit only lo l info m tion w two oft p im qu liti sof om pl xsy st m si nti in §2.1.

5

Conclusions: Towards a Theory of the Complex

v isuss ow t g m of so is g oo x m pl of om pl x sy st m w ll suit to t t sk of m o lling . lso s i n ssss ou im pl m nt tion of o v l ify ing t n tu oft m o lit p snts. L t us los y noting t t t ll ng s p snt y t om in of so v ntly l to it ing p opos s n w st n p o l m fo s K it no et al 97 . f ou s t notion of st n p o lm s long n iving fo fo ng in ing s . isto i lly fo x m pl t pt n of t u ing st’ u ing 0 fo us tt ntion on t m im i k ing of um n vio s t st of m in int llig n . o ntly ss s iv sig ni nt tt ntion g n ting im po t nt v n sin t t o y of

252

Itsuki Noda and Ian Frank

s lg o it m s n s t w y st t um ns pp o

ontol sw ll sm otiv ting og nitiv stu i sinto t s m p o l m s(e.g. s L vinson et al 91 ).

o (in p ti ul ont stto ss) is y n m i l tim m ulti g nt sy st m wit in om pl t info m tion. s o v s vi tu lwo l t t p ovi s toolfo inv stig ting su om pl x om ins. t o st is y p ovi ing p i tiv m o l of t st ng t sof softw lg o it m sfo ontolling g nts in t s nvi onm nts. n t wo sof sti 97 o v is woul wo l t t st p ity of s ving s l o to y wit in w i to t stt y pot ss outt p nom n it p snts . tisou op t t fo om pl xsy st m ssu sso t o tsof s susing t l o to y of o v will tify t situ tion w tp snt t s m sto no k nown m t m ti lstu tu swit in w i w n om fo t ly om m o t s iption sti 97 g 214 .

References

Brooks 86.

R. A. Brooks. A robust layered control system for a mobile robot. Journal of Robotics and Automation, RA-2(1), 1986.

Casti 97a.

John L. Casti. Would-be Worlds: how simulation is changing the frontiers of science. John Wiley and Sons, Inc, 1997.

Casti 97b.

John L. Casti. Would-be worlds: toward a theory of complex systems. Artificial Life and Robotics, 1(1):11–13, 1997.

Frank 97.

I. Frank. Football in recent Times: What we can learn from the newspapers. In First Intl. Workshop on Robocup in conjunction with RoboCup-97 at IJCAI-97, pages 75–82, Nagoya, Japan, 1997.

Huizinga 50.

Johan Huizinga. Homo ludens: a study of the play element in culture. Beacon Press, 1950. ISBN 0–8070–4681–7.

Kitano et al 97a. H. Kitano, M. Asada, Y. Kuniyoshi, I. Noda, E. Osawa, and H. Matsubara. RoboCup: A challenge problem for AI. AI Magazine, pages 73–85, Spring 1997. Kitano et al 97b. H. Kitano, M. Tambe, P. Stone, M. Veloso, S. Coradeschi, E. Osawa, H. Matsubara, I. Noda, and M. Asada. The RoboCup synthetic agent challenge 97. In Proceedings of IJCAI-97, pages 24–29, Nagoya, Japan, 1997. Levinson et al 91. R. Levinson, F. Hsu, J. Schaeffer, T. Marsland, and D. Wilkins. Panel: The role of chess in artificial intelligence research. In Proceedings of the 12th IJCAI, pages 547–552, Sydney, Austr alia, 1991. Noda et al 98.

I. Noda, H. Matsubara, K. Hiraki, and I. Frank. Soccer Server: a tool for research on multi-agent systems. Applied Artificial Intelligence. To appear.

Investigating the Complex with Virtual Soccer Thompson 97.

Turing 50.

253

A. Thompson. An evolved circuit, intrinsic in silicon, entwined with physics. In T. Higuchi et al, editors, Proceedings of ICES ’96 — The First International Conference on Evolvable Systems: from biology to hardware, pages 385–400. Springer-Verlag, Berlin, 1997. A.M. Turing. Computing machinery and intelligence. Mind, 59:433– 460, 1950.

Webots: Symbiosis Between Virtual and Real Mobile Robots livi

i

l

Microprocessor and Interface Lab, Swiss Federal Institute of Technology, Lausanne, Switzerland Tel: ++41 21 693 52 64 Fax: ++41 21 693 52 63 [email protected], http://diwww.epfl.ch/lami/team/michel

Abstract. This paper presents Webots: a realistic mobile robot simulator allowing a straightforward transfer to real robots. The simulator currently support the Khepera mobile robot and a number of extension turrets. Both real and simulated robots can be programmed in C language using the same Khepera API, making the source code of a robot controller compatible between the simulator and the real robot. Sensor modelling for 1D and 2D cameras as well as visualisation and environment modelling are based upon the OpenGL 3D rendering library. A file format based on an extension of VRML97, used to model the environments and the robots, allows virtual robots to move autonomously on the Internet and enter the real world. Current applications include robot vision, artificial life games, robot learning, etc.

1

Introduction

utonom ou o oti p nt v y wi in lu in ontol m in l nin volution y om putin ti i l li vi ion m n-m in int m ni l i n t. n o to inv ti t t i o tn m u o l o ot w ll o tw tool o im ul tin n m onito in t l vi .T i p p p nt n w n tion o m o il o ot im ul to op n to t lwo l n to t nt n t.T ot o tw i t tt m ptto inv ti t t i p om i in y p opo in li ti n o im ul tion n t n nition l n u o vi tu l n l o ot.

2

Real Mobile Robots

o il o to o t ult

o otlo w l o ot i om t

om otion i i v y quippin o ot wit w l o l . o ot ly on two in p n ntm oto w l .T lin p iv n t v p o w l n t n ul p i n o p tw n ot w l . t w l o ot

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 254–263, 1998. c Springer-Verlag Berlin Heidelberg 1998 

Webots: Symbiosis Between Virtual and Real Mobile Robots

255

v w l ontoll y n tu to nin t o i nt tion o t o ot n oupl o w l onn t to in l m oto nin t p o t o ot.L o ot u u lly v tw n two n i tl .T wo-l o ot o t n um noi . o t ou -l o ot ty to qui t pp n n y n m i o m m m li n 2 w il ix l o ot ty to im it t in t it (li t lt n t tipo it). L o ot mu mo i ultto i n n ontolt n w l o ot utt y m o p om i in in t y n n l om pl x nvi onm nt ( t i ou t in) w t i w l ount p t l quit un om o t l . T K p m ini- o ot9 i 5 m i m t m o il o ot i ti ut y K -T m . . ti wi ly u o n u tion pu po . t it own 6 331 m i o- ontoll n m m o y two in p n nt m oto w l n i tin n o . ti po i l to ontolt o ot y o - om pilin n ownlo in ny u p o m w itt n in wit t K p ( ppli tion o m nt ).T i p ovi to t n o n tu to o t o ot n i om p ti l wit t ot im ul to .T ti t m ou o n om pil it o t l o oto t im ul to wit out n in lin o o . lt n t ly it i po i l to ontol t o ot wit m ot om put onn t to t o ott ou i llin .T m oto w l o t o ot n t i nt po itiv o n tiv p v lu o t tt o ot n m ov o w w tu n i to l t n pin oun . T in n o u to t to t l oun t o ot. T y n l o u to m u t l v l o m i ntli t.

Fig. 1. L t ipp tu t.

i K

p

m ini- o ot. i

t

K

p

quipp

wit

256

Olivier Michel

T K p o oti xp n l num o xt n ion tu t i v il l n p ovi t o otwit n w n o n tu to .Lin n olou m tix vi ion tu t llow t o otto p iv tt it nvi onm ntw il ipp tu t llow t o otto p o j t n m ov t m oun ( u 2).

3 3.1

Mobile Robot Simulation Simulation Software

o n mo u m o il o ot im ul tion to v m on y n v lopm nttim . n u im ul to n u to i nt m ni l tu tu o o ot w ll to v lop int lli nt ontoll ivin t o ot. im ul to p i lly pp i t w n u in om put xp n iv l o it m o l nin o volution o int lli nt ontoll .T y oul on i p ototy pin tool . n t pit t tit n m on t t t tt inin o volvin o ot in t l nvi onm nti po i l t num o ti l n to t tt y t m i ou t u o p y i l o ot u in t t inin p io . 3.2

Realism Versus Symbolism

T

t o ot(o utonom ou nt) im ul to w i n to o v ov viou (l nin volution m ulti- nt y n m i ). n t y i n’t n to m o l li ti n o n m oto . T nvi onm nt m o l w lo v y im pl o t n m up o i . j t w ly in on t i i n t o ot oul jum p om on i qu to not .T im ul to i to y m oli u n o tu n y m oli v lu li n ppl in t ov qu . n u im ul tion oul ly t n to l wo l wit o ot quipp wit l n o n tu to . T in o om put pow p i lly in 3 p iliti llow o m o n m o p i m o l l in to m o n mo li ti im ul tion . li ti im ul to u u lly p i to o oti vi . om m o il o ot om p ni (li N om i n .) v lop li ti m o il o ot im ul tion n m onito in o tw o t i m ily o m o il o ot. ow v u im ul tion o not n l vi ion n o . ll

3.3 K

Khepera Simulator

p im ul to 5 i im ul tion o tw p i to t K p o ot. t li on 2 nvi onm ntm o llin .T in n o m o ll o t t t y n n l ot li tm u m nt n o t l t tion. T u n w it p o m to ontolt o ot.T im ul to i l to iv l o ot onn t t ou t i lpo to t om put . t ny tim iti po i l to wit tw n t im ul tion n t l o ot y li in on u int utton. T n t o ot ontoll t input om n n output to t l o ot. T o ot ontoll n i pl y ny t xt o p i in o m tion

Webots: Symbiosis Between Virtual and Real Mobile Robots

257

in p i win ow o t tt o v n un t n w t i oin on wit t o ot. ulti- o ot im ul tion uppo t . T i im ul to w m v il l o on t nt n tin 1995 . in t i t m o t n 1200 p opl ownlo it n m ny o t m u in it o n u tion pu po .

4

Webots

(1)

(2)

(3)

Fig. 2.

4.1

no m i vi ion m o ul (1) 3

n

ito (2) n

o ot ontol(3)

Virtual Robots in a Network of Virtual Environments

ot i n m itiou p oj tin utonom ou o lw to im p ov K p im ul to y

nt im ul tion. u p lim in y in vi ion n o m o ll wit

258

Olivier Michel

li ti 3 n in n in ( u 4) n y op nin t im ul to to ny o oti it tu (w l o ot l o ot n o ot m ). T p iliti to u om p ti l ou o tw n l o ot n im ul t o ot i n w int tin p p tiv in im ul t o ot n y om ti l tu tu ontoll n po i ly t m m o y u o ot l to m ov om on vi tu l nvi onm ntto not t ou t nt n t. n w m y on i n two o vi tu l nvi onm nt popul t y vi tu l o ot. llt nvi onm nt woul onn t to t y vi tu l t t t o ot oul nt to m ov to i t ntvi tu l nvi onm nt unnin on m ot om put . n o to p opo i o tn i p oto ol llowin vi tu l o ot to m ov on t nt n t t ot nition l n u w i n n xt n ion o L9 ( i tu l lity o llin L n u ).T on pto lin up im ul tion to t l v l o t w ol nt n ti im il to t T i p oj t11 . 4.2

Virtual Robots Entering the Real World

y m io i tw n vi tu l n l o ot i m po i l wit lity t . vi tu l nvi onm nt lity t llow t vi tu l o ot to nt t lwo l vi tu l o ot nt u t it ontoll n t m mo y ownlo onto l o ot o t t t oul o t im ul t o ot n nt t o y o l o ot n ontinu to un. T i i m po i l wit t K p t t n u ou o om p ti ility tw n vi tu l n l o ot. ow v o - om pil tion t i m n to y to p o u n x ut l l unnin on t l o ot. ou wy to t im ul tion oul n in t l wo l . t n o to w n t o ot nt it top n it ontoll n t m mo y nt to t o t om put unnin t im ul t wo l .T onn tion tw n t l wo l im ul tion n t nt n t lo to t on pt v lop in t l o oti y tm n oul om t o ti l i o n w ppli tion in t l op tion n u v ill n . n

4.3

Realistic Sensor Modelling

no to i v flu ntt n to t lwo l n o m o llin i n im po t nti u . T p o l m o t v li ity o im ul tion i p ti ul ly l v nt o m t o olo i t tu m in l nin t niqu to v lop ontol y t m o utonom ou o ot. ny t niqu m y u to i v li ti n o m o llin n u t m o lo p ti ul o ot- nvi onm nt y n m i n uilt y m plin t lwo l t ou t n o n t tu to o t o ot 3. ot u t i m plin p in ipl to m o l t pon o t in n o o t K p ( u 3).T p o m n p tw n t o t in viou in im ul t n l nvi onm nt m y i ni ntly u y into u in on v tiv o m o noi in t im ul tion . in p o m n i o v w n t y tm i t n to t l

Webots: Symbiosis Between Virtual and Real Mobile Robots

nvi onm nt u ul n o u t ult ptiv p o in t l nvi onm nt o

Fig. 3. n

4.4

-

n o

mo

llin

n

o t in w m o it

o o t l

259

y ontinuin t tion .

t tion.

Robot Metabolism

T on pt o o ot m t oli m 6 i u ul to v lu t t viou o itu t o ot in t i to w w y. t oul on i in o tn un tion. T o oti iv n n initi l m ounto n y o pon in to oo o l ti l pow upply. n t o otm ov u it n o n tu to m in it n y l v l lowly (no m l on um ption). T n i t o ot p o m tion o x m pl um p into w ll it loo n i m ount o n y (puni m nt). ut i it p o m oo tion i. . n in n onn tin to t tion it n i m ounto n y ( w ). ow v oo n tion p n . n t i n o t vi tu l ( n l) nvi onm nt v to uil t i n io upon t i i. . m t t tion up n y only w n t o ot ul ll n it y t in t nvi onm nt o t vi t t p n t o ot ( um p) w n it v w on . n t n y l v lo o ot zo t o ot i it ontoll p o m top n it t m m o y i .T n it i up to t i n o t nvi onm nt to m ov t i o y (o ownlo n w ontoll n m m o y in i ). T i w y it i xp t t t l tion p o will m o ot utom ti lly i pp w il oo o ot will u viv .

260

5 5.1

Olivier Michel

Current Applications Vision

y nt ti i ion 1012 o m n o m o llin ly on pow ul3 n in p n L. n ot n im i o t in vi p n L n in t in into ountt l o vi w li tin on ition m t i l t . T n t i im i p o to opti l i to tion n v ntu lly om n om noi . m o lo 24 0 p no m i m ( u 4) w v lop u in two n im on ov in 120 .T i i u p n L n in n in o n’t uppo t l o vi w t t n 1 0 . om m t m ti l t n o m n y to p t on im n xt to t ot in o to m p im p op ly. T m i lin t ti t ultin im on pix l i t. in iti vi p ovi in up to 64 y lv l tn L olou t n o m n ppli to t y l v l om olou im . o oti xp im nt u in ot l o ot n t ot im ul to m on t t t po i l t n o ontol l o it m 4 . t n 2 olou m m o l (wit 60 l o vi w) i l o v il l in ot. li

5.2

Artificial Life Games and Robot Football

o ot m n p i lly oot ll tou n m nt tu n outto v y tim ul tin v nt o t m o il o oti om m unity. n t v iou t m o il o ot to in o to pl y oot ll v y int tin om t pointo vi w o t utonom ou nt t o y om p tito m u t t o i n p in ipl o utonom ou nt m ulti- nt oll o tion t t y qui ition m in l nin l-tim onin o oti n n o u ion. T num o o ot oot ll tou n m nt in l ntly i quit im p iv –

– – –

– – –

utonom ou o ot oot llT ou n m nt( i ton - K uly 199 Lon on - K uly 199 ). http://www.dcs.qmw.ac.uk/research/ai/robot/football/ FirstARFT.html n SecondARFT.html oup = 6 o otiqu (L tn n y 199 ). http://www.ifitep.cicrp.jussieu.fr/coupe98.html ni m pion ip in o ot oot ll ( u nm m 199 ).http://www.daimi.aau.dk/ hhl/robotDME.html tiv l nt n tion l i n tT nolo i ( i n uly 199 ) http://www.planet.fr/techno/festival/festival98/ Robot Footballeur.html i o oT (T jon - K o uly 1996 T jon K o un 199 i n uly 199 ).http://www.fira.net, http://www.mirosot.org o xpo ( o - witz l n 199 ) uni T ni l niv ity T ou n m nt( uni m ny uly 199 ). http://wwwsiegert.informatik.tu-muenchen.de/lehre/prakt/ khepera/bericht/bericht.html

Webots: Symbiosis Between Virtual and Real Mobile Robots



o o up (N oy - p n http://www.robocup.org

Fig. 4. o o up o . n ll )

u u t199

l 2x5 K

p

i -

n

o ot pl y in

261

uly 199 ).

o

(m o

ll

y

om o t tou n m nt in lu l wo l l u w ll im ul tion l u wit o i l o oti im ul to . ut u u lly t o tw not li ti .T y o t n ly on noi y m oli t t twoul lm o tim po i l to t om l o ot wit l n o . n t t n om im ul tion to l o oti not on i . o ov non o t im ul to m o l vi ion n o w i pp to v y ton lim it tion o oot ll m -pl y in . om l y t t to u ot to p p o o ot oot ll tou n m nt n ot m ulti- nt m . . n ll om ov niv ity ( t ly ) v lop o l o t o o up tou n m nt( u 5 .2) n i v lopin int lli nt ontoll u in 2 olou vi ion n o v il l in ot.

6

Learning, Evolution, and Multi-Agent

in o otl nin o t n m u o om put xp n iv l o it m . volution y l o it m w ll l nin y t m u n u l n two u u lly qui l m ount o om put tion lpow w i i not lw y v ill on l o ot. o ov o- volution o o ot p ov to l to int tin ult 1 . n o tun t ly w n involvin m o t n two o ot u xp im nt n ly on on l o ot. im ul tion v y w ll uit o u m wo . T K p im ul to o tw i o tn u to p up l nin o volution y p o .T ot o tw o n w

262

Olivier Michel

n o (vi ion) n n w up vi in p iliti to ilit t om pl x pow on um in xp im nt wit volution l nin n m ulti- nt. o ov wit t po i ility to t int onn t vi tu l nvi onm nt vi vi tu l t it om i l to t-up volution y xp im nt wit popul tion o vi tu l ( n l) o ot o m in t w ol nt n t. T i i mi t ni w y to i v v y i om put tion lpow y i ti utin xp im nt on v lm in llov t wo l n l tt m un only w n t ou notu t ti u in t ni t. o ov i vi tu l o ot w o t lo on t i lo l o t om put n i t t o in low i t in (i. . om t n tu l l tion pointo vi w) t y oul i on t i own to m ov to not om put wit m o ou v il l 11 .T i w y w oul im in t t om put on t i o t t woul wlin wit vi tu l o ot m ovin lowly om om put to om put to m in in t i n n n to l om put o pow .

Fig. 5.

7 T

ulti- o ot im ul tion

Conclusion and Perspectives

ot im ul to tpot nti liti o v lopm nt p i lly i n ti i l m t oli m i on i . tm i tl to t m n o in o iov o vi tu l o ot i ti ut in u -n two o int onn t vi tu l nvi onm nt. o ot volvin in t vi tu l p oul nt t lwo l om pli om m i ion n o to t nt n t in in

Webots: Symbiosis Between Virtual and Real Mobile Robots

263

om in o m tion t in t l wo l . nt n tu oul int twit t livin o ot. T y oul v t i own p t- o ot n t t m to o m ny t in ( m u v ill n t l op tion). tu lly t p p tiv o v lopm nto u t nolo y unp i t l utw li v t ot willon on n onti ut to t v lopm nto ti i lli in t n t t itwill p ovi pow ul tool o inv ti tion n on ont tion o ult n on t ot n itwill n w im n ion to t nt n t y onn tin l o ot to n two o vi tu l wo l .

References 1. Floreano, D. and Nolfi, S.: Adaptive Behavior in Competitive Co-Evolutionary Robotics. In P. Husbands and I. Harvey, Proceedings of the 4th European Conference on Artificial Life, Cambridge, MA: MIT Press, (1997). 2. Fujita, M. and Kageyama, K.: An Open Architecture for Robot Entertainment. In Proceedings of the First International Conference on Autonomous Agents, Marina Del Rey, California, USA (1997) 435-442. 3. Jakobi, N.: Half-baked, Ad-hoc and Noisy: Minimal Simulations for Evolutionary Robotics. In Advances in Artificial Life: Proc. 4th European Conference on Artificial Life, Phil Husbands and Inman Harvey (eds.) MIT press (1997). 4. Lopez de Meneses Y.: Vision Sensors on the Webots Simulator. Submitted to VW’98. 5. Michel, O.: Khepera Simulator Package version 2.0. Freeware mobile robot simulator dedicated to the Khepera robot. Downloadable from the World Wide Web at http://diwww.epfl.ch/lami/team/michel/khep-sim/ (1996). 6. Michel, O.: An artificial life approach for the synthesis of autonomous agents. In J.M. Alliot, E. Lutton, E. Ronald, M. Schoenauer, and D. Snyers, editors, Artificial Evolution, volume 1063 of LNCS, Springer Verlag, (1996) 220-231. 7. Michel, O., Saucy, P. and Mondada, F.: “KhepOnTheWeb”: an Experimental Demonstrator in Telerobotics and Virtual Reality. In Proceedings of the International onference on Virtual Systems and Multimedia (VSMM’97). IEEE Computer Society Press (1997) 90-98. 8. Miglino O., Lund H.H. and Nolfi S.: Evolving Mobile Robots in Simulated and Real Environments, Artificial Life, (2), 4, (1995) pp.417-434. 9. Mondada, F., Franzi, E. and Ienne, P.: Mobile robot miniaturisation: A tool for investigation in control algorithms. In: Yoshikawa, T. and Miyazaki, F., eds., Third International Symposium on Experimental Robotics 1993, Kyoto, Japan (1994). 10. Noser, H. and Thalmann, D.: Synthetic vision and audition for digital actors. In Proceedings Eurographics, Maastricht, The Netherlands (1995) 325-336. 11. Ray, T.S.: An evolutionary approach to synthetic biology: Zen and the art of creating life. In C.G. Langton, editor, Artificial Life, an overview, MIT Press (1995) 178-209. 12. Terzopoulos D., Tu, X. and Grzeszczuk, R.: Artificial fisheds: Autonomous locomotion, perception, behavior, and learning in a simulated physical world. In Artificial Life, (1), 4, December (1994) 327-351.

Vision Sensors on the Webots Simulator u i Lop z

M n ss n Olivi

Mi h l

Microprocessor Lab (LAMI) Swiss Federal Institute of Technology (EPFL) Lausanne, Switzerland. Tel: +41 21 693 39 08 Fax: +41 21 693 52 63 [email protected], [email protected]

Abstract This paper presents the implementation of the panoramic, linear, artificial retina EDI within the frame of the Webots mobile-robotics simulator. The structure and functioning of the simulation are presented. Realistic sensor readings and software compatibility are the two key concerns of the simulator. A realistic sensor output is obtained by modelling the physical processes underlying the sensors. Interface compatibility between the real and simulated retina has been made possible through the use of the Khepera API. The simulator is validated by running a control algorithm developed on a real robot equipped with such a vision sensor and obtaining the same behavior.

1

Introduction

im ul to sh v l y s n us in m o il s h sto v lop om pl x l o ith m s l n som tim s xp nsiv h . sim pl n y t li l m o il o ots su h h s llo t p to th m o il - o om sim ul to sto l pl to m s.

o oti s 6 2 3 10 . T h y llo ith outh vin to su om un lintly th possi ility o o k in ith sth h p 7 n om oti s s h om m unity to s it h

o v som sstill n t om th us o m o il - o oti ssim ul to s. o inst n m ultipl -p tn p oj tsin h i h th s h t sk s isti ut m on sv lp tn s h il h vin sin l o oti pl to m on h i h to t st. n th s situ tions o otsim ul to s p ti l solution to fl xi ly t st th o ot in i nt nvi onm nts ut th y sh oul listi nou h to llo th v lop l o ith m sto us on th l o ot. noth th t n ts om o oti sim ul to sis volution y o oti s n o otL nin . xp im nts s on th pt tion p iliti sth t volution p ovi s l o ith m s) o sim pl l nin y n in ivi u l o ot( in o m nt ( n ti L nin ) n un o sv l y so k s.T h t n sily ov th m n tim t n ilu s(M T ) o th h n it n v y uousto sup vis y h um n i h isint v ntion isn .Th o m ny o k son l nin n pt tion h v n v lop on sim ul t o ots 10 2.

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 264–273, 1998. c Springer-Verlag Berlin Heidelberg 1998 

Vision Sensors on the Webots Simulator

265

M o il - o oti ssim ul to s n ou h ly l ssi into t o l sss h i h h v n sp o sso - s sim ul to s n snso - s sim ul to s. n th o m th o otsusu lly m ov in is t o k sp h th su oun in o st l so upy llsin th i 9 .T h o ot iv sinputs om n i h o in lls n th s inputs o t n o sy m oli n tu su h s“ p son isin th ont ll . T h p o ssin p iliti so th m o il o ot ll sim ul t utnotth snso y on s. T h s sim ul to s v y st n th y o to v lop om pl x l o ith m s. o v th l o ith m s i ultto t ston l o ots.On th oth h n snso - s sim ul to sty to m im i th spons o th snso sto th su oun in nvi onm nt . T h is ui s om ti n ph y si lm o lo th nvi onm nt s ll s u t m o llin o th ph y si l p o ssson h i h th snso s s .T h sim ul t o otm ov sin ontinuoussp int p tin th snso si n ls n i in th tionsto tk n s on th isin o m tion. nt x m pl o su h snso - s sim ul to s Th otssim ul to 4 is o m o il o oti s. tuss n xt nsion o V M L (V i tu l lity M o llin L n u ) to sto s iption o th o ot n its nvi onm nt n n im o th s n n o t in y usin n Op n L- om p ti l ph i s li y. T h isin tu n llo sto sim ul t vision snso s h i h h v l y s n i ultto into u in o oti sim ul to s.T h isp p p sntsth sim ul tion o p ti ul vision snso th ti i l tin ith in th otssim ul to . tion 2 into u sth ti i l tin n s tion 3 isusssits un tion l sim ul tion. tion 4 ls ith n xp im nt om p in th h vio o l n sim ul t h p o ot uipp ith su h tin . T h on lu in m k s isussth p oj t its ppli tions n utu o k.

2

The Panoramic, Linear, Artificial Retina EDI

1 Th p no m i lin ti i l tin is n n lo -V L vision h ip si n y Oliv L n olt t M ( u h tl itz l n ) int n to v lut th p iliti so su h ioinspi snso s.T h M i op o sso L o to y (L M ) o th L is u ntly o k in on th ppli tion o su h snso sto m o il o oti s n m ostp ti ul ly on h p o ot.

Th tin h s lin y o 15 0 ph oto io sly in on n th t ov s24 0 so vi . ut sitsn m su sts th ism o th n sim pl vision snso p l o t ns u in li h t. sin m ost nim l tin s som sim pl si n l p o ssin t k spl in th vi inity o th ph oto t to s still in th n lo om in. T h islo l p o ssin h sth t o lt in th in o m tion u in its n i th n p ovi in th n xtst ith m o l v nt in o m tion. n s s 3 sistiv i s 11 n om in to 1

EDI is a shorthand for ED084V2A, the CSEM’s device number.

266

Yuri L´ opez de Meneses and Olivier Michel

pply lo p ss n p sso h i h p ss lt on th im . n o -im pulsspons lt n lso ppli p o u in th iv tiv o th im .T h lt output n ss y m i op o sso th nk sto th int n l / onv t p snton th tin . n th is y 64 - y -l v lim n us y th o ot o ny u th p o ssin . Th tin h s n m ount on h p tu tk no n s no m i tu t to th ith th n ss y n ill y vi s n i s i uits.T h tu t h s n in lu s plu -in m o ul in th otssim ul to . T h ollo in s tion isusssth sim ul tion o th no m i tu t n th tin .

Figure1. T h tin .

3

h p

no m i tu

tin o po t s n

ti i l lin

The Panoramic Turret on the Webots Simulator

T h sim ul tion o th tin ollo sits un tion l lo k s h i h om p is th opti s th ph oto t to s th lt s n th i it l onv sion. h lo k ill i fly isuss in th su s tionsth t ollo . n l su s tion ill ov th so t int n its om p ti ility ith th l tin . 3.1 Th

Optics

st st p o th sim ul tion isto l ul t th in i nt li h t int nsity on h ph oto t to . ill onsi th pix l v lu so th Op n L n in n in sth li h tint nsity llin on th pix lsu .T h l tin h s sph i m i o n l nsth t o usth im on th ph otosnso s h i h li on th h ip ss n in u 2. lly th in i ntli h tth t h sth sili on su sh oul th s m sth in i ntli h tth th its in on th sph i m i o . T k in th ish y poth sis ill ty to o t in th in i ntli h ton th is in .

Vision Sensors on the Webots Simulator

267

Spheric Mirror Lens

Support

Photodetectors

Chip

Figure2. T h

opti so th

tin

T h Op n L n in n in p o u sth im o th s n sit oul s n y fl ts n m . o v th tin issph i l sits s n im on 24 0 sin zim uth n 11 sin l v tion. T o solv th is p o l m th 24 0vi iso t in om t o 120( zim uth ) y 11 s( l v tion) p oj tionson fl tsu h on p oj tin on 75 x7 pix l y ssh o n in u 3.

120

120

Flat Screen

Flat Screen

Edi Circular Retina

Figure3.

om position o

24 0-

s y lin

i lvi

into 2fl tvi

s.

xt oth ys p y usin th t nso m tions s it n in om u 4 . T h fl t-s n oo in t sx y t nso m into th y lin i l oo in t sφ n θ s un tion o th o l ist n f h i h is u l to th iuso th y lin x = f · t n(θ) =⇒ θ = y=



x2

f 2 · t n(φ) = f · t n(φ) ·

1

t n(xf )

t n 2 (θ) =⇒ φ =

t n(√

y ) x2 +f 2

268

Yuri L´ opez de Meneses and Olivier Michel

v ntu lly th v ti lpix ls v to o t in t o 75 x1 pix lv to s n th t o v to s on t n t to p o u 15 0pix l v to .

P=(r, θ,φ) 0 1 0 1

1 0 0 1 0 1

y

Focal Point

x

φ θ

f

Optical axis

Focal distance

Figure4.

3.2

oj tion o

o l in sph

i l oo in t son to

fl ts

n

Photodetectors and Normalization Circuit

T h im o t in in th p vioussu s tion is onsi s m su o th in i ntli h tint nsity on h o th 15 0ph oto t to s. T h im is onv t to y s l y usin th to (lum in n ) t nso m tion o th L T V st n . T h ist nso m tion iv s t in i h t to h olo h nn l. tt m o liztion oul h i v i th sp t l spons o th ph oto t to s us to n t th lum in n .T h lum in n th us n t is onsi sth ph oto u nton h ph oto io . xt no m liztion i uit ivi s h pix lsph oto u nt y th v u nto th h ol y. T h is o sto o t in n illum in tion-inv i ntim th tis n im th t ill not p tu y h n sin th lo l illum in tion sttin s o th s n . T h no m liz u ntissto in th m o lsanalog[ ] y. 3.3 Th

Filtering Layer

lt in l y sim ul t sth lt in p iliti so th ti i l tin .T h s im pl m nt ith V L sistiv i usion n t o k s 11 th t n om in to nh n th im n to xt tsom h t isti tu s om it. t isti th t n sim pl sistiv i usion n t o k h s sp ti l-lo p ss h us to u nois in th im n t tuni o m ions. Oth lt s n t y om inin su h n t o k s. o inst n th h i h p sso nh n in lt iso t in y su t tin om th o i in l im itslo p ss

Vision Sensors on the Webots Simulator

269

v sion.T h o im puls- spons lt is n t y su t tin th output om t o uni i tion l i usion n t o k s. tp o u sth iv tiv o th inputim n it n us to t tsh p sin th s n . 1puls

sistiv i usion n t o k s h v spons in th sp ti l n sp ti lh(n) = e

− n λ

slo -p ss lt so u n y om ins

F

⇐⇒ H(ω) =

λ π

·

xpon nti lim -

1 1+(λ ω)2

sistiv i usion n t o k s p ll l lo lly - onn t sy st m s th tp o m om put tionson l-tim sis. T h ism k sit i ult to t on sin l p o sso . o v sin th sy st m ism outo sisto s th tis lin vi s it n sh o n th tth outputisth onvolution o th input si n l ith k n l o im puls spons o th o m sth t n s n in u 5.

30

25

20

15

10

5

0 65

Figure5. m puls

Th im puls 3.4

70

75

80

85

sponsso th lo p ss(-) h i h p ss(- -) n o

sult o onvolvin th analog[ ] spons issto in th filtered[ ]

y

ith th

(-.) lt s.

sistiv n t o k

y.

Digital Conversion

Th tin h s 6- itsu ssiv - pp oxim tion / onv t th t i itizsth output u nt om th sistiv i into 64 y l v ls. in th n lo lt s n p o u n tiv v lu s th h i h itisus s si n it. ] ) p n s T h onv sion n y th o m ul digital = 31.5 · (1 f iltered[ Ibda p snts i s u nt on th h ip. n th on th v i l Ibda h i h sim ul to th isv i l h s n st to v lu th t m k sth output ly s tu t h n th is sin l m xim lly ilum in t ph oto t to .

270

Yuri L´ opez de Meneses and Olivier Michel

3.5

Software Interface: The Khepera API

Th n tso su h snso - s sim ul to oul losti th l o ith m s v lop on otssim ul t o ot nnot sily ppli on l o ot. T o th t n h v v lop om m on int o th l n sim ul t no m i tu t n h p o ot in th o m o n ppli tion o m nt th h p 5 . uh om m on int llo sth us to us th s m sou o ith out h vin to h n sin l om m on oth pl to m s. T o i ntli i s on o th l o ot n on o th vi tu l on p ovi ith th otssim ul to . Th h p ont inssv l un tionsto ssth no m i tu t. om un tions llo th us to n l o is l iv n lt n oth un tions us to th output o th si lt . T h on u tion un tions llo th us to stth lt sp ti l uto u n y th y -s l solution n th ion o int st.

4

Experiments: A Robot Regatta

T o n ly z th v li ity o th otssim ul to n th no m i tu t h v un on sim ul tion ontol l o ith m th t s l y ti on th l h p .T o o so st h to it th ol o usin h p n t stiton th l o ot. T h n ppli th sultin o to th ots sim ul to n o s v h th th s m h vio so t in . Th l o ith m om in t o i nt h vio s o t inin n m nt h vio .T h st h vio is s on it n so st l voi n h vio 1 usin th snso so th h p o ot. T h p oxim ity -snso v lu s ontol th m oto om m n th ou h -o n u l n t o k sit n s n in u 6. T o i sn u ons nsu o m otion o th o ot in th sn o ny -snso stim uli. T h s on h vio isli h ts k in y usin th no m i tu t. T h o im puls- spons lt isus to o t in iv tiv o th im om h i h th li h t-int nsity lo l m xim t t . T h o otst sto sth m xim um (i. . li h tsou ) th tis los to th nt pix l y om m n in th m oto s ith si n lp opo tion lto th pix l ist n t n th li h t n th nt pix l. T h ism oto om m n is to th n t o k output p o u in sm ooth t nsition om o st l voi n h vio to li h t ollo in h vio . Th l n sim ul t o ots pl on s u n on h i h 3li h t sou so “ uoy s st o m in ti n l ( u 7). T h li h t uoy sli t th h i h t o th tin n th o l y svisi l x pt i h i n y noth uoy. T h h p ill m ov to sth uoy th t li s h n on it h sit th snso sm k ittu n y om th li h t. o v th tin still ti sto st th o otto sth uoy n th om in tion o

Vision Sensors on the Webots Simulator

271

ObstacleAvoidanceBehavior Bias

IR 1 IR 2 IR 3 IR 4 IR 5 IR 6 IR 7

Motor L

IR 8 Motor R

EDI LOCAL

E

MAXIMA

Light Seeking Behavior

Figure6. T h

oth om m ly in h only on li n l ssly

n u l n t o k us in th

n sm k sth o ottu n oun th uoy until its s n uoy . t illth usm ov om uoy to uoy sin tt . n s th is h t uoy on th n th o ot ill st to sit n th n tu n oun it.

Figure7. T h

Th h n om in y t o otn on th

tt xp im nt

ontol l o ith m o uoy is n ount tion o t o sim pl n o s v th m vi t om uoy to ln t n th

l( ) n sim ul t

( )

h p

tt .

snot ont in s iption o th st psto tk n no nition o h t uoy is. tis sim pl h vio s o st l voi n n li h t-s k in n n o n un o s n h vio th tm k sth uoy.T h m n o th isn h vio p n s l o ith m p m t s i. . th n t o k i h ts.

272

Yuri L´ opez de Meneses and Olivier Michel

n th sim ul t p t o th xp im nt th li h t uoy s sim ul t s y lin s ith i h to j ton top o th m ( u 7 ). T h s m sou o p o m is o nlo on th sim ul t h p n th s m h vio n o s v th h p m ov s om uoy to uoy ith out unnin into th m n tu ns oun th li h t uoy h n th isonly on on th n . Th no m i tu t sim ul tion isv li t y th t th t not sinl p m t (i. . th -n t o k i h ts n th n l -to-sp onst nto th li h t-s k in h vio ) o th l ontoll h to h n . o v sim ul t h vio los to th l on iso t in i th -n t o k in is ou l . T h iso u s us th l uoy s ov ith - fl tin lm to llo th o otto sns th m o m h i h ist n sin th y lin i sh p n th m t i l om h i h th uoy s m p o u sm ll osss tion. T h tth t listi h vio iso t in h n fl tin inst fl tsu s m sto suppo tth ish y poth sis. in - fl tin o tin s not sim ul t in th otssim ul to th snso snsitivity o th n to k uoy s tth in h v to ou l to llo th sim ul t o otto t tth sm ist n sin th l xp im nt.T h is vi n in i t sth tth ots sim ul to m o ls listi lly th -snso sponss.

5

Conclusion

Th p p s i sth m o llin o vision snso ith in th m o th otsm o il - o oti ssim ul to . t h s n iv n to m o lth ph y si l p o sssun ly in th snso to o t in listi snso in . T h snso p snt in th isp p is v y p ti ul on sin th ti i l tin is lin p no m i (i. . sph i ) snso ith si n l p o ssin p iliti s. o v th s m p in ipl s n ppli to sim ul t oth vision snso s n st n m s. T h h p s om m i lly v il l 213 vision tu tis u ntly in v lop in th otssim ul to lon th s m lin . Ou s on on n in v lopin th sim ul t tin n tu t sso tom p ti ility. s sim ul to s s tool o th tim - onsum in v lopm nto om pl x l o ith m s n o sh in t n sults m on i nt s h t m s. n oth ssitish i h ly si l to l to t stth l oith m on l o ot n to sily o so th o sh oul om p ti l . o ( )h s n n th otssim ul to n ppli tion p o m int llo in to int th l n sim ul t h p o ot n itstu ts ith th s m sou o . sim pl xp im nth s n us to v li otssim ul to . m onst tion p o m th l h p o oth s n t st on th p o u th s m h vio ith outh vin to

t th sim ul t th p viously otssim ul to . h n sin l p

n th n un on l to m t o th

Vision Sensors on the Webots Simulator

o ot ontoll . m o th o ou h v li tion h vio on l n sim ul t o ot.

oul

ui

volvin th

273

sm

Th otssim ul to n th no m i tu t u ntly in us l n m k -n vi tion xp im nt. T h sim ul to isus to p ti lly t in o ot n v ntu lly th volv ontoll ill t ns to th l h p o n l sh o t l nin st . T h su sso su h xp im nt ill u th p ov th u y o th sim ul to .

in th

Acknowledgements ish to th nk i on n lo V L i uits.

ni`

n Olivi

L n olt o th

h lp ul isussions

References [1] Valentino Braitenberg. Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, MA, 1984. [2] L.M. Gambardella and C. Versino. Robot Motion Planning Integrating Planning Strategies and Learning Methods. In Proceedings of 2nd International Conference on AI Planning Systems, 1994. [3] M. Mataric. Designing and Understanding Adaptive Group Behavior. Adaptive Behavior, 4(1):51–80, 1995. [4] O. Michel. Webots, an Open Mobile-robots Simulator for Research and Education. http://www.cyberbotics.com/. [5] Olivier Michel. Khepera API Reference Manual. LAMI-EPFL, Lausanne, Switzerland, 1998. [6] J.R. Mill´ an. Reinforcement Learning of Goal-directed Obstacle-avoiding Reaction Strategies in an Autonomous Mobile Robot. Robotics and Autonomous Systems, (15):275–299, 1995. [7] F. Mondada, E. Franzi, and P. Ienne. Mobile Robot Miniaturization: A Tool for Investigation in Control Algorithms. In T. Yoshikawa and F. Miyazaki, editors, Proceedings of the Third International Symposium on Experimental Robotics 1993, pages 501–513. Springer Verlag, 1994. [8] Nomadic Technologies, Inc. Nomad User’s Manual, 1996. [9] P. Toombs. Reinforcement Learning of Visually Guided Spatial Goal Directed Movement. PhD thesis, Psychology Department, University of Stirling., 1997. [10] C. Versino and L.M. Gambardella. Ibots. Learning Real Team Solutions. In Proceedings of the 1996 Workshop on Learning, Interaction and Organizations in Multiagent Environments, 1996. [11] E. Vittoz. Pseudo-resistive Networks and Their Applications to Analog Computation. In Proceedings of 6th Conference on Microelectronics for Neural Networks and Fuzzy Systems, Dresden, Germany, September 1997.

Grounding Virtual Worlds in Reality Guillaume Hutzler, Bernard Gortais, Alexis Drogoul LIP6 - OASIS/MIRIAD, Case 169, UPMC, 4, Place Jussieu, 75252 Paris Cedex 05 France {Guillaume, Bernard, Alexis}.{Hutzler, Gortais, Drogoul}@lip6.fr

Abstract We suggest in this article a new paradigm for the representation of data, which is best suited for the real-time visualization and sonorisation of complex systems, real or simulated. The basic idea lies in the use of the garden metaphor to represent the dynamic evolution of interacting and organizing entities. In this proposal, multiagent systems are used to map between given complex systems and their garden-like representation, which we call Data Gardens (DG). Once a satisfying mapping has been chosen, the evolution of these Data Gardens is then driven by the real-time arrival of data from the system to represent and by the endogenous reaction of the multiagent system, immersing the user within a visual and sonorous atmosphere from which he can gain an intuitive understanding of the system, without even focusing his attention on it. This can be applied to give life to virtual worlds by grounding them in reality using real world data.

1 Introduction Let's imagine a virtual garden whose visual and sonorous aspects continuously change to reflect the passing of time and the evolution of weather conditions in a distant place. Looking at it or simply listening to its musical rhythm will make you feel just as if you where there, looking at your garden through the window. ‘It's raining cats and dogs. Better stay home!’ Connected to real meteorological data, it really functions as a virtual window, opened on a distant reality. This is what the computer-art project called The Garden of Chances (GoC to make it short) [11] is all about. Beyond its artistic interest, we believe it to have very important implications for the representation of complex systems by means of visual and sonorous metaphors. Keeping a close watch on meteorological data in order to secure airplanes landings, monitoring the physical condition of a patient during surgical operations, observing Stock Market fluctuations so as to determine the best options to chose, are three examples of situations where decisions are subjected to the real-time understanding of complex systems, respectively physical, biological, and social or economical. Those representation and interpretation issues are transposable for artificial complex systems such as multiagent systems, for which adequate real-time representation may provide insight into the inner mechanisms of the system at the agent level, or topsight [10] over the functioning of the system as a whole. Visualization in Scientific Computing (ViSC) has proven very efficient to represent huge sets of data, by the use of statistical techniques to synthesize and class data in a hierarchical way, and extract relevant J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 274-285, 1998 © Springer-Verlag Berlin Heidelberg 1998

Grounding Virtual Worlds in Reality

275

attributes from those sets, before presenting them to the user (Fig. 1). But it has not been so successful when dealing with distributed and dynamic systems since it is based, among other things, on a delayed treatment of the data. The basic proposal is to consider any complex system one wish to represent as a metaphorical garden, the evolution of which reflects in real-time the evolution of the system. In this paradigm, the measures made on the system are not only stored, waiting for a further statistical processing, but they are also immediately transmitted to a Data Garden, a virtual ecosystem with the same global dynamics as the system to represent but with a stronger visual and sonorous appeal (Fig. 1). Indeed, the garden metaphor has the interesting property to be both very complex in its functioning, and still completely familiar to anybody, enabling a very fast and intuitive perception. Moreover, it doesn't require a sustained attention, since it relies for the most part on peripheral perception mechanisms, following the same principles as those that make us perceive weather conditions effortlessly. Finally, the Data Garden paradigm doesn't reduce the complexity of the system to represent but transform this complexity to integrate it into a meaningful environment, creating a visual and sonorous ambient atmosphere from which to gain a continuous understanding of the studied system. In section 2, we present the concepts of scientific visualization and we explain why they fail to satisfy the needs of complex systems representation. By contrast, we present in section 3 The Garden of Chances, an artistic project which succeeds in mapping numerical meteorological data in an abstract, yet meaningful, representation of the weather. We finally extend the principles developed with this project in section 4, explaining the characteristics that Data Gardens should share in order to prove meaningful for complex systems representation, before concluding.

2 Scientific vs. Artistic Visualization and Complex Systems Scientific visualization on the one hand is based on quantitative information display [22], visually in most cases but also using different modalities [4]. Fig. 1 shows the classical iteration cycle of scientific visualization whereby experiments are undertaken, during which data are collected. Only afterwards are the data analyzed and visualized, which allows to draw conclusions about the experiment and design complementary experiments. The cycle then iterates. This is well fitted for a great number of applications but doesn't qualify for the representation of complex, dynamic and distributed phenomena. Painting on the other hand, considered as a system of colored graphical elements is inherently distributed and based on the organization of those elements. According to Kandinsky, “analysis reveals that there is a construction principle which is used by nature as well as by art: the different parts become alive in the whole. Put differently, the construction is indeed an organization” [15]. Furthermore, painters try, and sometimes succeed, in transmitting complex perceptive and emotional experiences to their spectators. They so establish what J. Wagensberg [23] calls a “communication of unintelligible complexities”, complexities that language and numbers cannot express since they cannot be formalized. We're now going to analyze in further details the reasons why scientific visualization concepts appear inadequate to us for the representation of dynamic and complex data. It appears in the classical taxonomy that a processing of the data is necessary in order

276

Guillaume Hutzler et al.

to extract from huge sets of data, a restricted number of attributes that best synthesize the nature of the data. To this purpose, a great number of statistical techniques and data analysis are available that we won't detail here [3]. The results are then presented using a number of standard representations such as histograms, pie or time charts and so on. New presentation models [19] are also developed that make the visualization easier by focusing on some specific aspects of the data depending on the context. An alternative to this general scheme is when the phenomenon has physical reality and may be visualized directly or using appropriate color scales. Physical numerical simulations make a large use of this techniques, and medical visualization is a rapidly expanding domain that also exploits the same principles. Numerical Simulation

Data Acquisition

(Pre)Processing Analysis of data

Real Simulated Complex Systems Complex Systems Data ..... D.A. Acquisition D.A.

Data Garden

Data Attributes Visual Presentation

Visual/Sonorous Presentation

Interactive User Interface

Interactive User Interface

Scientist/ User

Scientist/ User

Fig. 1. Classical (partial view from [6]) vs. Data Gardens' visualization taxonomy

We propose to handle, with artistic visualization, phenomena which are both distributed and with spatial and temporal dynamicity. Our hypothesis, based on the analysis of scientific visualization techniques, is that such complex systems cannot be well represented using purely quantitative and objective means. This may be explained by the fact that we have an almost qualitative and subjective experience of such complex phenomena as meteorology, biological ecosystems, social groups, etc. Most of the knowledge that we have about those systems is derived from our everyday-life perceptions, which give us an intuitive grasp about such systems but which we don't know how to transcribe into numbers.

3 The Garden of Chances We have explored with the computer-art The Garden of Chances an artistic alternative that is useful in making qualitative aspects visually or sonorously sensible in the

Grounding Virtual Worlds in Reality

277

representation of complex systems. Furthermore, the distributed aspect of complex systems is integrated as the basis of the functioning of the project and we think it qualifies as the first step in the representation of complex systems by means of colored and sonorous metaphors. 3.1 The Artistic Paradigm The philosophy underlying this artistic work is to let the automatic generation of images be directed by a real time incoming of real world data. This has led to the development of a first computer artwork called Quel temps fait-il au Caplan? (What's the weather like in Caplan?). In this project, weather data coming in hourly from MétéoFrance stations were used to suggest the climatic atmosphere of a given spot (actually a small place in Britain) by means of color variations inside an almost fixed abstract image. To put it naively, rather warm tints were used when the temperature was high, dark tints when clouds appeared to be numerous, etc. In addition to meteorological parameters, the system also took astronomical ones (season and time of the day) into account, which eventually allowed very subtle variations. When functioning continuously all year long, the animation makes the computer screen become a kind of virtual window, giving access to a very strange world, both real and poetic.

User Interface

Configuration files Script language interpreter Weather data acquisition

Graphical and audio interface Multiagent simulation kernel

Fig. 2. The Garden of Chances The GoC (Fig. 2) is basically designed with the same principles, namely using real data for the creation of mixed worlds, imaginary landscapes anchored in real world. In addition to colors modulations, the weather data are used to give life to a set of two-

278

Guillaume Hutzler et al.

dimensional shapes, so as to create a metaphorical representation of a real garden. Thus, each graphical creature is able to grow up like a plant, benefiting from the presence of light and rain, competing against similar or other hostile shapes, reproducing and dying like any living creature. By so doing, the goal is definitely not to produce accurate simulations of natural ecosystems nor realistic pictures of vegetation. The focus is rather put on enabling the artist to experiment with lots of different abstract worlds until he obtains some imaginary ecosystem fitting his aesthetic sensitivity. The graphical space doesn't have the passiveness of coordinate systems anymore; we rather consider it as an active principle giving birth to worlds, as the raw material from which everything is created. 3.2 The Multiagent System In agreement with artistic requirements, the system has been implemented as a programmable platform, allowing the artist to undertake a true artistic research. Capitalizing on our experience with biological simulation systems [7], we designed it as a genuine vegetal simulation platform, supplying growth, reproduction, and interaction mechanisms similar to those observed in plants. Indeed, we believe the difference between metaphorical and simulated ecosystems only resides in the perspective adopted during the experimentation process. agents

environnement Data retrieval agent

parameters

parameters parameters parameters parameters

behaviors

behaviors behaviors behaviors behaviors

Scheduler

Fig. 3. The multi-agent simulation system The core of the platform is a multiagent simulation system (see Fig. 3), representing plants as very simple reactive agents evolving in a simulated environment. Both the agents and the environment are characterized by sets of parameters that define their characteristics at any given time. The activity of the agents is defined as a number of behaviors which are programmable using a little scripting language. Those behaviors are handled by a scheduler which activates them whenever needed, either periodically

Grounding Virtual Worlds in Reality

279

(each n simulation cycles) or upon reception of some particular events (those are related most of the time to changes in one or several parameters). Finally, agents will be represented on the screen by colored shapes, which won't have necessarily something to do with plants but may be freely designed by the artist. A given still image will thus be close to his painting work, while the dynamics of the whole system will more closely rely on the artificial side of the project, i.e. the simulation of natural processes of vegetal growth. 3.3 Agents and Environment Parameters constitute the basis for the representation of both agents and the environment. Actually, six types of parameters have been defined in order to describe the simulated world and the incoming data flow. Agents are characterized by reserved, internal and external parameters as shown in Fig. 4. Reserved parameters are common for all agents whereas internal and external parameters may be defined specifically for each agent. Reserved parameters include information about age, speed, color, size, etc. Internal parameters describe the resources of the agent (water, glucose, etc. with the vegetal metaphor, or any other quantifiable resource). By contrast, external parameters represent any substance or information that the agent may propagate around him (chemical substances that plants release in the soil or the atmosphere, signals, etc.).

patch local parameters Agent reserved parameters

discrete environment

internal parameters external parameters

continuous environment

data parameters global parameters

Fig. 4. Agent's parameters

Fig. 5. Environment's parameters

The environment is characterized by local and global parameters as shown in Fig. 5. Local parameters correspond to variables whose value and evolution can be defined in a local way, i.e. for each square of the grid covering the environment (substances present in the soil, water or mineral materials for example). On the contrary, global parameters represent variables which have a nearly uniform action on the whole

280

Guillaume Hutzler et al.

environment (meteorological variables, real world data, etc.). Finally, each data variable within the incoming data flow is integrated as a data parameter. 3.4

Behaviors

A little scripting language based on Tcl has been developed in order to make the programming of behaviors easier to handle from a user's point of view. In this language, one may think about agents' and environment's parameters as local and global variables, and behaviors may be thought as procedures, either with local effects when associated to agents or with global effects when associated to the environment. Within behaviors, there are only four basic actions which can be combined together using conditional statements (if ... then ... else) and iteration loops (for ..., while ..., repeat ... until). These four actions are the modification of a parameter, the death or reproduction of the agent, and the playing of a sound file. For example, if one assumes that sun-shining conditions in a distant location are available in real-time in the dataflow and we wish to make a flower grow according to this parameter, one would have to express that some flower gains energy through photosynthesis if the sun is shining, loosing a part of this energy continuously due to its metabolism, and dying if it doesn't have any energy left. The programmed behavior would simply look like the following: // parameters declaration data_param sun_shining 3 internal_param flower energy 50 0 100 // photosynthesis behavior declaration behavior flower photosynthesis 1 // energy parameter modification param_set energy {$energy - 0.01 + $sun_shining * 0.1} // death if not enough energy goc_if {$energy == 0} death sound flower_death.au end_if end_behavior This approach allows to describe how some parameters evolve (slow decrease of the energy of an agent to sustain its metabolism, etc.), to specify the interactions between the agents (chemical aggressions, etc.), between an agent and its environment (water drawn from the soil to feed a plant, etc.), and even between different levels of the environment (supply of underground waters by the rain, etc.). Behaviors are triggered by a scheduler, either with a fixed periodicity or whenever a given event occurs. An event is simply defined as a special kind of behavior which periodically evaluates a condition and positions the corresponding flag of the scheduler accordingly. In agreement with the vegetal model, different behaviors of a single agent may be triggered concurrently during a single timestep. A simulated plant

Grounding Virtual Worlds in Reality

281

can thus simultaneously execute different operations such as drawing water from the soil, realizing photosynthesis, growing, releasing chemical substances, etc. 3.4 So What? Set aside aesthetic and artistic discussions, the scientific evaluation of the quality of such a system proposing a metaphorical visualization of real data, would have to be done under two complementary points of view :  what can the spectator say about the data that were used to generate the pictures ? In other words, does the representation used for visualizing the data make sense for the spectator ?  what can the spectator say about the dynamics of the system that produced those data ? Does the representation succeeds in producing topsight ? Since the GoC has originally been developed in an artistic perspective, experimental protocols have yet to be designed in order to scientifically address these issues. Our personal experience with the system is that meteorological variations are easily detected, but the GoC would surely prove not so well adapted for the visualization of any kind of data, especially when complex systems with no physical reality are concerned. We feel however that this artistic approach to both complex systems and data visualization may provide us with new paradigms for the visualization of complex data. Without entering the details of painting creation, it should be underlined that painters are used to handling the problem of organizing distributed colored shapes in order to produce a given global effect. They are also used to evaluating and interpreting the produced result. Actually, they are used to communicating complex and subjective information by means of a visually distributed representation, and this expertise will be valuable in further development and experimentation steps. We're now going to formalize these intuitions to show how they should be integrated in a single effort to make complex interacting data accessible to direct visual perception, in what we have called Data Gardens.

4 Towards Data Gardens The expression Data Gardens was coined in analogy with the artistic project and its garden metaphor. In fact, Data Ecosystems may be more appropriate, indicating any community of living and interacting organisms with graphical and/or sonic representation, whose individual survival is subjected to the real-time incoming of data to which they react, and to their interactions with other entities. Still, Data Gardens is strongly evocative and we will only use this expression in the remaining sections. Data Gardens are an alternative proposal to standard schemes of data visualization which we think would be most efficient when applied to the representation of complex systems for purposes of diagnostic or monitoring. We explained in section 2 why we thought those standard schemes to be inadequate in the context of complex systems,

282

Guillaume Hutzler et al.

and we presented with The Garden of Chances the outline of a possible alternative. So what are the basic features that Data Gardens should integrate in order to fulfill the requirement of adequately representing complex systems, that is providing the user with a global understanding of the functioning of the system. And how could this be achieved ? 

As a first thing, DG should rely on a dynamic representation, necessary get a perceptive sensation of a system's dynamics and to easily detect discontinuities. This representation should be both visual and sonorous, vision being best fitted to the perception of spatial dynamics through a parallel treatment of information while audition is more adapted to the perception of temporal dynamics through sequential treatment of information. The aim is to limit the use of high-level cognitive processes, trying to take advantage of the “peripheral”, rather than active, perception capabilities of the user (many experiments in augmented reality, like in the MIT MediaLab's Ambient Room [12], rely on this approach). The graphical and sonorous aspects of The Garden of Chances have been designed so that the user can interpret the painting in the same way he could interpret his daily environment for extracting critical information. Animation on the other hand is an essential part of the GoC, and it carries out two different kinds of information via two different dynamics: a slow, homogeneous dynamics, intended to reorganize the whole environment during a long period of time, is used to represent the seasons. Within this dynamics, a few punctual and rapid animation of some agents or sets of agents are used to represent important short-term fluctuations of the data values.



When representing complex systems, complexity shouldn't be reduced a priori with synthetic indices and means, but should be directly integrated in the representation system. Therefore, Data Gardens should be based on a multiagent modeling (accurate or metaphoric) of the system to represent. But because complex data are generally just too complex to represent directly, distributed principles for organizing and synthesizing the represented data must be integrated, that decrease the perceptual load of the user without significantly altering the meaning of the data. In Data Gardens, incoming data influence both the activity and the evolution of the agents. The synthesis is then realized at two levels : in the individual evolution of each agent, and in the mutual interactions they engage in. The development of an agent, which is graphically translated by a modification of its shape or color, can be the consequence of the variations of different data, like the evolution of a plant within an ecosystem.



The representation must be metaphorical, that is, use cognitive categories aimed at being easily interpreted and managed by the user. This is a way of decreasing the complexity, mapping abstract data into a meaningful representation from which one can get instantaneous understanding.

Grounding Virtual Worlds in Reality

283

In that respect, one of the most interesting aspects of Data Gardens is the “garden” or “ecosystem” metaphor, already developed in the GoC. Analyzed at the light of the first two points we developed, the garden metaphor has the interesting property to have natural significance to anybody, while being very complex in its functioning. This functioning relies on the interactions of three types of complex systems: physical (the weather), biological (vegetal and animal) and social (social animals such as ants). This results in various audiovisual dynamics which look very familiar, from the slow evolution of vegetal landscapes to the fast interactions of animal life, and continuously changing meteorological ambiances. 

The representation should be programmable, at a user's level. The user must be able to express subjective choices about the representation, either to make it more significant with regards to data visualization, or to make it more aesthetically pleasing. Perception is a mostly individual and subjective feature of human cognitive functioning. The representation should therefore fit the user's own subjective conceptions of how the data are best visualized. Moreover, different types of representation of the same data may reveal different aspects of this data. In the GoC, the whole simulation system is programmable allowing the user to filter the input data, define the behaviors of the agents to make them react to the data, and choose their graphical representation. In the process, the user will need to be guided by the system, which should propose sets of dynamics and graphical vocabularies. Our collaboration with a painter in the GoC project has proven very useful for that particular matter, and we intend to cooperate more closely with various artists in order to define basic instances of Data Gardens.



The representation should be interactive to allow dynamic exploration and perturbation of the multiagent system. Once a general framework has been found efficient, it must be made possible for the user to refine the representation acting as a kind of gardener by adding, removing, moving, changing agents on the fly. The user should also be able to visualize the system from different points of view. If the default functioning of Data Gardens exploits peripheral perception, the user may also choose to focus his attention on a specific part of the representation, getting then more detailed information about this part. Finally, the user should be integrated as a particular agent of the system, with extended capabilities, that would enable him to experiment the reactions of the system when perturbed. In the GoC, the basis for these interactions is set. The user can make agents reproduce, die or move, and one can view or change the parameters of agents or of the environment. When associating specific behaviors to a “perturbing agent” and when moving it around with the mouse, one can also visualize how the system may spontaneously reorganize.



The system should finally be evolutive, taking into account the interactions of the user in order to learn something of his tastes and evolve accordingly. The

284

Guillaume Hutzler et al.

reason is that a given user will most of the time be unable to express how he would like the data to be represented in the formalism of the system. But if he can't formalize it, he can point out aspects of the representation that he finds pleasing or some others that he dislikes, just as in the gardener metaphor. This last point is mostly prospective for now, since nothing like it exists in the GoC yet. As a result, Data Gardens should be designed as “meaning operators” between the flow of data and the user, who is supposed to identify and follow, in the evolution of the system, that of the outside world. However, it must be clear that they are not intended to replace the existing environments used to track and trace data (histograms, textual presentation, curves, etc.), which are still the only way to know the precise value of a variable. They are to be viewed as complementary tools that allow an instantaneous and natural perception of complex situations and propose a global perspective on them. The Garden of Chances is the first of such systems we built and it should now be studied in a systematic way, and with an experimental perspective, in order to develop operational design and evaluation methodologies.

5 Conclusion

With Data Gardens, we propose hybrid environments that graphically represent information gathered in the real world for users likely to take decisions in this real world. The goal is to let a complex and dynamic system of numeric data become visually intelligible without capturing the whole attention of the user. We explained how the application of DAI concepts, along with real-time visualization, makes it possible to compensate for some of the deficiencies of more classical approaches. In particular, it makes it possible to handle dependent data without reducing the complexity of the data a priori, but only as the result of a dynamic hierarchisation and synthesis of the different pieces of data through the organization of a multiagent system. Furthermore, the representation is evolutive and adaptive, both because of its dedicated endogenous evolutionary mechanisms and because of the user's actions. With the Garden of Chances, we presented both the artistic work that initiated the reflection on Data Gardens and the first concrete application of this paradigm. Data Gardens are obviously not limited to particular data or types of graphical representation, although each application domain requires that the representation be adapted to its specific culture and representation habits. Further work will focus on systematizing this adaptation process, through the constitution of libraries proposing various dynamics, based on previous work in DAI, and various graphical and sound schemes, based on collaboration with artists. The aim is ultimately to be able to interpret complex and interacting systems of data almost as naturally as one can perceive subtle meteorological variations, which could be of fundamental importance for the interpretation of multiagent systems themselves. 'Mmm. Looks like the rain has stopped. It would be perfect for a walk, but it's getting cold outside and the wind has turned east. Better stay home!'



Growing Virtual Communities in 3D Meeting Spaces

Frédéric Kaplan 1,2, Angus McIntyre 1, Chisato Numaoka 1 and Silvère Tajan 3

1 Sony CSL Paris, 6 rue Amyot, 75005 Paris, France
2 LIP6 - OASIS - UPMC - 4, place Jussieu, F-75252 Paris
3 Consultant - 100bis rue Bobillot, 75013 Paris, France
{kaplan,angus,chisato}@csl.sony.fr, [email protected]

Abstract. Most existing 3D virtual worlds are based on a "meeting-space" model, conceived as an extension of textual chat systems. An attractive 3D interface is assumed to be sufficient to encourage the emergence of a community. We look at some alternative models of virtual communities, and explore the issue of promoting community formation in a 3D meeting space. We identify some essential or desirable features needed for community formation - Identity, Expression, Building, Persistence and Focus of Interest. For each, we discuss how the requirements are met in existing text-based and 3D environments, and then show how they might be met in future 3D virtual world systems.

1 Introduction

The history of the Internet is one of adaptation. Tools and technologies conceived for serious purposes - electronic mail, Usenet, the World Wide Web - have been swiftly seized upon by ordinary users and used to exchange personal information, or information related to personal interests. As much as a carrier of scientific information, the Internet is a channel for social interactions.

This new understanding of the Internet as a social medium constitutes a basic assumption for many developers of browsers for 3D virtual worlds. Environments such as Blaxxun Interactive's Community, Cryo's Deuxième Monde, OnLive Technologies' Traveller and Sony's Community Place are all based on a similar model: a more or less realistic visual world in which people meet to socialise. While the browsers also lend themselves to other uses, the showcase applications are typically based on this idea of virtual worlds as social spaces; homes for virtual communities that will emerge and exist in the context of the virtual world.

We intend to explore two main issues. First, we propose a basic classification of some different types of Internet-based communities. This will lead us to ask some basic questions about the ways that 3D virtual world systems are used, and whether those uses really represent the most appropriate ones for such systems. The second section of the paper will suggest some requirements for systems designed to support virtual communities. We will look at the way these requirements are met in existing 3D virtual world systems, at features of earlier systems that might be adapted for use in 3D virtual worlds, and at unique features of 3D virtual worlds that could be exploited to capitalise on the strengths of the medium.

2 Communities on the Internet

The success of the Internet in fostering virtual communities, in a world in which real communities are becoming increasingly fragmented and unstable, has not gone unnoticed. As Bob Rockwell of Blaxxun puts it: "Community is being seen as the long-sought 'killer app' that will turn the Internet into a profitable resource." The desire to create (and exploit) 'community' appears to underlie many of today's 3D virtual world systems. The common characteristics of such systems can be briefly stated as follows: the 3D world is designed as a virtual meeting place, reminiscent in some important ways of our own world. Users connect, interact, and a community emerges.

The promotion of 3D virtual world systems for this use appears to be motivated by the assumption that the 'familiarity' of the world, with its physical spaces and embodied avatars, will make it more accessible and intimate than more abstract environments. There are a number of assumptions here that seem to require further examination; one important one is that we can hope to make the virtual world enough like the real world that familiarity factors will work in our favour (and that the overhead of the simulation won't work against us by making the tool too slow or awkward to use). Another is that interaction in an arbitrary context is by itself sufficient to create a community, something which is not necessarily the case. More subtle is the assumption that this is the only or the most appropriate model for community formation, and the best model for the application of 3D virtual world technologies. In the remainder of this section we will examine some alternative models (shown schematically in Figure 1) and discuss their implications for 3D virtual worlds.

2.1 Shared Interest Communities

A common type of Internet community is based on the exchange of information about a shared topic of common interest. Such a community often already exists in potential form. The worldwide population of Jimi Hendrix fans, for instance, is a potential net community. The Internet provides a way for the members to communicate, and for the community itself - with its roles, conventions and communication patterns - to come into being.

Communities of this kind are likely to be multi-modal, by which we mean that they employ a variety of different communication channels. Favoured 'core modes' include mailing lists, Usenet newsgroups, and topic-specific Internet Relay Chat (IRC) channels. As the community becomes established, members exploit other channels to extend the activity of the community.

Fig. 1. Community types, ranged from multi-modal to single-tool: shared interest communities (WWW, Usenet, mailing lists, topic-specific IRC channels), meeting-space communities (IRC, MOOs) and role-adopting communities (MUDs, themed MOOs)

The community based around the network game Quake, for example, has its own Web sites, IRC channels and Usenet newsgroups. Members of a graphics software mailing list show off their work on their Web sites and exchange techniques over chat (an IRC equivalent). Even users of community-focused chat systems may use alternative channels; MOO (Multi-User Dimension Object-Oriented) users exchange private email, and users of several virtual world systems use Web rings to provide access to their worlds and related materials.

In a sense, multi-modal shared interest communities are perhaps the final state of any Internet community; any successful community will ultimately tend towards this type. In passing, it is worth asking whether 3D virtual worlds might not have a role to play as an 'extra mode' in multi-modal communities. 3D worlds are typically presented as a main channel for interaction in a community. They might equally well act as a supplementary channel, much as Web sites are today. This possibility is at present relatively little explored or supported.

2.2 Role-Adopting Communities

A second kind of community might be termed a 'role-adopting' community. Such communities are typically based (at least initially) around a single mode of interaction and a strong 'theme', which typically takes the form of a story or imaginary context. Participants assume roles appropriate to the context, and take care to 'act in character'. This is a kind of 'shared activity' community, but with the difference that the shared activity is embedded in the medium of interaction. Users are not using a channel (email) to talk about a common interest (Jimi Hendrix) which exists independently of that channel; they are engaging in an interaction whose form is determined by the channel. In this category we include both online adventure games (i.e. MUDs) and non-gaming environments (i.e. themed social MOOs); the characteristic features are the importance of a strongly-defined context, and of adopting and maintaining a distinctive role or character.

2.3 Meeting-Space Communities

Historically, role-adopting communities have often evolved towards a third kind of environment, in which the context or theme is largely stripped away. In such 'meeting space' communities there are no defined topics of conversation, and the environment (if any) serves a primarily decorative purpose (although the conventions of the community may make it impolite to ignore the context too blatantly). For such a community to emerge and flourish there must be a large number of regular repeat visitors who engage in extended interactions. With no specific shared interest to draw in and retain participants, the social interactions themselves need to act as the 'glue' that binds the community together. Communities of this kind are commonly seen in MOOs, IRC and similar chat systems. They are also the primary model for 3D virtual world systems.

The fact that so many 3D virtual world systems appear to be aiming to support this kind of community is interesting. On the face of it, one of the greatest strengths of a 3D virtual world system is its ability to create a rich environment, yet the 'meeting space' model largely ignores the environment. Building a community 'in vacuo' is a much more challenging task than creating one in the context of a shared activity or interest. Moreover, because social interactions are fundamental to community formation, the tools used must make interactions as fluid and expressive as possible. The machinery needed to render a 3D world is computationally expensive, resulting in sluggish performance on all but the most powerful hardware, while the chat window in which the interaction takes place is typically shrunk to a minimum. By contrast, an IRC or MOO client can run on almost any machine, and there is nothing to take away screen space from the text window in which the interaction occurs.

These observations raise the following questions: is the 'meeting place' model of community necessarily the most appropriate one for exploiting the potential of 3D virtual worlds? If we adopt this model, what can the 3D world contribute in terms of improved interaction quality which can justify the extra cost of the client? Are there other techniques we could also use to promote community building in our virtual worlds?

In the next section we will try to look at ways in which we might make 3D virtual world environments more effective in promoting community. We propose five requirements that we see as essential or desirable for community formation. For each of these we offer a justification, and then examine how the requirement is met in existing systems, both 3D and 'traditional'. We then suggest how future 3D systems might meet the requirement, drawing on both 3D-specific and more general techniques.

3 Promoting Community in 'Meeting Spaces'

Community formation is a complex business. Many of the factors that lead to the formation of a successful community are intangible and cannot easily be engineered. Such factors include the occurrence of formative events (see [7]) and the commitment or dynamism of individual members ([9] speaks of 'keystone players'). Whether a community flourishes or stagnates will depend on the type of interactions that take place within it. There are, however, some factors that may be amenable to control. [12] presents a useful list of requirements, focusing chiefly on the computational features of the supporting environment. Here we present our own list of rather more abstract requirements, although there will often be a degree of overlap. Briefly, we believe that community formation requires that the environment has some, if not all, of the following properties: users must have the means to create a distinctive identity, the means to use non-verbal forms of expression, and the means to build spaces and objects. The world itself should be evolving and persistent, and ideally there should also be a common activity or focus of interest. These requirements are summarized schematically in Figure 2.

Fig. 2. Five requirements for community formation: identity, expression, building, persistence, shared interest

3.1 Identity

One of the first features that we see as necessary for the formation of a community is a mechanism for members to create and maintain an identity (not necessarily the same as their real-world identity, but nonetheless distinctive and recognisable). It has been argued that identity is fundamental to establishing the relationships necessary to conducting business and regulating group behaviour. An inability to present ourselves as individuals will directly impact our ability to form social relations.

Identity is multi-faceted. Who we are is defined by our relations with others, by what we say and do, by the way we present ourselves, and by the things that we own and control. Our relations with others are typically defined in ways that are independent of the environment. The environment may contribute critically, however, in the facilities it provides for users to present themselves. It is this issue we will consider here.


Identity in Other Channels. A striking feature of many environments supporting virtual communities is the limited means they offer for conveying identity. In electronic mail, Usenet or Internet Relay Chat (IRC), participants are judged almost exclusively on what they write. A premium is placed on qualities such as wit, articulacy, originality and familiarity with shared conventions. Beyond their writing, users can convey identity only through their 'handle' (IRC) or mail address (mail or news, not always under the user's control) and use of signatures (mail or news). Surprisingly, some of the most coherent and colourful communities seem to emerge in precisely such limited environments. Restrictions may even encourage creativity and assertion of identity.

Richer facilities are offered by augmented text-based environments such as MUDs, where users may provide descriptions of themselves, ostensibly physical but often used to convey something about the owner's actual or desired personality (see [6]). Someone who describes themselves as "Short, fair-haired, slightly chubby, wearing jeans and a denim jacket." is communicating something rather different from someone who describes themselves as "A nervous-looking three-legged centaur wearing two Cecil the Seasick Sea-Serpent puppets on its hands."

Graphical environments offer different possibilities. In 2D visual environments such as The Palace, users can select graphics to represent themselves, much as MUD users select textual descriptions. While a picture is not necessarily worth a thousand words, the use of graphics to convey character (and, as we shall see later, mood) can be quite subtle and expressive.

Identity in 3D Virtual Worlds. Users in 3D virtual worlds are represented by avatars, graphical representations of their virtual persona. These avatars are typically picked from a set of customisable stock types. Unfortunately, the range of avatars available is often very limited, and the customisation possibilities may be too restricted to communicate anything meaningful about the owner. In many existing 3D environments the limitations of the available avatars hinder the establishment of any real identity.

Improving Identity in 3D Virtual Worlds. To try to make avatars more 'identifiable', there are two possible approaches. One is to borrow from text-based virtual worlds by attaching inspectable textual descriptions to avatars. Attached descriptions can include additional meta-information; several 3D systems support 'hypercards', virtual business cards which can carry names (real or imaginary), mail addresses and Web site URLs (a small sketch of the idea follows at the end of this section). At the other end of the scale are techniques which make use of the unique properties of visual 3D worlds. One possibility is to map the user's real-world likeness to their avatar, as either a still or moving image (e.g. [3]). While this might appear to resolve the issue of identity in the most direct way possible, it is unclear how popular it is likely to be in practice; the blending of the real and the virtual can sometimes damage the illusion, and many users may prefer to present an abstract imagined identity rather than their real-life appearance.


An alternative is to allow users greater freedom in the choice of avatars, ideally by letting them design or adapt their own. This will require some standardization to ensure interoperability, an issue which is being addressed by the Universal Avatars effort of the VRML Consortium. Environments may need to enforce constraints on avatar properties in order to avoid an 'avatar arms race'; the desire for greater self-expression may lead to more complex models, with an inevitable impact on download times and frame rates. Avatar choice may also be subject to aesthetic constraints. [13] correctly predicts that there are likely to be social status issues related to avatar choice, while entering a strongly-themed world with an inappropriate avatar (riding a skateboard and wearing wraparound shades in a medieval castle, for example) is likely to cause offence and resentment.

Ultimately, it seems likely that we will move towards truly personalised avatars, but in the meantime we should not neglect the potential of non-visual channels (i.e. textual descriptions and hypercards), which have the advantage of precision and expressiveness at very low cost.
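As suggested above, a hypercard is essentially a small record of meta-information attached to an avatar. A minimal sketch in Java, with purely illustrative field names (no actual system's format is implied):

    class HyperCard {
        String name;         // real or imaginary
        String email;        // mail address
        String homepage;     // Web site URL
        String description;  // free-text self-description, as in a MUD

        // What another user sees when inspecting the avatar.
        String inspect() {
            return name + " <" + email + ">\n" + homepage + "\n" + description;
        }
    }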

3.2 Expression

Successful social interactions depend on a great deal more than simply exchanging words. In face-to-face conversations we use paralinguistic cues such as tone-of-voice, facial expressions and body language to convey a significant proportion of our message. Electronic environments which deny us the use of these cues can often impose severe constraints on communication. If we can in some way restore these cues, we may be able to create richer and more natural interactions that will encourage social bonding.

Expression in Other Channels. Perhaps the best known form of extra-linguistic cues in online environments are the smileys used originally in electronic mail and news. These conventionalised symbols serve almost the same function as tone of voice, with variation in meaning according to context. The 'happy' smiley :-) may mean that the user is pleased with some state of affairs just referred to, that they're making a joke, or, more rarely, indicate a general state of happiness. Use to signal jokes is widespread, particularly in contexts when failing to perceive an utterance as a joke could cause offence. Text users have other conventional devices available to them, such as use of upper case to SHOUT, asterisks for emphasis, etc. These conventions came about because they were needed; even electronic mail often seems to be closer to conversation than letter-writing, and so substitutes were quickly found for the extra-linguistic markers we use to make conversation fluid and unambiguous.

These devices are also used in IRC and MUDs/MOOs, but such synchronous interactions offer additional possibilities for expression. Facilities in the environment such as the 'emote' command provide ways to describe character actions, which are commonly used to convey mood or attitude. This reaches its most complex in social MUDs or MOOs, where conversation may involve elaborate ritualised use of standard commands and often humorous interaction with objects in the world (see [5]). Graphical systems such as The Palace supplement these devices further by use of visual information. The image used to represent a character can be switched at any time, and this feature is sometimes used to convey mood or attitude. Users may have a library of different images to draw on, and signal their state of mind or feelings about another user by substituting one for another (a female user might present a neutral image to strangers, for instance, and a more recognisably female one to intimates).

Expression in 3D Virtual Worlds. 3D virtual worlds, which typically include a text-chat element, often draw on the same conventions as above. Smileys are widely used and, in the absence of emotes, self-descriptive utterances are sometimes seen. This may be in part because many 3D users have come from a text-based background, but it may also be because the expressive tools provided by the 3D environment are too crude or awkward to use. Explicit support for expression in 3D browsers typically takes the form of push-button controls which cause the user's avatar to gesture or adopt a facial expression. Gestures are 'one-off' movements, while facial expressions may persist until substituted by another.

Expression support in 3D virtual worlds is problematic. Gestures and facial expressions are often exaggerated and do not necessarily map well to different avatar types (if your avatar is a fish, how do you convey surprise or happiness?). Moreover, it appears that users do not easily mix text and graphics. Typing a joke and then clicking the smile button to show that you're not serious does not seem to come naturally. More seriously, the gesture doesn't appear in the written record; if someone reads the interaction later, or misses the gesture, they have no way of knowing you were joking. In such circumstances most users will use a smiley instead.

Improving Expression in 3D Virtual Worlds. 3D virtual worlds should provide us the means to restore some of the body language that we lose in the transition to the electronic environment, but in practice it hasn't worked that way. Avatars in conversation tend to approach each other, orient themselves roughly towards each other (but not always), and remain motionless throughout the interaction. No use is made of movement or distance. OnLive Technologies have found that the body is almost entirely irrelevant, and have reduced their avatars to mere heads. We might ask what, if anything, the 3D virtual world contributes to expressiveness. Real-time video-mapping offers one possibility, allowing users to show facial expressions. Coupled with voice- rather than text-based communication, this would give us the possibility to interact more naturally. But at this point we could start to wonder why we need a virtual world at all. If all we're interested in is speech and facial expressions, why not use video-conferencing?


An alternative might be to use avatar body language which is under indirect, rather than direct, control of the user. Use of a smiley in text could prompt the avatar to grin momentarily. Users could set parameters to indicate their general state of mind, and the world would respond by animating the avatar continuously in such a way as to suggest the appropriate mood: boredom, apprehension, anxiety, etc. (see [10]). It remains to be seen how effective such systems will be in conveying mood in real interactions, but if we intend to exploit the features of the 3D world to the maximum, this looks like a useful direction to explore.
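A sketch of this indirect-control idea, under assumed names (nothing here reflects a real browser's API): a smiley in the outgoing chat triggers a one-off gesture, while a persistent mood parameter drives the idle animation.

    class ExpressiveAvatar {
        enum Mood { NEUTRAL, BORED, APPREHENSIVE, ANXIOUS }
        private Mood mood = Mood.NEUTRAL;

        void setMood(Mood m) { mood = m; }   // set explicitly by the user

        // Called for every outgoing chat line: smileys prompt a momentary grin.
        void onChat(String line) {
            if (line.contains(":-)") || line.contains(":)")) {
                playGesture("grin");
            }
        }

        // Called once per animation frame: the mood continuously shapes the
        // avatar's idle body language.
        void animateIdle() {
            switch (mood) {
                case BORED:        playGesture("slouch"); break;
                case APPREHENSIVE: playGesture("glance-around"); break;
                case ANXIOUS:      playGesture("fidget"); break;
                default:           break;   // stand naturally
            }
        }

        private void playGesture(String name) {
            System.out.println("gesture: " + name);   // placeholder for the animation
        }
    }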

3.3 Building

For a community to form, users should have some kind of involvement in the world that they occupy. In channels that support an explicit 'world' (3D virtual worlds, MUDs and MOOs, rather than IRC or email) there seems to be an instinctive desire for users to contribute to the world in some way. Building not only gives the user a personal stake in the world, but it can also enrich the world for other users (consider the case mentioned in [9] of a user who introduced fish and fishing poles to the Pueblo educational MOO). Building is also an activity that helps to form identity and encourage interactions. A 'virtual home' can reveal as much about us as can an avatar description (see the description of a resident's room in [7]). Creation of a useful or interesting object gives the creator status ([2]), while exchange of desirable objects or technical information may form the basis of interactions with other users. Building has other roles. Objects that are built serve as a tangible reminder of the history or the conventions of the community, and may encapsulate running jokes. Built objects may also serve as the focus for interactions, and in particular for the kind of ritualised verbal slapstick characteristic of MOOs.

Building in Other Channels. Building is not a feature of email, news or IRC. Some MUDs and MOOs, however, permit users to add new objects to the world, ranging from furniture to simple automata to spaces which other users can enter and explore. Objects are typically part of a full object-oriented system which allows new objects to inherit (or override) behaviours from their parent objects. Some programming skill is required to define interesting behaviours, but the implementation languages are generally simple enough that new objects can be added to the world without too much difficulty.

Building in 3D Virtual Worlds. The nature of 3D virtual worlds means that building is likely to be more complex than in textual worlds. Among the 3D virtual world systems that do allow some kind of building are Cryo's Deuxième Monde and Circle of Fire's AlphaWorld. Deuxième Monde provides each user with a customisable 'home', but the possibilities appear relatively constrained, and the emphasis is on selection and configuration rather than building. In many ways this parallels the issue of avatar choice discussed above. AlphaWorld is rather more interesting; users may create objects based on pre-built components (Renderware RWX objects) and assign them properties (visibility, solidity) and behaviours (reactions to clicks, collisions, etc.).

Improving Building in 3D Virtual Worlds. The power and flexibility of MOOs arises largely from the fact that the system is built around an object-oriented model which encourages code-sharing and re-use (a minimal sketch of such a model follows below). A similar model will probably be necessary in the case of 3D virtual worlds if the world is to be easy to extend and maintain. This may require considerable changes to the underlying architecture of the server, and even to the representation language used for the world (see [1]). Appropriate authoring tools will also be required. It might be desirable to integrate the tools with the browser, so as to allow the user to work directly with objects in the world rather than going through a build-test-edit cycle. Allowing the user to build and add arbitrary objects using the full expressive power of VRML and Java (or a similarly complex implementation language) poses a large number of significant technical problems, particularly where efficiency and stability are concerned. In the present state of technology it would be all too easy for a user to build objects that either consumed huge quantities of client and server resources or caused runtime errors on the client. Nevertheless, it seems reasonable to suggest that this is the direction in which 3D virtual worlds may ultimately move.
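A minimal sketch of such a MOO-style object model transposed to a 3D world (class names are illustrative): new kinds of object inherit, override, or extend the behaviours of their parents.

    class WorldObject {
        boolean visible = true, solid = true;   // AlphaWorld-style properties

        void onClick(String user)   { /* default: do nothing */ }
        void onCollide(String user) { if (solid) System.out.println("blocked"); }
    }

    class Door extends WorldObject {
        boolean open = false;

        @Override
        void onClick(String user) {   // override the inherited reaction
            open = !open;
            solid = !open;            // an open door no longer blocks
        }
    }

    class SqueakyDoor extends Door {
        @Override
        void onClick(String user) {
            super.onClick(user);            // reuse the parent's behaviour...
            System.out.println("creak!");   // ...and add to it
        }
    }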

3.4 Persistence

Allowing building leads necessarily to a consideration of issues of persistence. Clearly, if users expend effort in constructing objects, they will expect the objects to be there the next time they connect. Object-building implies the need for a persistent storage mechanism to record all changes made in the world. There can be other aspects to persistence as well. Social interactions constitute a kind of persistence: the social group acts as a 'memory' for past events, individual identities, shared conventions and so forth. This is not something that can easily be engineered, and is in fact directly related to the growth of community. As a community grows, it becomes persistent. Persistence can also be interpreted in another way. Paradoxically, a persistent world is one that evolves over time. A world which is identical each time we connect to it lacks any real 'existence'; it seems to exist 'outside time', and this frozen quality can detract seriously from the appeal of the world.

Persistence in Other Channels. MUDs and MOOs which support building implement object persistence by regular backups of the database. By copying objects to backing storage, it is possible to ensure that crashes or server reboots will not destroy created objects. MUDs and MOOs also provide a kind of support for social persistence, in the shape of news systems. Issues of the day and contributions from members of the community may be recorded in a readable form that constitutes, in a sense, a certain kind of recent history of the community.

Persistence in 3D Virtual Worlds. For obvious reasons, only those worlds that support making changes to the world have persistent features, and only for a relatively limited set of properties. As far as we are aware, no current 3D world supports the kind of far-reaching and pervasive persistent model seen in MOOs. Explicit support for social persistence (i.e. news systems) is typically provided externally, if at all, in the form of Web pages (the Deuxième Monde, for example, has an extensive Web-based news service).

Improving Persistence in 3D Virtual Worlds. A fully object-oriented world model will require persistence mechanisms similar to those found in MOOs. While the implementation of such database systems is well understood, it remains to be seen how easily it can be adapted to the needs of 3D worlds. Issues of social and contextual persistence are less technically challenging, but may require a new approach to the provision of 3D worlds. In particular, the act of world-building may need to be seen not so much as a one-off engineering activity but as a process of continuous creation. The world builders need to provide their world with an evolving context in order to enhance the user's perception of being in a world which is consistent and enduring. This may involve not so much adding new features to the world as exploiting the visual representation to convey the idea of the passage of time: day and night, seasons, even decay; in short, anything that will help to give the impression of a living and changing world rather than a mere model.
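The MOO-style backup mechanism described above can be sketched as a periodic checkpoint of the world database. Java serialization stands in here for whatever storage a real server would use; the class and method names are assumptions.

    import java.io.*;
    import java.util.HashMap;
    import java.util.Map;

    class PersistentWorld {
        final Map<String, Serializable> objects = new HashMap<String, Serializable>();

        // Write a snapshot to a temporary file first, then swap it in, so an
        // interrupted backup never corrupts the previous one.
        void checkpoint(File backing) throws IOException {
            File tmp = new File(backing.getPath() + ".tmp");
            ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(tmp));
            try { out.writeObject(objects); } finally { out.close(); }
            if (!tmp.renameTo(backing)) throw new IOException("swap failed");
        }

        // After a crash or reboot, reload every user-built object.
        @SuppressWarnings("unchecked")
        void restore(File backing) throws IOException, ClassNotFoundException {
            ObjectInputStream in = new ObjectInputStream(new FileInputStream(backing));
            try {
                objects.clear();
                objects.putAll((Map<String, Serializable>) in.readObject());
            } finally { in.close(); }
        }
    }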

3.5 Shared Purpose and Activity

Our last requirement returns us to our earlier discussion of different community types. It is in some senses a desired rather than a required characteristic, so we shall not analyse it in as much depth as the others. The examples of MOOs and IRC show that communities can emerge and thrive without any shared activity other than that of talking to others. Nevertheless, observation suggests that users are likely to be more tolerant of the limitations of a tool if they have a valid external reason for using it. If we want people to use our 3D virtual worlds instead of the simpler, swifter channel of IRC, we need to look for applications in which the use of a 3D virtual world provides an added value rather than merely an encumbrance. Finding suitable applications is a wide-open research area. One possibility would be to move away from the 'meeting space' model towards the 'role-adopting' model, and use the power of the 3D world to create a compelling context for interactions (Sony Pictures have made an interesting step in this direction by using the Sony Community Place system to create movie tie-in worlds in which users play roles based on the film). Other applications might include virtual improvisational theatre, collaborative work using visual elements, or multimedia 'jam sessions'.

4 Conclusion

Current uses of 3D virtual worlds are typically based on a 'meeting space' model of community. This is not the easiest model in which to exploit the capabilities of 3D worlds. To encourage community growth in this context we must find ways to promote social interactions and involvement in the world. To do so, we will need to draw on both existing mechanisms and advanced 3D-specific technologies to try to make our virtual environments more natural and compelling. We should also keep in mind that successful communities of all kinds tend to evolve into shared interest communities that make use of a variety of different communication channels. When designing browsers for 3D virtual worlds, it may be useful to take this into account and plan for their use as part of a collection of tools rather than as a unique channel for interaction. There is no Golden Rule for community building. Nevertheless, a useful rule of thumb for implementors may be to review each new functionality proposed in terms of the five requirements we have identified in this article - identity, expression, building, persistence and shared interest - and consider how it might contribute to each of them.

References

[1] Curtis A. Beeson. An object-oriented approach to VRML development. In Proceedings of VRML '97, 1997.
[2] Amy Bruckman. Identity workshop: Emergent social and psychological phenomena in text-based virtual reality. Technical report, MIT Media Laboratory, 1992.
[3] Pierre-Emmanuel Chaut, Ali Sadeghin, Agnès Saulnier, and Marie-Luce Viaud. Création et animation de clones. In IMAGINA [8].
[4] Lynn Cherny. The modal complexity of speech events in a social MUD. Electronic Journal of Communication, 5(4), 1995.
[5] Lynn Cherny. 'Objectifying' the body in the discourse of an object-oriented MUD. In C. Stivale, editor, Cyberspaces: Pedagogy and Performance on the Electronic Frontier. 1995.
[6] Pavel Curtis. Mudding: Social phenomena in text-based virtual realities. Technical report, Xerox PARC, 1993.
[7] Julian Dibbell. A rape in cyberspace, or how an evil clown, a Haitian trickster spirit, two wizards, and a cast of dozens turned a database into a society. The Village Voice, 1993.
[8] IMAGINA '97, Monaco, 1997. INA.
[9] Vicki L. O'Day, Daniel G. Bobrow, Billie Hughes, Kimberly Bobrow, Vijay Saraswat, Jo Talazus, Jim Walters, and Cynde Welbes. Community designers. In Proceedings of the Participatory Design Conference, 1996.
[10] Ken Perlin. Improv: A system for scripting interactive actors in virtual worlds. In SIGGRAPH 1996 Conference Proceedings, Reading, MA, USA, 1996. Addison-Wesley.
[11] Bob Rockwell. From chat to civilisation: the evolution of online communities. In IMAGINA [8].
[12] Vijay Saraswat. Design requirements for network spaces. Technical report, AT&T Research, 1996.
[13] Neal Stephenson. Snow Crash. Bantam Spectra, 1993.

A Mixed 2D/3D Interface for Music Spatialization

François Pachet 1, Olivier Delerue 1

1 Sony CSL Paris, 6, rue Amyot, 75005 Paris, France
Email: {pachet, delerue}@csl.sony.fr

Abstract. We propose a system for controlling in real time the localisation of sound sources. The system, called MidiSpace, is a real time spatializer of Midi music. We raise the issue of which interface is best adapted to using MidiSpace. Two interfaces are proposed: a 2D interface for controlling the position of sound sources with a global view of the musical setup, and a 3D/VRML interface for moving the listener's avatar. We report on the design of these interfaces and their respective advantages, and conclude on the need for a mixed interface for spatialization.

Active Listening

We believe that listening environments of the future can be greatly enhanced by integrating relevant models of musical perception into musical listening devices, provided we can develop appropriate software technology to exploit them. This is the basis of the research conducted on "Active Listening" at Sony Computer Science Laboratory, Paris. Active Listening refers to the idea that listeners can be given some degree of control over the music they listen to, offering the possibility of different musical perceptions of a piece of music, as opposed to traditional listening, in which the musical medium is played passively by some neutral device. The objective is both to increase the musical comfort of listeners and, when possible, to provide listeners with smoother paths to new music (music they do not know, or do not like). These control parameters implicitly create control spaces in which musical pieces can be listened to in various ways. Active listening is thus related to the notion of Open Form in composition [8] but differs in two aspects: 1) we seek to create listening environments for existing music repertoires, rather than creating environments for composition or free musical exploration (such as PatchWork [11], OpenMusic [2], or CommonMusic [18]), and 2) we aim at creating environments in which the variations always preserve the original semantics of the music, at least when this semantics can be defined precisely.

The first parameter which comes to mind when thinking about user control of music is the spatialization of sound sources. In this paper we study the implications of giving users the possibility to change dynamically the mixing of sound sources. In the next section, we review previous approaches in computer-controlled sound spatialization, and then propose a basic environment for controlling music spatialization, called MidiSpace. We then describe a simple 2D interface for controlling sound sources, and a VRML interface which gives users a more realistic view of the music heard. We compare the two approaches and argue in favor of a mixed solution integrating both interfaces. The last section describes the overall design and implementation of the resulting system.

Music Spatialization

Music spatialization has long been an intensive subject of study in computer music research. Most of the work so far has concentrated on building software systems that simulate acoustic environments for existing sound signals. These works are based on results in psychoacoustics that make it possible to model the perception of sound sources by the human ear using a limited number of perceptive parameters [4]. These models have led to techniques for recreating the impression of sound localization using a limited number of loudspeakers. These techniques typically exploit differences of amplitude between sound channels, delays between sound channels to account for interaural distances, and sound filtering techniques such as reverberation to recreate impressions of distance and of spatial volume. For instance, the Spatialisateur IRCAM [10] is a virtual acoustic processor that lets one define the sound scene as a set of perceptive factors such as azimuth, elevation and orientation angles of sound sources relative to the listener. This processor can adapt itself to any sound reproduction configuration, such as headphones, pairs of loudspeakers, or collections of loudspeakers. Other commercial systems with similar features have recently been introduced on the market, such as Roland RSS, the Spatializer (Spatializer Audio Labs), which produces a stereo 3D signal from an 8-track input signal controlled by joysticks, or Q-Sound Labs' Q-Sound, which builds an extended stereophonic image using similar techniques. This tendency to propose integrated technology to produce 3D sound is further reflected, for instance, by Microsoft's DirectX API now integrating 3D audio.

These sound spatialization techniques and systems are mostly used for building various virtual reality environments, such as the Cave [5] or CyberStage [6], [8]. Recently, sound spatialization has also been included in limited ways in various 3D environments such as Community Place's implementation of VRML [12], ET++ [1], or proprietary, low-cost infrastructures [3]. Building on these works, we are interested in exploiting spatialization capabilities for building richer listening environments. In this paper, we concentrate on the interface issue, i.e. how to give average listeners the possibility of exploiting sound source spatialization in a natural, easy way. We will first describe our basic spatialization system, MidiSpace, which allows users to control in real time the spatialization of sound sources. Then we describe two interfaces for MidiSpace, and compare their respective advantages.


The Basic MidiSpace System

MidiSpace is a system that gives listeners control over music spatialization. We first outline the characteristics of Midi-based spatialization before describing the system.

Midi-Based Spatialization

MidiSpace is a real time player of Midi files which allows users to control in real time the localization of sound sources through a graphical interface (extensions to audio are not discussed in this paper). MidiSpace takes as input arbitrary Midi files [9]. The basic idea in MidiSpace is to represent sound sources graphically in a virtual space, together with an avatar that represents the listener. Through an appropriate editor, the user may either move the avatar around, or move the instruments themselves. The relative position of the sound sources and the listener's avatar determines the overall mixing of the music, according to the simple geometrical rules illustrated in Fig. 1. The real time mixing of sound sources is realized by sending Midi volume and panoramic messages.


Fig. 1. Volume of sound source i = f(distance(graphical_object_i, listener_avatar)), where f is a function mapping distance to Midi volume (from 0 to 127). Stereo position of sound source i = g(angle(graphical_object_i, listener_avatar)), where the angle is computed relative to the vertical segment crossing the listener's avatar, and g is a function mapping angles to Midi panoramic positions (0 to 127).
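To make the mapping concrete, here is one possible realisation of f and g in Java. The paper does not fix the two functions; the linear shapes and the 400-unit distance cutoff below are assumptions for illustration.

    class MidiMapping {
        static final double MAX_DISTANCE = 400.0;   // assumed distance at which volume reaches 0

        // f: distance between source and avatar -> Midi volume in 0..127
        static int volume(double sx, double sy, double lx, double ly) {
            double d = Math.hypot(sx - lx, sy - ly);
            return (int) Math.max(0, Math.min(127, 127 * (1 - d / MAX_DISTANCE)));
        }

        // g: angle to the vertical segment through the avatar -> Midi pan in
        // 0..127 (64 = centre), assuming the avatar faces up the screen.
        static int pan(double sx, double sy, double lx, double ly) {
            double angle = Math.atan2(sx - lx, ly - sy);   // 0 = straight ahead
            return (int) Math.max(0, Math.min(127, 64 + 64 * angle / (Math.PI / 2)));
        }
    }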

It is important to understand here the role of Midi in this project. On the one hand, there are strong limitations to using Midi for spatialization per se. In particular, using Midi panoramic and volume control change messages for spatializing sounds does not reach the same level of realism as other techniques (delays between channels, digital signal processing techniques, etc.), since we exploit only differences in amplitude between sound channels to create spatialization effects. However, this limitation is not important in our context for two reasons: 1) this Midi-based technique still achieves a reasonable impression of sound spatialization, which is enough to validate our ideas in user interface and control, and 2) more sophisticated techniques for spatialization can be added to MidiSpace, independently of its architecture.


We will now describe the interfaces for MidiSpace: first, a 2D interface, which provides a global view of the musical setting and allows the user to move sound sources around; then a VRML interface, which gives users a more realistic view of the music heard. We compare the two approaches and argue in favor of a mixed solution integrating both interfaces.

The 2D Interface of MidiSpace

In the 2D interface of MidiSpace, each sound source is represented by a graphical object, as is the listener's avatar (see Fig. 2). The user may play a Midi file (with the usual tape recorder-like controls) and move objects around. When an object is moved, the spatializer is called with the new position of the object, and the mixing of the sound sources is recomputed accordingly. Other features allow the user to mute sound sources, or select them as "solo".
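The reaction to a drag can be sketched as follows, reusing the MidiMapping functions above. javax.sound.midi stands in for the Midi layer purely for illustration (MidiSpace itself sits on MidiShare, as described in the implementation section); controllers 7 and 10 are the standard Midi volume and pan control changes.

    import javax.sound.midi.*;

    class Spatializer2D {
        private final Receiver midiOut;
        private double avatarX, avatarY;

        Spatializer2D(Receiver midiOut) { this.midiOut = midiOut; }

        // Called by the GUI whenever a sound-source icon is dropped at (x, y):
        // recompute and send volume (CC 7) and pan (CC 10) for its channel.
        void sourceMoved(int channel, double x, double y) throws InvalidMidiDataException {
            ShortMessage vol = new ShortMessage();
            vol.setMessage(ShortMessage.CONTROL_CHANGE, channel, 7,
                           MidiMapping.volume(x, y, avatarX, avatarY));
            ShortMessage pan = new ShortMessage();
            pan.setMessage(ShortMessage.CONTROL_CHANGE, channel, 10,
                           MidiMapping.pan(x, y, avatarX, avatarY));
            midiOut.send(vol, -1);   // -1: deliver immediately
            midiOut.send(pan, -1);
        }
    }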


Fig. 2. The 2D Interface of MidiSpace. Here, a tune by Bill Evans (Jazz trio) is being performed

In an initial version, we allowed both the sound sources and the avatar to be moved. However, this was confusing for users. Indeed, moving sound sources amounts to changing the intrinsic mixing of the piece. For instance, moving the piano away changes the relationship between the piano sound and the rhythmic part. Moving the avatar simply amounts to changing the mixing in a global way, while respecting the relationships between the sound sources. The effect is quite different, since only in the former case is the structure of the music itself modified. The interface provides a global view of the musical setup, which is very convenient for editing the musical setting. However, there is no impression of immersion in the musical piece: the interface is basically a means for editing the piece, not for exploring it.


The VRML Interface for Navigating in MidiSpace

Second, we have experimented with interfaces for navigating in the musical space. Several works have addressed the issue of navigating in virtual worlds with spatialized sounds. The most spectacular are probably the Cave system [5] or CyberStage [8], in which the user is immersed in a fully-equipped room with surrounding images and sound. Although the resulting systems are usually very realistic, their cost and availability are still prohibitive. Instead, we chose to experiment with affordable, widespread technology. A 3D version of MidiSpace in VRML has been built (see Fig. 3), in which the VRML code is automatically generated from the 2D interface and the Midi parser. The idea is to create a VRML world in which the user may freely navigate using the standard VRML commands, while listening to the musical piece. Each instrument is represented by a VRML object, and the spatialization is computed from the user's current viewpoint. In this interface, the only thing the user can do is move around using standard commands; sound sources cannot be moved.
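The generation step can be sketched as a simple translation from the parser's instrument list to VRML text. The method below is an illustrative assumption; Fig. 5 shows an excerpt of the code the real system produces.

    import java.util.List;

    class VrmlGenerator {
        // One DEF'd Transform per sound source, plus the Script node that will
        // route avatar movements to the spatializer.
        static String generate(List<String> instruments) {
            StringBuilder vrml = new StringBuilder("#VRML V2.0 utf8\n");
            vrml.append("WorldInfo { title \"MidiSpace\" }\n");
            for (String name : instruments) {
                vrml.append("DEF ").append(name.toUpperCase())
                    .append(" Transform { children [ Shape { geometry Box { size 3 3 3 } } ] }\n");
            }
            vrml.append("DEF MY_SCRIPT Script { url \"MusicScript.class\" }\n");
            return vrml.toString();
        }
    }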

Fig. 3. MidiSpace/VRML on the Jazz trio, corresponding to the 2D interface of Fig. 2. On the left, before entering; on the right, inside the club.

Although the result is more exciting and stimulating for users than the 2D interface, it is not yet fully satisfying, because the interface gives too little information on the overall configuration of instruments, which is a crucial parameter for spatialization. When the user gets close to an instrument, she loses the sense of her position in the musical setup (e.g. the jazz club, see Fig. 3). Of course, this is a general problem with VRML interfaces, and is not specific to MidiSpace, but in the context of a listening environment it is particularly important to provide a visualisation which is consistent with the music being played. This consistency is difficult to achieve with a 3D virtual interface on a simple screen.


The Mixed Interface

Based on experiments with users of the two interfaces, we concluded that combining them was the best way to obtain satisfactory user control. The 2D interface is used for editing purposes, i.e. moving sound sources; moving the avatar is not possible in this interface. The VRML interface is used for exploration, in a passive mode, to visualize the musical setting in 3D and move the avatar around; moving objects is not possible. The communication between the two interfaces is realized through the VRML/Java communication scheme, and ensures that when the avatar is moved in the VRML interface, the corresponding graphical object of the 2D interface is moved accordingly. The overall architecture of MidiSpace is illustrated in Fig. 4.

Fig. 4. The architecture of MidiSpace user interaction: music input data (Midi files) is read by the Parser, which produces temporal objects and the generated VRML code; the real time player drives the audio devices (synthesizers and mixing console) and the loudspeakers; avatar movements come from the VRML interface, sound source movements from the 2D interface.

Implementation

The implementation of the MidiSpace spatializer consists of 1) translating Midi information into a set of objects within a temporal framework, 2) scheduling these temporal objects using a real time scheduler, and 3) the interfaces.

The Parser

The Parser's task is to transform the information contained in the Midi file into a unified temporal structure. The temporal framework we use is described in [16]: an object-oriented, interval-based representation of temporal objects. In this framework, each temporal object is represented by a class, which inherits its basic functionalities from a root superclass TemporalObject.

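A sketch of this temporal structure, with assumed field names: a root class carrying a start date and a duration, specialised into notes and melodies, one melody per instrument.

    import java.util.ArrayList;
    import java.util.List;

    abstract class TemporalObject {
        long start, duration;                 // position on the time axis, in ms
        long end() { return start + duration; }
    }

    class Note extends TemporalObject {
        int pitch, velocity, channel;         // plain Midi note parameters
    }

    class Melody extends TemporalObject {
        final List<Note> notes = new ArrayList<Note>();   // the notes of one instrument

        void add(Note n) {
            notes.add(n);
            duration = Math.max(duration, n.end() - start);   // grow to cover the note
        }
    }

    // The whole piece is a collection of melodies, one per sound source.
    class Piece extends TemporalObject {
        final List<Melody> melodies = new ArrayList<Melody>();
    }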

One main issue the Parser must address comes from the way Midi files are organized according to the General Midi specifications. Mixing is realized by sending volume and panoramic Midi messages. These messages are global for a given Midi channel. One must therefore ensure that each instrument appears on a distinct Midi channel. In practice, this is not always the case, since Midi tracks can contain events on different channels. The first task is thus to sort the events and create logical melodies for each instrument. This is realized by analysing program change messages, which assign Midi channels to instruments, thereby segmenting the musical structure. The second task is to create higher level musical structures from the basic musical objects (i.e. notes). The Midi information is organized into notes, grouped in melodies. Each melody contains only the notes for a single instrument. The total piece is represented as a collection of melodies. A dispatch algorithm ensures that, at a given time, only one instrument is playing on a given Midi channel.

Scheduling Temporal Objects

The scheduling of MidiSpace objects uses MidiShare [14], a real time Midi operating system with a Java API. MidiShare provides the basic functionality to schedule Midi events asynchronously, in real time, from Java programs, with 1 millisecond accuracy. More details on the scheduling are given in [7].

MidiSpace Interfaces

The 2D interface uses the standard Java AWT library, and follows a straightforward object-oriented interface design. The VRML interface is generated automatically from the Parser. More precisely, the Parser generates a file which contains basically 1) the information on the global setup, 2) descriptions of the individual sound sources, corresponding to the various sound tracks identified, and 3) routing expressions to a spatializer object, which is defined as a Java script, using the VRML/Java communication scheme [12]. An excerpt of the VRML code is shown in Fig. 5.

WorldInfo {title "Trio jazz"}
# Various global settings
NavigationInfo {speed 2 type [ "WALK" ]}
# The viewpoint from which the spatialization is computed
DEF USERVIEWPOINT Viewpoint {position -13 0 45}
# The definition of the musical setting (here, a Jazz Club)
…
# the Label
Transform {
  translation -2 4.7 21
  children [
    Shape {
      appearance Appearance { material Material { diffuseColor 1 1 1 }}
      geometry Text { string "Jazz Club" }}]}
# The sound sources, as identified by the General Midi Parser
DEF PIANO Transform {
  children [
    Shape {
      appearance Appearance {
        texture ImageTexture { url "piano.jpg" repeatS FALSE repeatT FALSE }
        textureTransform TextureTransform {} }
      geometry Box { size 3 3 3 }}]}
DEF DRUMS Transform { …}
DEF BASS Transform {…}
# The Java script for handling movements and spatialization
DEF MY_SCRIPT Script {
  url "MusicScript.class"
  field SFString midiFileName "http://intweb.csl.sony.fr/~demo/trio.mid"
  field SFNode channel10 USE DRUMS
  field SFNode channel2 USE BASS
  field SFNode channel3 USE PIANO
  eventIn SFVec3f posChanged
  eventIn SFRotation orientation
  eventOut SFRotation keepRight
  eventOut SFVec3f keepPosition}
# The routing of VRML messages to the Java script
ROUTE DETECTOR.position_changed TO MY_SCRIPT.posChanged
ROUTE MY_SCRIPT.keepPosition TO USERVIEWPOINT.set_position
ROUTE DETECTOR.orientation_changed TO MY_SCRIPT.orientation
ROUTE MY_SCRIPT.keepRight TO USERVIEWPOINT.set_orientation

Fig. 5. The generated VRML code for describing the musical setting from a given Midi file

Conclusion, Future Works

The MidiSpace system shows that it is possible to give some degree of freedom to users in sound spatialization, through an intuitive graphical interface. We argue that a single interface is not appropriate for both moving sound sources and moving avatars while also giving users a realistic feeling of immersion. In the case of spatialization, although they appear at first to be similar user actions, moving sound sources and moving the avatar bear significantly different semantics, and we concluded that allowing these two operations in the same interface is confusing. We propose an approach combining a standard 2D interface, appropriate for moving sound sources and editing the setup, and a VRML interface, appropriate for moving avatars and exploring the musical piece. The prototype built so far validates our approach, and more systematic testing with users is in progress.

Future work remains to be done in three main directions. First, we are currently adding a constraint-based mechanism for maintaining consistency in sound source positions [15]. This mechanism allows users to navigate in a restricted space, which will always ensure that certain mixing properties of the music are satisfied. Second, an audio version of MidiSpace is in progress, to 1) enlarge the repertoire of available music material to virtually all recorded music, and 2) improve the quality of the spatialization, using more advanced techniques such as the ones sketched in the introduction of this paper. Finally, an annotation format is currently being designed to represent information on the content of a musical piece, to enrich the interaction. These annotations typically represent information on the structure of the piece, its harmonic characteristics, and other kinds of temporal information [17]. These extensions are in progress, and remain compatible with the user interface design proposed here.


References

1. Ackermann, P.: Developing Object-Oriented Multimedia Software. Dpunkt, Heidelberg, 1996.
2. Assayag, G., Agon, C., Fineberg, J., Hanappe, P.: "An Object Oriented Visual Environment for Musical Composition", Proceedings of the International Computer Music Conference, pp. 364-367, Thessaloniki, 1997.
3. Burgess, D.A.: "Techniques for low-cost spatial audio", ACM Fifth Annual Symposium on User Interface Software and Technology (UIST '92), Monterey, November 1992.
4. Chowning, J.: "The simulation of moving sound sources", JAES, vol. 19, no. 1, pp. 2-6, 1971.
5. Cruz-Neira, C., Leight, J., Papka, M., Barnes, C., Cohen, S.M., Das, S., Engelmann, R., Hudson, R., Roy, T., Siegel, L., Vasilakis, C., DeFanti, T.A., Sandin, D.J.: "Scientists in Wonderland: A Report on Visualization Applications in the CAVE Virtual Reality Environment", Proc. IEEE Symposium on Research Frontiers in VR, pp. 59-66, 1993.
6. Dai, P., Eckel, G., Göbel, M., Hasenbrink, F., Lalioti, V., Lechner, U., Strassner, J., Tramberend, H., Wesche, G.: "Virtual Spaces: VR Projection System Technologies and Applications", Tutorial Notes, Eurographics '97, Budapest, 1997, 75 pages.
7. Delerue, O., Pachet, F.: "MidiSpace, un spatialisateur Midi temps réel" (MidiSpace, a real-time MIDI spatializer), Cinquièmes Journées d'Informatique Musicale, Toulon, 1998.
8. Eckel, G.: "Exploring Musical Space by Means of Virtual Architecture", Proceedings of the 8th International Symposium on Electronic Art, School of the Art Institute of Chicago, 1997.
9. IMA: "MIDI musical instrument digital interface specification 1.0", Los Angeles, International MIDI Association, 1983.
10. Jot, J.-M., Warusfel, O.: "A Real-Time Spatial Sound Processor for Music and Virtual Reality Applications", Proceedings of the International Computer Music Conference, September 1995.
11. Laurson, M., Duthen, J.: "PatchWork, a graphical language in PreForm", Proceedings of the International Computer Music Conference, San Francisco, pp. 172-175, 1989.
12. Lea, R., Matsuda, K., Myashita, K.: Java for 3D and VRML Worlds. New Riders Publishing, 1996.
13. Meyer, L.: Emotion and Meaning in Music. University of Chicago Press, 1956.
14. Orlarey, Y., Lequay, H.: "MidiShare: a real time multi-tasks software module for Midi applications", Proceedings of the ICMC, ICMA, San Francisco, 1989.
15. Pachet, F., Delerue, O.: A Temporal Constraint-Based Music Spatialization System, submitted to ACM Multimedia 98, 1998.
16. Pachet, F., Ramalho, G., Carrive, J.: "Representing temporal musical objects and reasoning in the MusES system", Journal of New Music Research, vol. 25, no. 3, pp. 252-275, 1996.
17. Pachet, F., Delerue, O.: "Annotations for Real Time Music Spatialization", International Workshop on Knowledge Representation for Interactive Multimedia Systems (KRIMS-II), Trento, Italy, 1998.
18. Taube, H.: "Common Music: A Music Composition Language in Common Lisp and CLOS", Computer Music Journal, vol. 15, no. 2, pp. 21-32, 1991.

Organizing Information in 3D

…lu…r …v…ll and …n…l …ö…g…l

…, Vienna, Austria
Tel. +4. 1 …, Fax +4. 1 …
[email protected], [email protected]
http://www.dc.co.at/index.html

Abstract. This paper deals with aspects of organizing information in 3D space. Its contents are derived from our experiences creating a VRML-based environment presenting the work of Austrian scientists. The following sections discuss different approaches to organizing the data. The final section also contains considerations derived from our experiences with a project we are currently working on, known under the working title "N…". The goal of this project is to develop a toolbox that enables users to retrieve information from the World Wide Web and to display the results in 3D representations. A special mapping and re-querying facility enables users to interactively manipulate the tree of results in order to process and … the retrieved information.

1 Introduction

Originally devised as part of an exhibition called "…wissen" (a German pun on …), the environment turned into a project of its own when the exhibition was cancelled and only the virtual part remained. The project is set up as a VRML application. It encompasses some 100 VRML models with sound and accompanying text (spoken or as HTML), connected and organized using Java. The goal of the project was to make the work of Austrian scientists accessible and understandable to our audience. The sheer mass of material, the heterogeneous content and the … of the environment called for organizational structures that would allow the user to retrieve the information and to understand the connections between the topics. On the other hand we were confronted with the problem of organizing the various presentations. We tried to solve both problems through 3D-structures.

2 Organizational Structures in 3D-Space

In this paper we distinguish … approaches to organizing objects in three-dimensional space. These approaches were conditioned on the one hand by the content to be visualized (e.g. geographical or conceptual …), and on the other hand by the necessity of maintaining a coherent visual language throughout all parts of the content and navigation in an … environment.

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 308-314, 1998. © Springer-Verlag Berlin Heidelberg 1998

2.1 Abstract Structures

This approach might be useful when mapping abstract categories (such as …) onto 3D. In this case it was used to illustrate the heterogeneous content … through a simple overall structure which could give a notion of how the topics were connected.

"Smarties". On the entry level the topics are represented by icons, so-called smarties. They all have the same size and are labeled with text according to the content. When no item has been selected, all the smarties share the same color, a neutral grey. The color changes to a bright yellow when a smartie has been selected, and changes again when the user emerges from the topic (analogous to a visited link, the smartie becomes a visited smartie). In this way the user can keep track of his position and his movements. The shape of the smarties was conditioned by the tasks they had to perform: since the overview was important, we had to have an icon that would be very simple and would work in close-ups as well as from a distance. … the orientation of the smarties provided the visual clue to which … was the main plane. The entry level can easily be expanded by modifying or substituting smarties in order to accommodate new topics or changes in content. (A minimal sketch of the smartie states follows Fig. 1.)

Spatial Arrangement. Initially all the topics displayed in the overview are given equal importance; since there is no particular order in which they have to be explored, we focused on the relationship between the topics. Thus all smarties are loosely grouped by category: e.g. all smarties containing worlds that deal with biology form one cluster. Most topics, e.g. the high-energy physics section, deal with a number of … concepts and approaches and had to be split into sub-worlds. Clicking on a yellow, i.e. selected, smartie that contains sub-topics displays a substructure made of completely yellow smarties connected by …, explaining the connection between the sub-topics (also fig. 1). The introduction of expandable substructures also helped to avoid cluttering the entry level with too many objects. The user can then choose the smarties by pointing and clicking; the selected topic comes to the center of the world, prominently placed in the middle of the screen. All the other smarties are now grouped around this spot: the smarties placed nearest contain topics related to it, while those farther away obviously deal with issues that are only remotely connected or not related at all. The user can move around this new entity using the arrows at the edges of the 3D-image (also fig. 1). Spatial sound was used to enhance the orientation.


Fig. 1. The entry level of the environment showing labeled smarties, substructures and arrows for navigation.
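The selection states described above (neutral grey, bright yellow when selected, a distinct "visited" state afterwards) amount to a small state machine per smartie. The following is a minimal Java sketch under assumed names and color values (the Smartie class is ours, not the project's code):

// Minimal sketch of the per-smartie selection states described above;
// the class and the concrete color values are illustrative assumptions.
public class Smartie {
    enum State { NEUTRAL, SELECTED, VISITED }
    private State state = State.NEUTRAL;

    void select() { state = State.SELECTED; }              // user clicks the smartie
    void leave()  { if (state == State.SELECTED) state = State.VISITED; }

    /** RGB color used when rendering the smartie's material. */
    float[] color() {
        switch (state) {
            case SELECTED: return new float[] {1.0f, 1.0f, 0.2f}; // bright yellow
            case VISITED:  return new float[] {1.0f, 1.0f, 1.0f}; // visited (assumed)
            default:       return new float[] {0.6f, 0.6f, 0.6f}; // neutral grey
        }
    }
}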

Conclusion. The setup described in the previous sections worked well for explaining the relations between the topics and effectively conveyed a sense of the spatial arrangement of the objects and their position relative to each other. It turned out to be good for people who just wanted to explore the environment without looking for anything in particular, and it helped people keep track of visited and unexplored sites. On the other hand it turned out to be cumbersome for some people looking for specific topics: organizing the smarties spatially and labeling them is not enough for easy retrieval. This was worsened by the fact that there was no single suitable size for the text of the labels: the text would appear much too large when the object was close and would barely be legible when viewed from a distance. Several approaches were considered: scaling the size of the text according to the viewpoint changes and distance (this option was dropped since it was too …); replacing the entry level structure with a superstructure which simply grouped sets of smarties from the original configuration (this solution was discarded since it would take the user one more step away from the actual contents, the 3D models); and replacing the smarties with icons (this would still require labeling since icons allow for differing interpretation).

2.2 Organizational Structures Resembling the Real World

Many topics are best organized spatially when the user is confronted with a reconstruction of the real thing and not just a description or illustration. This was true for topics that deal with spatial arrangements like e.g. archaeological sites, geographical …, or the solar system, and for topics that involve processes, e.g. experiments in quantum physics or producing the molecular map of …. Organizing the necessary information around such topics through models in 3D-space …

Models as Organizational Structures. In some …, the example discussed here deals with the excavation in Ephesus, a project by the Austrian Archaeological …. The model presents a typical Roman apartment.

Fig. 2. The model of the Roman apartment in Ephesus showing the labels used for navigation and information retrieval.

Presenting … topics … functioned on this level equally well, since most of the information is tied to … buildings or machines …, and the user can … the reconstruction of the buildings ….

Navigation. While it was easy to connect the information to the objects, it was not so easy to communicate to the visitor which objects to use. This problem was not confined to the Ephesus model: … we had to make the hot objects stand out visibly while, on the other hand, avoiding that the markers would interfere with the atmosphere and overall impression of the model. More generally speaking, it was important to find a navigation and retrieval method that could be applied to the various situations and also to the entry level. We experimented with painting clickable objects in a particular color, an approach that soon proved to have many drawbacks: small objects were not distinguishable enough; unfamiliar objects such as parts of machines or devices gave no additional information on their purpose; and usually the coloring also hides the shades of the model. Another approach consists in not marking the objects and relying on the cursor to change from an arrow to the symbol of a hand. This proved … since even with the simplest objects it … . Finally we settled for white labels designating clickable objects and black-on-white labels that serve as explanatory notes only. Labeling helps explain an object and helps the visitor decide which ones he wants to know more about. Labels … in the lower right corner of the VRML frame serve as links to concepts and related topics in other worlds. To explore the house the user clicks on the various labels, making places of interest come closer, so that he moves through the rooms and spaces presented with background information. The sequence of labels also regulates the order in which the model can be explored.

2.3 Merging Both Systems

For some topics it seemed good to combine both systems and use the smarties within the models, since the user was already familiar with them from the entry level; they were placed in the models as extraneous elements for orientation and information retrieval.

The ”Smarties” as Markers. The … might serve as an example where smarties were used to mark certain … locations in a model. Since the … was presented schematically, the smarties could be used as markers … while it was sufficiently clear that they did not correspond to real objects.

Fig. 3. The … showing ”smarties” used as markers.

The ”Smarties” as Carriers of Content. Concepts that cannot easily be visualized might best be understood by using abstract structures for organizing the information. For this … the high-energy physics section … seemed to us …; here the smarties employed as markers … carry the content itself, illustrating stages of development. … the presentation of the events taking place in universities …; again the smarties were arranged on a time-scale. As soon as they are clicked they bring the … into the existing … the various ….

Fig. 4. The … model showing ”smarties” used as carriers of content.

3 Future Directions: Towards a New Information Architecture

3.1 …

The process of mapping existing content onto 3D-space, … which had previously been neglected, … organized … the spatial arrangement of the … connections, and subsequently focus on … spots. … 3D-space gives us … to connect … what was not visible …; the changes allow the user to … the most interesting or prominent ….

3.2 Conclusion

To end, we discuss some considerations important for future development.

Data Driven Structures. The organizational structures we employed were mostly custom-tailored to the task at hand. The mapping of the content onto 3D-structures was not automatic, i.e. we had to manually position labels and endow them with specific functionality. For some tasks, e.g. the situations described in sections 2.2 and 2.3, this approach worked well, and it is hard to think of a substitute method. For abstract structures as described in section 2.1 a more data-driven approach might be preferable, e.g. the

automatization of the content allows for the design of more flexible … by choosing an ontology while … the system; … mapping according to … modules … flexible environments; … the query … a 3D-representation of the underlying documents displayed as objects in … space.

Improved Navigation and Orientation might be achieved through design elements such as introducing landmarks for orientation, conveying geographical values and history, or offering parallel views such as overviews or plans that help the user check his position in space and the overall arrangement of the objects. In this context, different levels of presentation that allow the user to switch from very detailed views to overviews of the environment are an important issue.

Alternative Ways of Representing Information, i.e. switching between different representations of the content according to … needs. On the other hand, the environment might be adapted according to individual users' needs, such as their search preferences; e.g. the user may customize an environment to his needs or view a personalized one.

Allowing for a Wider Range of Interactions, i.e. giving the user the possibility of changing the environment … into a personal environment. The user might also be given the capability to change the appearance of the objects, leave traces in the environment, post messages and communicate with other users.

References

1. Young, P.: Three Dimensional Information Visualisation. Computer Science Technical Report No. 1 (1996). http://www.dur.ac.uk/~dcs3py/pages/work/documents/lit-survey/IV-Survey/
2. Chalmers, M.: Design Perspectives in Visualising Complex Information (199.). http://www.ubs.com/info/ubilab/projects/hci/e_viz.html
3. Young, P.: op. cit. (1996)
4. Chalmers, M.: op. cit. (199.)
5. Benford, S. et al.: Experience of … representations … visualisation (1994). ftp://ftp.comp.lancs.ac.uk/pub/vds/
6. Boyle, J., …, …, …: Development of a Visual … Query … (199.). http://www.biochem.abdn.ac.uk/~john/vlq/vlq.html

Human Centered Virtual Interactive Image World for Image Retrieval

Haruo Kimoto

… Information and Communication Systems Laboratories, … Yokosuka-shi, Kanagawa …, Japan
Phone +81 …, Fax +81 …
[email protected]

Abstract. In this paper the implementation of a virtual world which is used for image retrieval is described. This virtual world is called Moving-VW, and it consists of a large number of images to be retrieved. The … has …00 images of flowers, trees, ships, …, and fine arts. Moving-VW shows images to the human and changes its configuration during the interactions with the human, who wishes to explore and find images in the world. When shown in the world, the center of the world is always a key image which is selected by the human, so this system is a human centered world. The world has … relations such as color relations, and it reconfigures itself as the center of the world changes. The world can show images to the human by animation, and it offers a tool for image indexing. We are making this world to realize a world which responds to our own mind in the process of interaction or of searching for something. … in this paper the process of searching images is described as one of the applications of exploring the virtual world. The most important … here is that the world changes its configuration in accordance with the … interactions with the human, so that he can find what he wants quickly; moreover, he may encounter something which he does not know in advance. … the … characteristics of the virtual world and … the memory of the changes of configuration easily.

Keywords: image retrieval, … kaleidoscope, … visualization, reconfiguration of the world, interaction, … search, … space, … indexing of information.

1 Introduction

… a virtual world which changes its configuration during the interactions with the human who wishes to explore the world and is introduced into it. The reason why we are making this world is to realize a world which responds to our own mind in the process of interaction or of searching ….

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 315-322, 1998. © Springer-Verlag Berlin Heidelberg 1998

… so that we can encounter new images one after another …. In this paper the process of searching images is described as one of the applications of exploring the virtual world. The most important point in this paper is that the world changes its configuration in accordance with the interactions with the human, so that he can find what he wants quickly, or can encounter something he did not know in advance. … In the following sections Moving-VW is described. In section 2, image retrieval as an application of Moving-VW is introduced. In section 3, the human centered world is explained. In section 4, direction in Moving-VW is explained. In section 5, world reconfiguration by human interaction is discussed. In section 6, animation of Moving-VW is shown. In section 7, how … positions in Moving-VW … image indexes … is explained.

2 Image Retrieval as an Application of Moving-VW

Image retrieval is adopted as an application of Moving-VW. … The image retrieval described in this paper is similarity retrieval, in which the user sees retrieved images in similarity order from the search image. Features of images, such as colors, are mapped … so that similar images are placed near each other and the distances … correspond to the similarity of the images …. Keywords are also attached to images for retrieval …. Image retrieval with keywords in a two-dimensional space is shown in [1], and in a three-dimensional space in [6]; the retrieval described here shows the retrieval results to the user in a three-dimensional space.

3 Human Centered World

Moving-VW is a human centered world: the key image selected by the human is always placed at the center of the world, and the retrieved images are located around it in similarity order from the center. … As is shown …, Figure 1 shows the … of retrieval … introduced into Moving-VW.

Fig. 1. Retrieved images located in similarity order from the center of Moving-VW.

4 Direction in the World

When we view the world, what are its directions chosen for? Looking for meaning in directions is … one consideration in designing the world. In Moving-VW, directions have meanings: directions were made dependent on meanings, such as the similarity of color relations …, with a different relation in each direction …. In the present world, directions represent relations such as … relation, concept relation, and so on. Depending on the … of the directions, the retrieval … Figure 2 shows the retrieved images shown in each direction within the world. …

5 World Reconfiguration by Human Interaction

… the user selects the next search image from the retrieved images in the world ….

Fig. 2. Results of image retrieval shown in the directions of the world.

When the user selects a new search image, the world is reconfigured: a new world, with the selected image as its center, is created and shown, now with … directions in the space. Figure 3 shows the selection of an image for the next retrieval. When an image is chosen, … it is used as the new search image, and with the new search image the world is reconfigured again. This process is the reconfiguration of the world by human interaction. As new search images are selected and introduced into the world, new search results are shown …; in this way the user can explore an unknown world of images. The user in the virtual world is … as if he were looking into a kaleidoscope which shows … new worlds.

6 Animated World

In this world, images are introduced to the user by animation, so that the user can see them easily. … Figure 3 shows the animation of images introduced toward the user. … When the user selects an image, … a new window …; the construction of such a window is now in progress.

Fig. 3. Animation of images; the user selects the next search image in the world.

… in the process of search, … a new window is opened to the user.

7 Relative Indexing of Images in the Image World

Indexing of information is required for the retrieval of images, as it is for newspaper articles, papers and books. … One method is what we call relative indexing; other indexing methods are … absolute indexing …. The characteristic of Moving-VW indexing is that it is a relative method of indexing images: … the images were indexed relative to images at the corners of the space …. An image obtains its index keywords from the neighbouring images, which is useful when ….

Fig. 4. Animation of … images.

Fig. 5. … of Moving-VW.

… in proportion to the distances from the neighbouring images, the keywords … can be given to an image (see the sketch below). These keywords are then … for … adaptation. Comparing the proposed relative indexing with … indexing …: a first experiment showed 70% … between relative indexing and … indexing; in a further experiment, where keywords were indexed to images by … persons, 70% … similar results … indexing.
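Read this way, relative indexing can be sketched as a weighted propagation of keywords from indexed neighbours. The following minimal Java sketch assumes hypothetical class and method names (Image, index) and an assumed inverse-distance weighting; it illustrates the idea as reconstructed above, not the paper's implementation:

import java.util.*;

// Sketch of relative indexing as described above: keywords propagate from
// manually indexed neighbour images, weighted by proximity. All names and
// the weighting function are illustrative assumptions.
public class RelativeIndexing {
    static class Image {
        final double x, y;                       // position in the image world
        final Map<String, Double> keywords = new HashMap<>();
        Image(double x, double y) { this.x = x; this.y = y; }
    }

    /** Derive weighted keywords for 'target' from indexed neighbours. */
    static Map<String, Double> index(Image target, List<Image> indexed) {
        Map<String, Double> result = new HashMap<>();
        for (Image n : indexed) {
            double d = Math.hypot(n.x - target.x, n.y - target.y);
            double w = 1.0 / (1.0 + d);          // closer neighbours weigh more
            for (Map.Entry<String, Double> e : n.keywords.entrySet())
                result.merge(e.getKey(), e.getValue() * w, Double::sum);
        }
        return result;
    }

    public static void main(String[] args) {
        Image rose = new Image(0, 0);
        rose.keywords.put("flower", 1.0);
        Image unknown = new Image(1, 0);
        System.out.println(index(unknown, List.of(rose)));
    }
}

Fig. 6. Relative indexing of images in the image world.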

8 Summary and Future Works

Moving-VW was described. Moving-VW is an interactive virtual world whose contents are a collection of images: flowers, trees, ships, … and fine arts. The world has relations and reconfigures itself through interaction; as a user is introduced into the world, a world corresponding to him is shown. The world uses animation in order to let the user look at the images easily, and in order to index images it utilizes the relations between images; this indexing method is relative indexing. Moving-VW is … now. Our experiments showed that relative indexing is as precise as … indexing. As future works, we intend to … the world … new viewpoints and new perspectives of the world, … superimposing … in 3D space for … and … views.


References

1. Lin, X.: A self-organizing semantic map for information retrieval. In Proceedings of the Annual International Conference on Research and Development in Information Retrieval, pp. … (1991).
2. Williamson, C., Shneiderman, B.: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploration system. In Proceedings of the Annual International Conference on Research and Development in Information Retrieval, pp. … (1992).
3. …: Dynamic queries and … visual information seeking …. In Proceedings of the international symposium "…", pp. ….
4. Kimoto, H.: … relational information display … system. In Proceedings of an international workshop sponsored and organized by the … working group "…", Italy, pp. ….
5. Hemmje, M., Kunkel, C., Willett, A.: LyberWorld — a visualization user interface supporting fulltext retrieval. In Proceedings of the Annual International Conference on Research and Development in Information Retrieval, pp. … (1994).
6. Fox, E., …, Mackinlay, J., Card, S.K., …, Robertson, G., …: … the … digital library. Communications of the ACM, vol. …, no. …, pp. ….

Virtual Great Barrier Reef: A Theoretical Approach Towards an Evolving, Interactive VR Environment Using a Distributed DOME and CAVE System

Scot Thrane Refsland1, Takeo Ojika1, Tom DeFanti2, Andy Johnson2, Jason Leigh2, Carl Loeffler3, and Xiaoyuan Tu4

1 Virtual System Laboratory, Gifu University, 1-1 Yanagido, Gifu, 501 Japan
2 Electronic Visualization Laboratory, University of Illinois at Chicago
3 Simation, P.O. Box 1984, Cranberry PA 16066
4 Intel Corporation, 2200 Mission College Blvd., RN6-35, Santa Clara, CA 95052

Abstract. The Australian Great Barrier Reef is a natural wonder of our world and a registered UNESCO World Heritage site that hosted 1.5 million visitor-days in 1994/95. Tourism is currently the main commercial use and is estimated to generate over $1 billion annually [1]. With the coming 2000 Olympics in Australia, increases in tourism will present a substantial conservation and preservation problem for the reef. This paper proposes a solution to this problem: establishing a virtual reality installation that is interactive and evolving, enabling many visitors to discover the reef through high-quality immersive entertainment. The paper considers the technical implications of a system grounded in Complexity: a distributed DOME and CAVE architectural system; a mixed reality environment; artificial life; multi-user interactivity; and hardware interfaces.

1 Introduction

"Well, life emerged in the oceans," he adds, "so there you are at the edge [diving off the continental shelf], alive and appreciating that enormous fluid nursery. And that's why the 'edge of chaos' carries for me a very similar feeling: because I believe life also originated at the edge of chaos. So here we are at the edge, alive and appreciating the fact that physics itself should yield up such a nursery..." - Chris Langton, artificial life pioneer, commenting on Complexity.1

1 Waldrop, M. Mitchell: Complexity, The Emerging Science at the Edge of Order and Chaos, page 231. Simon & Schuster, 1992.

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 323-336, 1998 © Springer-Verlag Berlin Heidelberg 1998


1.1 A Cinematic Narrative Virtual Walkthrough Jack, Jill and Tanaka have entered the Virtual Barrier Reef Exhibition, where "Nintendo" meets classroom. Before going into the environment, they have to choose from several different experiential options: Jack picks the "Marine Biologist," Jill picks the "Dolphin" and Tanaka the "Snorkeler."

Fig. 1. Overview of the Virtual Great Barrier Reef installation consisting of a 15 meter diameter DOME, Interactive WALLs, 3 CAVE systems and a motion simulator.

When they put on their equipment, consisting mainly of 3D shutter glasses, audio communication and other interactive devices, they enter the Dive Vehicle (motion simulator) that takes them on a small "journey" to the reef. The Immersion hatch opens and they exit into the main DOME [Fig. 1], finding themselves immersed in 20 meters (60 feet) of water. The main DOME [2] (15 m diameter x 15 m high, 360° x 180°) enables them to see fish and other species swim about in 3D. They see a mix of autonomous fish, avatars and avatar guides that assist, guide and role-play. They hear soundscapes of the reef. They can also see 3 CAVEs [3]. They were told that the installation was directly linked to the real reef, so that conditions on the reef affect the climatic conditions within the installation. The installation's environment was persistent, evolving and adapting just like the real reef. Jack finds a school of fish that holds his interest, and points to it using the laser wand pointer on top of his Dive Panel. The panel displays information about the fish: life habits, predators, eating and mating habits, etc. Then a big grouper swims to him (a guide avatar) and engages him in a conversation about the reef. A dolphin joins


in and he finds out it's Jill, and they are able to communicate through the group communication channel with their headsets. Tanaka finds an autonomous fish guide and several descriptor icons floating nearby. He selects the Japanese version. He inserts his smart card in the nearby stand, then experiments with the inter-related lives of the species inhabiting that community by moving the various avatars around with a Fish Data Glove. At one point a mythical aboriginal character swims by and heads for one of the CAVEs. At the warning beep on each of their timers, Jill and Tanaka return to the visitors' center. Jack extends his visit by buying more air directly through his smart card and Dive Panel. Once in the visitors' center, Jill and Tanaka insert their smart cards to get a color printout of their activities during their visit.

1.2 The Complex Architecture

The system is a large distributed network based upon proven, stable DOME and CAVE technologies. Similar reference models on which this project is based can be found in the past projects NICE [4] and ROBOTIX: MARS MISSION [5]. The NICE project is a collaborative CAVE-based environment where 6 to 8 year old children, represented as avatars, collaboratively tend a persistent virtual garden. This highly graphical, immersive virtual space has primarily been designed for use in the CAVE, a multi-person, room-sized virtual reality system. As the CAVE supports multiple simultaneous physical users, a number of children can participate in the learning activities at the same time. Among interactive DOME projects, the Carnegie Science Center's ROBOTIX: MARS MISSION was the first example of interactive virtual reality cinema. Each audience member had a three-button mouse device that allowed them to make group decisions on the story and on piloting the spacecraft. The graphics were rendered in real time, called from an extensive database of Mars terrain from NASA. The partial dome sat 35 people, and was 200 degrees wide and 60 degrees high. The system used an SGI ONYX with 4 R10000 processors, an Infinite Reality Engine, an Audio Serial Option board, and display boards. The custom-built polling system fed into a PC which talked to the SGI; majority vote ruled. The dome display used 3 state-of-the-art Electrohome RGB projectors. Most important, only one graphics pipeline was used, split into three views using custom-built software and edge blending. The overlap on edges was 25%, and it appeared seamless, with no distortion. Performance was 30-90+ fps with no lag. Audio was 3D spatialized, with custom-built software and floor shakers. It ran for 9 months without glitches. It was so effective that some people thought they had actually been to Mars.

1.3 Methodology of Application

Whilst the true objective of this installation is to instill a strong sense of conservation and preservation in the future visitors of the Great Barrier Reef, it seeks to do so through a Constructionist [6] and "immersive role-play" educative model steeped heavily in "Nintendo"-style interaction of total consciousness submersion. Thus, our methodology objectives are simple: give the visitor an experience that they


constructed for themselves; immerse them in wonder and inspiration; give them a reason to care; in their minds, really take them there; and finally, give them something material they can use afterwards to prove and remember their experience.

2 Distributed DOME and CAVE Architectural System

2.1 The DOME Zone

The DOME zone is constructed of CG and real-time/pre-captured video, which are projected onto the scrim of the DOME together in layers to assist with the overall processing quality of the CG and simulations. The background is a mix of live or prerecorded scenes of the reef, and the CG fish are projected brightly enough over the top of it to create the mixed illusion. Of course there will have to be certain areas, such as coral and other big blocks of CG, that have to be animated, but these are not that computationally intensive, as they update at a slightly slower rate.

Fig. 2. Distributed relationship of the DOME, CAVEs, WALLs and the Internet.

2.2 Multiplexing and Mixed Topologies

Since the DOME is the central "hub" [Fig. 2] of the environment physically, and holds the most connectivity to all other zones, we will use it as the focus to consider the many facets of multiplexing within the environment and the various other network topologies and protocols that connect to it.


2.3 The WALL Zones (4 WALLs)

The WALLs will be the most computationally expensive of all the zones in the environment, as most of the direct interactivity will happen around them. Walls are interactive in that they might have species that react when visitors walk by them. For example, an alife moray eel might come out to feed, and when a visitor walks up to it quickly, a sensing device gives it spatial reaction information. Spot projectors and smaller independent computer systems display various small interactive displays, while the meta environment is processed separately by the SGI system.

2.4 CAVE Zones (3 CAVEs)

CAVEs are separate zones or environments that are story related. They can host indigenous or mythical stories related to the reef and its inhabitants. Here the visitors can have a more immersive, intimate experience with the denizens of the reef, as the sea creatures swim and interact with the visitors within the 3 meter cube of each CAVE. The CAVEs use the standard network topologies and protocols being developed as part of the CAVERN network [7] to deal with the multiple information flows maintaining the virtual environments. Each of the CAVEs, WALLs, and the DOME making up the environment will be connected via a high-speed network, allowing information such as avatar data, virtual world state / meta data and persistent evolutional data to pass between them. This will allow avatars in the CAVEs to swim over onto the WALLs or the DOME, and allow museum guides in the guise of reef inhabitants to interact with the visitors. This will also allow remotely located CAVE users in the United States, Europe, and Asia to join the collaboration. Similar connectivity has been used in the NICE environment [8] to link multiple CAVE sites in the United States to Europe and Japan.

3 Mixed Reality Environment

The environment of this project uses video, lighting, and sound techniques to convince users that they are indeed immersed in the Great Barrier Reef. Video: video will be used in strategic places, including some parts of the walls and the floor. The main imaging of the floor is video, with spot CG projections to provide a better illusion of immersion and non-repetition. Sound: immersive and 3D in its localization, the sound system provides authentic sounds from autonomous life and "extra spatial" depth sounds.


3.1 Real-Time Connectivity to the Reef

Climatic interference: the virtual model is connected in real time to a section of the real reef so that the virtual model can be adjusted automatically. Temperature, current, air pressure and other climatic conditions on the reef will cause several actions in the virtual environment, such as changes in the temperature of the room, the flow of virtual debris, virtual water currents, etc.

3.2 Complex and Evolving Environment

This environment is considered to be a "living entity", possessing as authentic a simulation of the real reef as possible. For simulating simple fluid flow, we consider techniques similar to the ones used by Wejchert and Haumann [9] [Fig. 3] for animating aerodynamics.

Fig. 3. The flow primitives (reproduced from the paper by Wejchert and Haumann)

Assuming inviscid and irrotational fluid, we can construct a model of a non-turbulent flow field with low computational cost using a set of flow primitives. These include uniform flow, where the fluid velocity follows straight lines; source flow, a point from which fluid moves out in all directions; sink flow, which is the opposite of source flow; and vortex flow, where fluid moves around in concentric circles. This process is also directly linked with the real reef current flow, which constantly supplies real-time information. Integrating the two processes lowers computation requirements and increases the authenticity of the virtual environment; a sketch of the primitive superposition follows below. Much of the seaweed, plankton, coral and other movable plant/marine life is simulated and responds dynamically to water currents and the movements of other life in a realistic manner. In this case it is proposed to use a system which builds the plants as a mass-spring chain assembly. In order to obtain computational efficiency, we do not calculate the forces acting on the geometric surface of each plant leaf; rather, we approximate the hydrodynamic force at each mass point.
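As an illustration of the two ideas above, the following minimal Java sketch, with assumed names and constants (ReefFlowSketch, flowAt, step, and the drag coefficient), superposes a uniform current, a source and a vortex, and drags a single mass point toward the local fluid velocity. It is a sketch of the technique, not the project's code:

// Sketch of a flow field composed of the primitives named above (uniform,
// source, vortex) and of the drag force applied at a plant's mass points;
// all constants and names are illustrative assumptions.
public class ReefFlowSketch {
    /** Velocity of the combined flow field at point (x, z). */
    static double[] flowAt(double x, double z) {
        double vx = 0.2, vz = 0.0;                    // uniform current
        // Source at (5,0): fluid moves radially outward, decaying with distance.
        double dx = x - 5, dz = z, r2 = dx * dx + dz * dz + 1e-6;
        vx += 0.5 * dx / r2;  vz += 0.5 * dz / r2;
        // Vortex at (-5,0): fluid circles the centre (perpendicular to radius).
        dx = x + 5; dz = z; r2 = dx * dx + dz * dz + 1e-6;
        vx += -0.8 * dz / r2; vz += 0.8 * dx / r2;
        return new double[] { vx, vz };
    }

    /** One Euler step for a seaweed mass point dragged by the flow. */
    static void step(double[] pos, double[] vel, double dt) {
        double[] u = flowAt(pos[0], pos[1]);
        double k = 2.0;                               // drag coefficient (assumed)
        vel[0] += k * (u[0] - vel[0]) * dt;           // drag toward fluid velocity
        vel[1] += k * (u[1] - vel[1]) * dt;
        pos[0] += vel[0] * dt;  pos[1] += vel[1] * dt;
    }

    public static void main(String[] args) {
        double[] p = {0, 1}, v = {0, 0};
        for (int i = 0; i < 100; i++) step(p, v, 0.05);
        System.out.printf("x=%.2f z=%.2f%n", p[0], p[1]);
    }
}

In a full mass-spring chain, the same drag term would be applied at every mass point alongside the spring forces linking neighbouring points.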

4 Artificial Life

The artificial life treatment in this project is an exciting and innovative one. As mentioned earlier, one of the educative goals in this project is to enable people to


"think like a fish" so they can experience first hand through the "eyes of a fish" what it's like to live on the reef. To achieve this, we have developed two types of models for Alife. The first model is a totally autonomous animal that behaves according to the events in the environment. It has a life span, is integrated into the ecological system, and possesses authentic behaviors of its real life counterpart. The second model is a similar model, only that the "brain and motor skills" are detached and placed into the control of a visitor using a dataglove to direct the fish. The innovation here is that even though a visitor is controlling the "will and motor" of the artificial fish, it still has behavior and characteristic traits that will not allow you to act outside of the character set for that fish. 4.1 Artificial Animals for Life-Like Interactive Virtual Reef Creatures What's the key to bringing you a captivating experience as a tourist of the virtual Great Barrier Reef? Whether you are merely observing the schools of sardines, playful dolphins and vicious sharks, or you are living the aquatic life by playing the role of one of the virtual reef dwellers (want to be a dolphin, parrot fish or maybe a reef shark? Sure you can), realism is the password to the ultimate immersive experience. Here we are not just talking about visual authenticity in the appearance of the digital sea creatures, but also in their physical environment, and most importantly, in the way they move, perceive and behave. In an interactive environment like ours, behavioral realism, especially, autonomy of the virtual creatures is indispensable. How are we to achieve such integrated realism? Our answer is to build Artificial Life (or Alife) models of the aquatic animals. The properties and internal control mechanisms of these artificial animals should be qualitatively similar to those of their natural counterparts. Specifically, we construct animal-like autonomy into traditional graphical models by integrating control structures for locomotion, perception and action. As a result, the artificial fishes and mammals have 'eyes' and other sensors to actively perceive their dynamic environments; they have 'brains' to interpret their perception and govern their actions. Just like real animals, they autonomously make decisions about what to do and how to do in everyday aquatic life, be it dramatic or mundane. This approach has been successfully demonstrated by the lifelike virtual undersea world developed in [10, 11]. We tackle the complexity of the artificial life model by decomposing it into three sub-models:  A graphical display model that uses geometry and texture to capture the form and appearance of any specific species of sea animal.  A bio-mechanical model that captures the physical and anatomical structure of the animal’s body, including its muscle actuators, and simulates its deformation and dynamics.  A brain model that is responsible for motor, perception and behavior control of the animal. As an example of such an artificial life model, [Fig.4] shows a functional overview of the artificial fish that was implemented in [10, 11]. As the figure illustrates, the body of the fish harbors its brain. The brain itself consists of three control centers: the motor center, the perception center, and the behavior center. These centers are part of


the motor, perception, and behavior control systems of the artificial fish. The function of each of these systems will be previewed next.

Fig. 4. Diagram of an autonomous and avatar model fish
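One way to picture the two model types of Fig. 4 is as two controllers behind a common interface: the autonomous brain chooses commands itself, while the avatar brain forwards the visitor's glove input clamped to the species' character limits. The Java sketch below uses hypothetical names throughout and is not the authors' implementation:

// Sketch of the two Alife control models described above: an autonomous
// brain versus a visitor-driven avatar that is still filtered through the
// species' behavioural limits. All names are illustrative assumptions.
interface FishController {
    /** Returns the commanded swim speed for this time step. */
    double desiredSpeed();
}

class AutonomousBrain implements FishController {
    public double desiredSpeed() {
        return 0.5;                        // chosen by the intention generator (stub)
    }
}

class AvatarBrain implements FishController {
    private final double gloveInput;       // raw command from the data glove
    private final double maxSpeed;         // species character limit
    AvatarBrain(double gloveInput, double maxSpeed) {
        this.gloveInput = gloveInput; this.maxSpeed = maxSpeed;
    }
    public double desiredSpeed() {
        // The visitor drives the fish, but cannot exceed its character set.
        return Math.max(0, Math.min(gloveInput, maxSpeed));
    }
}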

4.2 Motor System

The motor system comprises the dynamic model of the sea creature, the actuators, and a set of motor controllers (MCs) which constitute the motor control center in the artificial animal's brain. Since our goal is to animate an animal realistically and at reasonable computational cost, we seek to design a mechanical model that represents a good compromise between anatomical consistency, hence realism, and computational efficiency. The dynamic fish model [12, 13] represents a good example. It is important to realize that adequate model fidelity allows us to build motor controllers by gleaning information from the bio-mechanical literature on the animal [14, 15]. Motor controllers are parameterized procedures, each of which is dedicated to carrying out a specific motor function, such as "swim forward", "turn left" or "ascend". They translate natural control parameters such as the forward speed, angle of the turn or angle of ascent into detailed muscle or fin/leg actions. Abstracting locomotion control into parameterized procedures enables behavior control to operate on the level of motor skills, rather than that of tedious individual muscle/limb movements. The repertoire of motor skills forms the foundation of the artificial animal's functionality.
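A parameterized motor controller in the sense just described might look like the following minimal Java sketch, where a high-level call such as swimForward(speed) is turned into periodic left/right muscle amplitudes. The gait formulas and names are illustrative assumptions, not the dynamic fish model of [12, 13]:

// Sketch of parameterized motor controllers: high-level parameters (speed,
// turn angle) are translated into muscle contraction amplitudes. The
// concrete mapping below is an illustrative assumption.
public class MotorControllerSketch {
    /** Contraction amplitudes for the left/right tail muscle pair at time t. */
    static double[] swimForward(double speed, double t) {
        double freq = 1.0 + 2.0 * speed;            // faster swimming, faster beat
        double amp = 0.3 + 0.5 * speed;             // and a larger tail sweep
        double phase = Math.sin(2 * Math.PI * freq * t);
        return new double[] { amp * phase, -amp * phase };
    }

    /** Turning reuses the same gait with asymmetric amplitudes. */
    static double[] turn(double angle, double t) {
        double[] m = swimForward(0.3, t);
        m[0] *= (1 + angle); m[1] *= (1 - angle);   // bias one side of the tail
        return m;
    }

    public static void main(String[] args) {
        double[] m = swimForward(0.8, 0.25);
        System.out.printf("left=%.2f right=%.2f%n", m[0], m[1]);
    }
}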


Given that our application is in interactive VR systems, locomotion control and simulation must be executed at interactive speed. Physics-based dynamic models of animals offer physical realism but require numerical integration. This can be expensive when the models are complex. There are various ways to speed things up. One way is to have multiple models of increasing complexity for each species of animal. For example, the simplest model will just be a particle with mass, velocity and acceleration (which still exhibits basic physical properties and hence can automatically react to external forces, such as water current, in a realistic manner). The speed-up can then be achieved by way of 'motion culling', where, when the animal is not visible or is far away, only the simplest model is used, and the full mechanical model is used only when the animal is nearby. This method, however, is view-dependent and can be tricky when the number of users is large. Another way is to use bio-mechanical models whenever possible (when they are simple enough to run the simulation in real time) and, for more complex animals, to build pre-processed, parameterized motion libraries (which can be played back in real time), like those used by most of today's games. These (canned) motion libraries can be built from off-line simulations of elaborate bio-mechanical models of the animal or from motion-captured data. One thing to keep in mind is that, no matter what the underlying locomotion model is, its interface to the behavior system should be kept comparable, as parameterized procedures of motor skills.

4.3 Perception System

Perception modeling is concerned with:
1. Simulating the physical and optical abilities and limitations of the animal's perception.
2. Interpreting sensory data by simulating the results of perceptual information processing within the brain of the animal.
When modeling perception for the purposes of interactive entertainment, our first task is to model the perceptual capabilities of the animal. Many animals employ eyes as their primary sense organ, and perceptual information is extracted from retinal images. In a VR system, such "retinal" images correspond to the 2D projection of the 3D virtual world rendered from the point of view of the artificial animal's "eyes". However, many animals do not rely on vision as their primary perceptual mode, in which case vision models alone may not be able to appropriately capture the animal's perceptual abilities. It is equally important to model the limitations of natural perception. Animal sensory organs cannot provide unlimited information about their habitats. Most animals cannot detect objects that are beyond a certain distance away, and they usually can detect moving objects much better than static objects [14]. If these properties are not adequately modeled, unrealistic behaviors may result. Moreover, at any moment in time, an animal receives a relatively large amount of sensory information to which its brain cannot attend all at once. Hence there must be some mechanism for deciding what particular information to attend to at any particular time. This process is often referred to as attention. The focus of attention is determined based upon the animal's behavioral needs and is a crucial part of perception that directly connects perception to behavior; a sketch of such a filter follows below.
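The sensing limits and attention filter described above can be made concrete with a small sketch: the virtual sensor drops entities beyond an assumed sensing range, sees moving objects farther than static ones, and keeps only entities relevant to the current task. All names and thresholds below are illustrative assumptions:

import java.util.*;

// Sketch of the perceptual limits and attention filter described above;
// ranges, task names and the record type are illustrative assumptions.
public class PerceptionFilterSketch {
    record Entity(String kind, double distance, boolean moving) {}

    static List<Entity> perceive(List<Entity> world, String task) {
        List<Entity> attended = new ArrayList<>();
        for (Entity e : world) {
            double range = e.moving() ? 30.0 : 15.0;  // moving objects seen farther
            if (e.distance() > range) continue;        // beyond sensing range
            if (task.equals("forage") && !e.kind().equals("food")) continue;
            attended.add(e);                           // passes the attention filter
        }
        return attended;
    }

    public static void main(String[] args) {
        List<Entity> world = List.of(
            new Entity("food", 10, false),
            new Entity("rock", 12, false),
            new Entity("shark", 25, true));
        System.out.println(perceive(world, "forage"));
    }
}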


Unfortunately, it is not at all well understood how to model animal sensory organs, let alone the information processing in the brain that mediates an animal's perception of its world. Fortunately, for our purposes, an artificial animal in its virtual world can readily glean whatever sensory information is necessary to support life-like behavior by directly interrogating the world model and/or exploiting the graphics-rendering pipeline. In this way, our perception model synthesizes the results of perception in as simple, direct and efficient a manner as possible. The perception system relies on a set of on-board virtual sensors to provide sensory information about the dynamic environment, including eyes that can produce time-varying retinal images of the environment. The brain's perception control center includes a perceptual attention mechanism which allows the artificial animal to train its sensors at the world in a task-specific way, hence filtering out sensory information superfluous to its current behavioral needs. For example, the artificial animal attends to sensory information about nearby food sources when foraging.

4.4 Behavior System

The behavior system of the artificial animal mediates between its perception system and its motor system. An intention generator, the animal's cognitive faculty, harnesses the dynamics of the perception-action cycle and controls action selection in the artificial animal. The animator establishes the innate character of the animal through a set of habit parameters that determine, for example, whether or not it likes darkness or whether it is a male or female. Unlike the static habits of the animal, its mental state is dynamic and is modeled in the behavior system by several mental state variables. Each mental state variable represents a distinct desire, for example the desire to drink or the desire to eat. In order to model an artificial animal's mental state, it is important to make certain that the modeled desires resemble the three fundamental properties of natural desires: (a) they should be time varying; (b) they should depend on either internal urge or external stimuli or both; (c) they should be satisfiable. The intention generator combines the habits and mental state with the incoming stream of sensory information to generate dynamic goals for the animal, such as to chase and feed on prey. It ensures that goals have some persistence by exploiting a single-item memory. The intention generator also controls the perceptual attention mechanism to filter out sensory information unnecessary to accomplishing the goal in hand. For example, if the intention is to eat food, then the artificial animal attends to sensory information related to nearby food sources. Moreover, at any given moment in time, there is only one intention, or one active behavior, in the artificial animal's behavior system. This hypothesis is commonly made by ethologists when analyzing the behavior of fishes, birds and four-legged animals of or below intermediate complexity (e.g. dogs, cats) [15, 16]. At every simulation time step, the intention generator activates behavior routines that input the filtered sensory information and compute the appropriate motor control parameters to carry the animal one step closer to fulfilling the current intention. Primitive behavior routines, such as obstacle avoidance, and more sophisticated motivational behavior routines, such as mating, are the building blocks of the behavioral repertoire of the artificial animal.
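A minimal Java sketch of such an intention generator, under assumed names and update rules (desires combining internal urge and external stimulus, with the single strongest desire active), is given below; it illustrates the mechanism, not the implementation of [10, 11]:

import java.util.*;

// Sketch of an intention generator: time-varying desires (mental state
// variables) are updated from urges and stimuli, and the strongest desire
// becomes the single active behavior. Names and rules are assumptions.
public class IntentionGeneratorSketch {
    private final Map<String, Double> desire = new HashMap<>(
        Map.of("eat", 0.2, "flee", 0.0, "mate", 0.1));

    /** Desires vary over time with internal urges and external stimuli. */
    void update(double hunger, double predatorProximity) {
        desire.merge("eat", 0.1 * hunger, Double::sum);   // internal urge
        desire.put("flee", predatorProximity);            // external stimulus
    }

    /** Only one intention is active at any moment (ethology hypothesis). */
    String currentIntention() {
        return Collections.max(desire.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    void satisfy(String d) { desire.put(d, 0.0); }        // desires are satisfiable

    public static void main(String[] args) {
        IntentionGeneratorSketch g = new IntentionGeneratorSketch();
        g.update(0.8, 0.0);
        System.out.println(g.currentIntention());         // "eat"
        g.update(0.0, 0.9);
        System.out.println(g.currentIntention());         // "flee"
    }
}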

5 Interfaces

Because it is our past experience that all technology must be bulletproof and withstand the rigorous demands of an unrelenting public, this equipment is designed along the lines of actual dive equipment, which is very durable and rugged, big and easy to use.

5.1 Tracking/Sensing

A number of tracking options are available; this environment will consider using a mix, from magnetic tracking [17] to unencumbered technology such as camera-based systems [18]. Sensing the bulk of visitors is done with infrared and video. Specific occupant tracking is done either with localized gloves cabled to the specific site location, or through wireless. Shutter glasses should be independent and able to sense any WALL or CAVE. Artificial life uses sensing to establish the location of visitors within the environment and can respond to them as a normal fish would.

5.2 Weight Belt with Dive Panel

This equipment enables visitors to interact with the environment and gain statistical information about the life species in the installation. The weight belt holds the battery pack and wireless communications, tracking and wand/pointer hardware, and a smart/memory card slot for recording individual data and experiences. Attached to it is a "Dive Panel", comprising a pointing wand and a small LCD monitor to display information about the species under investigation and other climatic information. Through this, the visitor is able to access the knowledge base stored on the memory card and other data being generated in real time.

5.3 Enhanced Shutterglasses

Shutter glasses are connected to a timer so that when visitors use up all their "air", the glasses shut down and become inoperable. This makes the environment very difficult to see and forces the visitor to exit the installation. Some of the more sophisticated models for avatars and "Marine Biologists" have audio communications built in.

5.4 Taking Home a Bit of the Virtual Reef

Something else that we think is really important from NICE is giving people some kind of artifact from the experience. Since VR is such a rare experience, and you really can't take anything physical away from the virtual space, an artifact helps people when they try to describe what they did to other people, and it enhances the experience and memory of the event. Each participant in the virtual reef environment gets a smart/memory card to record a "narrative" of everything they did, characters they talked to, and actions they performed.


The narrative structure captures these interactions in the form of simple sentences such as: "Mary (disguised as a dolphin) stops by to see Murray the Moray eel to find out what he eats. Mary and Murray have dinner together." The story sequence goes through a simple parser, which replaces some of the words with their iconic representations and stores the transcript onto the memory card. This gives the story a "picturebook" style that the visitor can print out and keep to remember the various aquatic life they met that day.

5.5 Smart/Memory Cards

We are currently investigating the use of smart/memory cards to perform functions like: tracking and recording the user's actions, supplying a knowledge database, and providing financial calculations for visitation fees.
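The word-to-icon substitution described in Sect. 5.4 could be as simple as a dictionary lookup over the transcript; the following is a minimal Java sketch with an assumed icon dictionary, not the installation's actual parser:

import java.util.*;

// Sketch of the "picturebook" parser described in Sect. 5.4: selected words
// in the recorded narrative are replaced by icon tokens before the story is
// written to the memory card. The dictionary is an illustrative assumption.
public class NarrativeIconParser {
    private static final Map<String, String> ICONS = Map.of(
        "dolphin", "[dolphin-icon]",
        "eel", "[eel-icon]",
        "dinner", "[food-icon]");

    static String toPicturebook(String sentence) {
        StringBuilder out = new StringBuilder();
        for (String word : sentence.split("\\s+")) {
            // Normalize punctuation/case before the dictionary lookup.
            String key = word.toLowerCase().replaceAll("[^a-z]", "");
            out.append(ICONS.getOrDefault(key, word)).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(toPicturebook(
            "Mary the dolphin has dinner with Murray the eel."));
    }
}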

6 Interactivity

6.1 Visitor Interactivity

Avatar: there are many species a visitor can explore, from fish to other marine animals. Avatars can be as simple as the ones found at the localized exhibits for tourists to try out, or as sophisticated as dolphins and sharks. Avatars ultimately come under the domain of the simulated eco-cycle, so anything can happen: the visitor's avatar could be eaten at any time, and will eventually go through the entire life cycle of the reef. Snorkeler: this level is simple roaming and exploring of the environment with minimal hardware, such as shutter glasses to immerse the visitor in 3D. The smart card enables them to utilize the local interactive stands available throughout the environment. Marine Biologist: this level of interaction is highly independent and is able to program its own interactivity levels for simulation, planning, studying, etc. The equipment this visitor has is the most advanced version of the standard models.

6.2 Artificial Life Interactivity

The artificial life in the environment is just as interactive towards visitors as visitors are to it. The creatures can and will interact with visitors with many different behaviors, from friendly and territorial to vicious and funny. Fast movement will scatter fish. In this way, true interactivity is two-way, creating a complex environment.


6.3 Staff Interactivity

Guides: these are sophisticated avatars that can interact with anyone in the environment. These people can be in the environment or hidden from view. They have audio and visual communication devices to drive avatars such as mermaids, large fish, human animations or indigenous characters. This level of interactivity can be used for guided tours, assistants, observers and other special requirements. At this level, the guide is able to leave discovery objects which visitors can touch to find out more information.

7 Conclusion and Future Works

This system has been designed using existing and known solutions in virtual reality, networking, evolutionary theory and artificial life; the innovation lies in the organization of each application into complexity. It has been designed with the forethought of being able to use the physical architecture as a generic shell, enabling stories other than the reef to be told and the application to be used in other systems, including retrofitted planetariums.

Acknowledgements

The Gifu Research and Development Foundation and The International Society on Virtual Systems and MultiMedia, Gifu, Japan. The virtual reality research, collaborations, and outreach programs at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago are made possible through major funding from the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the US Department of Energy; specifically NSF awards CDA-9303433, CDA-9512272, NCR-9712283, CDA-9720351, and the NSF ASC Partnerships for Advanced Computational Infrastructure (PACI) program. EVL also wishes to acknowledge Silicon Graphics, Inc. and Advanced Network and Services for their support. The CAVE and ImmersaDesk are trademarks of the Board of Trustees of the University of Illinois.

References

1. McPhail, I.: The Changing Environment of Managing Use in the Great Barrier Reef World Heritage Area. Presented at the ABARE Outlook '96 Conference, Melbourne, February 1996. Great Barrier Reef Marine Authority.
2. Goto Virtuarium DOME system.
3. Cruz-Neira, C., Sandin, D.J., and DeFanti, T.A.: "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE." In Proceedings of the SIGGRAPH '93 Computer Graphics Conference, ACM SIGGRAPH, August 1993, pp. 135-142.
4. Leigh, J., DeFanti, T., Johnson, A., Brown, M., Sandin, D.: "Global Tele-Immersion: Better than Being There." In Proceedings of the 7th International Conference on Artificial Reality and Tele-Existence, Tokyo, Japan, Dec 3-5, 1997, pp. 10-17.
5. Carnegie Science Center, "ROBOTIX: MARS MISSION".
6. Piaget, J.: To Understand is to Invent: The Future of Education. Grossman, New York, 1973.
7. Leigh, J., Johnson, A., DeFanti, T.: "Issues in the Design of a Flexible Distributed Architecture for Supporting Persistence and Interoperability in Collaborative Virtual Environments." In Proceedings of Supercomputing '97, San Jose, California, Nov 15-21, 1997.
8. Johnson, A., Leigh, J., Costigan, C.: "Multiway Tele-Immersion at Supercomputing '97." To appear in IEEE Computer Graphics and Applications, July 1998.
9. Wejchert, J. and Haumann, D.: Animation aerodynamics. ACM Computer Graphics, SIGGRAPH '91, 25(4), pp. 19-22, 1991.
10. Tu, X.: Ph.D. Thesis, Department of Computer Science, University of Toronto, 1996.
11. Tu, X. and Terzopoulos, D.: Artificial fishes: Physics, locomotion, perception, behavior. In ACM Computer Graphics, Annual Conference Series, Proceedings of SIGGRAPH '94, pp. 43-50, Orlando, FL, 1994.
12. Blake, R.: Fish Locomotion. Cambridge University Press, Cambridge, England, 1983.
13. Alexander, R.: Exploring Bio-mechanics. Scientific American Library, New York, 1992.
14. Tansley, K.: Vision in Vertebrates. Chapman and Hall, London, England, 1965.
15. Tinbergen, N.: The Study of Instinct. Clarendon Press, Oxford, England, 1951.
16. Manning, A.: An Introduction to Animal Behavior. Addison-Wesley, Massachusetts, 3rd edition, 1979.
17. Semwal, S., Hightower, R., Stansfield, S.: Closed Form and Geometric Algorithms for Real-Time Control of an Avatar. Proceedings of IEEE VRAIS '96, pp. 177-184, 1996.
18. Krueger, M.: Artificial Reality II. Addison-Wesley, Reading, MA, pp. 1-277, 1991.

The Development of an Intelligent Haulage Truck Simulator for Improving the Safety of Operation in Surface Mines

Mr. Matthew Williams, Dr. Damian Schofield, and Prof. Bryan Denby

AIMS Research Unit, School of Chemical, Environmental and Mining Engineering, University of Nottingham, UK
[email protected]

Abstract. Surface mines are in operation world-wide, and the vast majority employ large haul trucks for the transfer of material both to the outside world and around the site. The sheer size of these trucks and the operating conditions mean there is a high level of risk. Allied to this, the commercial nature of the operation means that down time is extremely costly and driver training expensive. The AIMS Research Unit has developed a PC-based system to improve driver training, which is currently being developed into a commercial application. Scenarios are created by importing site-specific data from industrial CAD systems; road systems are then added through an editor to create good replicas of the environment facing drivers on a day-to-day basis. The world is further enhanced by allowing the user to specify a number of intelligent objects, including haulage trucks, excavators with load points and various static objects. Once scenarios have been created, training is carried out in a full-screen real-time simulation which allows trainees to drive, or be driven by the computer, through the world. At any given point the trainee is able to stop the simulation, identify potential hazards and their associated risk, and take possible corrective action. Keywords: Driving Simulators, Safety Training, Virtual Reality, Hazard Awareness, Surface Mining.

AIMS Unit

The AIMS Research Unit at the University of Nottingham has been involved in the fields of computer graphics and virtual reality for a number of years and has identified a number of potential applications for the engineering industry. In particular, it now appears that the technologies have developed sufficiently, and costs have been reduced to levels, that even the smallest companies can seriously consider them.

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 337-344, 1998 © Springer-Verlag Berlin Heidelberg 1998


The AIMS Unit uses a range of commercial packages and programming languages to create new computer-based applications. Development takes place on both PC and Silicon Graphics platforms. However, the unit has found that systems developed using PC technology are more easily ported to the industry due to reduced costs, the existing user base and the familiarity of PC technology [1].

Background

There are many surface mines in operation throughout the world, all of which require the safe transfer of material around the site. In many cases this is achieved by haulage trucks; these large vehicles, often the size of a small house, have many associated safety issues. Their huge size, together with the difficult environmental conditions, introduces handling and visibility problems. There is a high level of risk associated with each truck, as the consequences of an accident can be extremely severe. Strict operational rules are enforced to minimise the potential for disaster; whilst these have been successful in reducing accidents, by 1995 industrial trucks were still the second leading cause of fatalities in the private sector, behind only highway vehicle fatalities [2]. Indeed, OSHA found that on average 107 fatalities and 38,330 injuries occurred annually in the workplace; furthermore, it found present training standards to be ineffective in reducing the number of accidents involving powered industrial trucks. At the moment only trained operators can drive these trucks, and periodic evaluation of each truck driver's performance is also required. Competency will depend on the ability of the vehicle operator to acquire, retain, and use the knowledge, skills and abilities that are necessary. Financially, haulage trucks are the largest cost item in the operating costs of a surface mine, in some cases accounting for an estimated 50% of operating costs [3]. The large initial cost of these trucks also makes driver training expensive. Attitudes towards industrial safety have advanced significantly in recent years, and the mining industry has not been an exception. The introduction of new legislation in the UK [4] has changed the emphasis of industrial law from prescriptive legislation to new management systems. Large mineral organisations are now continually looking for ways to improve their safety performance, and techniques which may help to reduce accident statistics need to be investigated. The capacity to remember safety information from a three-dimensional world is far greater than the ability to translate information from a printed page into a 'real' three-dimensional environment [5].

AIMS Unit Truck Simulator System Details
The system has been designed to run on a PC platform; prospective users identified this as particularly important due to the wide availability and low cost of the platform. The training system operates as a full-screen application and provides both visual and 3D audio output. Trainees can interact with the system in a number of ways, including through a custom-built steering wheel and pedals. The system is split into two modules: the first is an editor allowing the trainer to configure and save a number of different scenarios; the second runs as a full-screen simulation during training sessions. Both parts are programmed in Visual C++, with DirectX as the graphics and sound API, which provides access to hardware acceleration across a wide range of systems. System performance varies according to the size of the environment; however, advances in graphics accelerators mean that high frame rates have been maintained even with large pit meshes. The system has a high level of graphical detail, which is important during training, as it is often the ability to spot visual details that ultimately leads to safe operation.

World Modelling
As mining progresses from phase to phase, haul road networks and truck routes must adapt as the landform changes. To create more relevant training scenarios, it was important that the trainer be able to accurately recreate the workplace and its conditions. This ranges from road layouts, truck movements, and loading points through to lighting and fogging to simulate different weather conditions. The application we have developed reduces the cost of creating new worlds by using readily available pit data. Surface mine operators maintain 3D models describing the development of the terrain, and possibly the haul road layout, for each distinct operating phase. These are maintained in industry-standard modelling packages such as Vulcan or Surpac. Mesh data held in these packages can be exported directly into the system and used as the base mesh for the virtual world.

Fig. 1. 2D pit during configuration

Fig. 2. Construction of the haul road

Haul road information is overlaid using 2D drawing tools; it is then converted to a 2D road system before being subtracted from the base model. Height values for the haul road are obtained by averaging local vertices, and smoothing also takes place along each segment of the road so that its surface remains flat. The road system is then re-triangulated back into the original mesh and textures are applied to both. This process creates a haul road system consisting of a number of straight segments, curves, and multiple-exit junctions, which not only helps to reduce the polygon count but also reduces the computational overhead during simulation.
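The averaging and smoothing steps are not specified further in the paper; the following is a minimal sketch of one plausible implementation, in which each road vertex takes the mean height of nearby base-mesh vertices, and each cross-section of a road segment is then flattened to a single height so the running surface stays level across its width. All names (Vertex, assignRoadHeights, flattenCrossSections) and the search radius are illustrative, not the authors' code.

    #include <vector>

    struct Vertex { double x, y, z; };

    // Average the heights of base-mesh vertices within 'radius' (in plan)
    // of each road vertex and assign the result as the road vertex height.
    void assignRoadHeights(std::vector<Vertex>& road,
                           const std::vector<Vertex>& baseMesh,
                           double radius)
    {
        const double r2 = radius * radius;
        for (Vertex& rv : road) {
            double sum = 0.0;
            int count = 0;
            for (const Vertex& bv : baseMesh) {
                double dx = bv.x - rv.x, dy = bv.y - rv.y;
                if (dx * dx + dy * dy <= r2) { sum += bv.z; ++count; }
            }
            if (count > 0) rv.z = sum / count;  // no neighbours: keep old height
        }
    }

    // Flatten each cross-section of a road segment (the row of vertices
    // spanning the road's width) to its mean height, so the surface is flat
    // side to side while still following the gradient along the segment.
    void flattenCrossSections(std::vector<std::vector<Vertex>>& crossSections)
    {
        for (std::vector<Vertex>& row : crossSections) {
            if (row.empty()) continue;
            double mean = 0.0;
            for (const Vertex& v : row) mean += v.z;
            mean /= row.size();
            for (Vertex& v : row) v.z = mean;
        }
    }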

Fig. 3. Textured pit and haul road

All further objects, their attributes, and behaviours are added to the scenario at this stage by clicking on a 2D plan of the environment; the third (height) dimension is calculated automatically. The addition of objects and the configuration of their behaviours has been kept as simple as possible to reduce the complexity of defining new scenarios. Haulage trucks are added by indicating the start, finish, and waypoints through which they must pass. Excavators are added by defining the segment in which they load and the direction from which trucks approach. Other objects, such as surveyors, parked vehicles, and road signs, are added in plan view and their associated height values are again calculated automatically. Further details of the haul road system can also be configured, including assigning priority at multiple-exit junctions.
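The paper does not say how the automatic height calculation works. A common approach, sketched below under that assumption, is to drop a vertical ray from the clicked plan position onto the terrain mesh and interpolate the height of the triangle it hits using barycentric weights; the function name and types are hypothetical.

    #include <optional>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    // Height of the terrain at plan position (x, y): find the triangle whose
    // 2D projection contains the point and interpolate z barycentrically.
    std::optional<double> terrainHeightAt(double x, double y,
                                          const std::vector<Triangle>& mesh)
    {
        for (const Triangle& t : mesh) {
            double d = (t.b.y - t.c.y) * (t.a.x - t.c.x)
                     + (t.c.x - t.b.x) * (t.a.y - t.c.y);
            if (d == 0.0) continue;                    // degenerate triangle
            double w0 = ((t.b.y - t.c.y) * (x - t.c.x)
                       + (t.c.x - t.b.x) * (y - t.c.y)) / d;
            double w1 = ((t.c.y - t.a.y) * (x - t.c.x)
                       + (t.a.x - t.c.x) * (y - t.c.y)) / d;
            double w2 = 1.0 - w0 - w1;
            if (w0 >= 0 && w1 >= 0 && w2 >= 0)         // point inside triangle
                return w0 * t.a.z + w1 * t.b.z + w2 * t.c.z;
        }
        return std::nullopt;                           // clicked outside mesh
    }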

Intelligent Behavioural Modelling
Truck position and velocity are calculated in real time based on industry-standard rimpull and retarder curves; these, together with factors such as mass and gradient, provide an accurate simulation. Interaction with surrounding objects is also taken into account before updating a truck's position. Trucks attempt to follow user-defined routes based upon the road network that has been created; these are pre-calculated as far as possible by storing an ideal line which the trucks attempt to follow. Trucks take a predefined route through the pit to the load point, where they queue to be loaded before proceeding to the dump point, and the cycle begins again. Trucks also have the intelligence to modify their position, velocity, and acceleration during the simulation according to the state of localised equipment. Figure 4 shows an example of multiple computer-controlled trucks queuing to be loaded. The simplification of the haul network allows extensive detail on the position of each truck within a segment to be monitored, and also allows a truck to look ahead, analyse potential situations, and adjust its behaviour accordingly.
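The published description stops at naming the rimpull and retarder curves, so the sketch below only shows the usual structure of such a longitudinal model: available driving force is interpolated from a tabulated rimpull curve by speed (retarder braking would be handled analogously from its own curve) and opposed by grade and rolling resistance. The names, units, and the rolling-resistance coefficient are assumptions, not the authors' implementation.

    #include <algorithm>
    #include <vector>

    // One point on a rimpull (or retarder) curve: force available at a speed.
    struct CurvePoint { double speed; double force; };   // m/s, N

    // Linear interpolation into a force/speed curve (assumed sorted by speed).
    double lookupForce(const std::vector<CurvePoint>& curve, double speed)
    {
        if (speed <= curve.front().speed) return curve.front().force;
        for (size_t i = 1; i < curve.size(); ++i)
            if (speed <= curve[i].speed) {
                double t = (speed - curve[i - 1].speed)
                         / (curve[i].speed - curve[i - 1].speed);
                return curve[i - 1].force
                     + t * (curve[i].force - curve[i - 1].force);
            }
        return curve.back().force;
    }

    // Advance one truck by dt seconds along its current road segment.
    // 'grade' is the slope as a fraction (rise/run); 'rollRes' a
    // rolling-resistance coefficient, both taken from the road model.
    void stepTruck(double& position, double& speed, double mass,
                   double grade, double rollRes, double throttle, double dt,
                   const std::vector<CurvePoint>& rimpull)
    {
        const double g = 9.81;
        double drive  = throttle * lookupForce(rimpull, speed); // rimpull force
        double resist = mass * g * (grade + rollRes);           // grade + rolling
        double accel  = (drive - resist) / mass;
        speed    = std::max(0.0, speed + accel * dt);
        position += speed * dt;
    }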

Fig. 4. Trucks queuing to be loaded

Training Methods
Trainees may either drive, or be driven by the computer, around a pit. Training is achieved primarily through a technique termed 'hazard spotting', which requires the user to identify a number of pre-determined hazards included in the virtual world. As trainees pass through the pit, they are able to stop the simulation at any time to 'spot' these hazards. Once a hazard is identified, they are asked to classify it in terms of risk and to indicate any corrective action which should be taken. Performance is recorded and responses are assessed against ideal behaviour. Figure 5 shows such a choice being made.

Fig. 5. Risk choice being made

Future Work
Work is currently being carried out to improve the current system and to extend its scope. A proposed addition is a pre-start-up equipment check, allowing trainees to inspect their vehicle before entering the simulation. Planned improvements include the use of multiple monitors to provide additional viewpoints and the use of aerial photographs as textures overlaid on the pit. Development is also under way on applying similar techniques in other industrial environments.

Conclusions
Virtual reality seems to be an ideal way of exposing trainees to hazardous situations without ever putting them at any real risk. Additionally, a large number of potentially rare circumstances can be experienced in a relatively short period of time. However, there is still doubt as to how well these skills transfer to the real world. Work with other vehicle simulators indicates that there can be a significant reduction in accidents [6], so it would be reasonable to expect similar results. The system has yet to receive field testing; however, it has been demonstrated to a large number of mining institutions, whose response has been very positive. Representatives particularly liked the fact that they could create and configure multiple scenarios from existing CAD data quickly and easily. It is anticipated that the system will be commercially available in the next few months.

References
1. Schofield, D., Denby, B. and McClarnon, D.: Computer Graphics and Virtual Reality in the Mining Industry. Mining Magazine, pp. 284-286, Nov. 1994.
2. OSHA: Powered Industrial Truck Operator Training. Internet WWW page, at URL: http://gabby.osha-slc.gov/FedReg_osha_data/FED_19950314.html, 1995 (version current 22nd January 1997).
3. Mueller, E.R.: Simplified Dispatching Board Boosts Truck Productivity at Cyprus Pima. Mining Engineering, Vol. 68, No. 4, pp. 72-76, 1979.
4. Health and Safety Commission: Management of Health and Safety at Work Regulations 1992, Approved Codes of Practice. HMSO, London, 1992.
5. Schofield, D.: Virtual Reality Associated with FSVs, Quarries and Open Cast Vehicles: Training, Risk Assessment and Practical Improvements. Workshop on Risks Associated with Free Steered Vehicles, Safety and Health Commission for the Mining and Other Extractive Industries, European Commission, Luxembourg, Nov. 11th-12th, 1997.
6. U.S. News: Steering Clear of Danger. Internet WWW page, at URL: http://www.usnews.com/usnews/issue/truckb.htm (version current 25th February 1998).

Navigation in Large VR Urban Models

Vassilis Bourdakis

Centre for Advanced Studies in Architecture, University of Bath, UK
Email: [email protected]

Abstract. The aim of this research project is to utilise VR models in urban planning in order to provide easy-to-use visualisation tools that will allow non-experts to understand the implications of proposed changes to their city. In this paper, the navigation problems identified whilst working on large VR city models are discussed, and a "fly"-based navigation mode is proposed and evaluated.

1 Background
The Centre for Advanced Studies in Architecture (CASA) has been involved in three-dimensional (3D) computer modelling for the last six years. In 1991 CASA received a grant to construct a 3D computer model of Bath [2]. The project was supported by Bath City Council, and since its completion the model has been used by the city planners to test the visual impact of a number of proposed developments in the city. The model was created from aerial photographs of the city at 1:1250 scale using a stereo digitiser. It is accurate to 0.5 metre and covers the whole historic city centre, an area of approximately 2.5 x 3.0 km. During 1995, and using similar techniques, the core of London's West End, covering 1.0 x 0.5 km, was also modelled, followed by Gloucester city centre in 1997. Following the hardware and software developments of the last few years, the expansion of the Internet and the World Wide Web (WWW), and current trends in the industry, the Bath and London models were translated and are used non-immersively in VRML as well as in custom-made VR applications [2][3]. The initial problem faced was how VR applications would scale and adapt to visualising a whole city of over three million triangles, as in the case of Bath. The Bath database is, to the author's knowledge, the largest and most detailed produced as yet; the UCLA Dept. of Architecture and Urban Design (AUD) is currently building a model of the entire Los Angeles basin covering an area in excess of 10,000 square miles as part of "The Virtual World Data Server Project", but it is still under construction.

2 Utilising VR Urban Models
The computer models created at CASA demonstrate how computers will be used in the near future by engineers as part of their everyday practice, creating, modifying, and improving our cities online using centrally stored, sharable databases. The aim is not to create yet another Geographical Information System (GIS), although GIS features can be incorporated. Using Internet-compatible technologies such as Java, CORBA, or ActiveX, VR urban models can replace a dedicated GIS system for certain applications (London's West End). It should be noted that, due to the nature of the proposed use of the models, the low-polygon-count, fully texture-mapped approach adopted by more commercial/advertising-oriented projects (Virtual Soma, Virtual LA, etc.) was not feasible.

The Bath model has been used in a variety of ways since it was originally constructed. To date, development control has been the main use, with a number of schemes having been considered. These are normally at the instigation of the local authority, who recommend that schemes be modelled in order to facilitate discussions during the design phase and for presentation to the planning committee. In addition to its use in development control, the model has also been used to widen the public debate on how the city should develop in the future. The model of London's West End has a lower level of detail and was initially used for transmitter signal-propagation experiments by British Telecom (BT). CASA has been using it as a database front end for navigating and mapping information on the city, thus creating the Map of the Future. Gloucester's city centre model, which was commissioned by Gloucester City Council, is used for planning control, similarly to the Bath model.

3 Navigation Problems
Early in the process of creating VR models, the problem of navigation was identified, together with inconsistencies in texture mapping, instancing, materials, indexing, etc. Although most of these problems were solved [9], navigation remains a troublesome experience, not only for occasional users of the VR models but for their creators as well. The main problem in exploring, navigating, and generally working with urban models in a VR environment (both immersive and non-immersive) is orienting oneself: being able to identify areas, streets, buildings, etc. Therefore, a great deal of effort has been put towards making urban models more recognisable. It should be noted that there is a great difference between pursuing realism and aiming for a recognisable virtual environment (VE); a VE does not necessarily imitate reality [5]. Creating a realistic VE of such scale is still not feasible, and in many cases it would be both pointless and inappropriate.

In real life, when a person becomes disoriented in an urban environment, the natural action taken is to scan the surroundings searching for information. As Lynch [14] explains, there is a consistent use and organisation of definite sensory cues from the external environment. Assuming no external help is requested, this activity comprises three steps:
- Look around
- Move around within a small radius
- Wander further away, with the added complexity that the person may subsequently fail to return to the original position.
Failure of the above three steps means that more drastic ones must be taken. This typically involves interaction with other people: asking shop owners, pedestrians, car drivers, etc. for the relevant information. Alternatively, one could check road names against a map, or in extreme cases even consult a compass.

3.1 Current Trends
There have been many attempts, both commercial and theoretical, to create and interact with informational spaces [6]. Techniques have been developed based on both "realistic" and "unrealistic" concepts, but it seems there is no consensus as yet. Putting aside the information-space metaphors used, the navigation metaphors employed can be classified into two main categories: screen-based and world-based. Among the former are sliding, rolling, and examining, whereas the latter include walking with or without physical constraints (gravity and collision detection), flying above the landscape and above the roof line utilising a bird's-eye view of the city, and, borrowing from cinematic terminology, panning and tilting.

Game development is an area in which one can review design decisions: a very efficient, demand-driven, competitive environment enforcing quick, effective development cycles. Early 3D games like Doom and Descent, where the focus was on speed and interactivity, featured fairly simple environments. Each stage had an average of 10-20 rooms/spaces with quite different shapes, textures, and themes in general. Furthermore, the overall settings for these games were not real 3D; players could not share plan co-ordinates (XY) at different elevations. The cues facilitating navigation were plentiful, with the different types of enemies and, in some games, even their corpses adding to the amount of visual information available. Auditory cues were vital in sensing events that took place in secluded areas or simply behind the player's back. In such games, a 2D wireframe plan of the site was the only navigation hint given to the players and was usually enough to get them back on course. In the more recent "action" games like Tomb Raider and Tie Fighter, the following important differences can be identified:
- The player's avatar is fully included in the VE. The player is no longer a spaceship dashboard or a weapon-carrying hand but a fully articulated avatar, giving a better sense of scale and participation [11]
- More advanced navigation aids are used (notably compasses, text narration, lighting, sound, etc.)
- Real-time rendering of texture-mapped animated characters

- Collision detection and gravity
- "Director's" camera movement: the camera zooms and moves around the action (that is, the player's avatar), creating the feeling of a well-directed action movie rather than simple game playing (Tomb Raider). Subsequently, wandering around the rooms becomes a worthwhile experience in itself!
The scale issue, addressed by the full avatar presence, is the most notable improvement, together with the use of perspective and camera movement to follow the action. The above-mentioned points can be, and in a few cases already are, employed in the interfaces of VR urban models. Information displays with the position and orientation of the user visible, as well as custom dashboards featuring compasses, are possible. Additionally, audio data and animated objects can be incorporated.

3.2 "Flying" Modes of Interaction
In this paper, the focus is on the implications of employing a "flying"-based navigation mode. It is argued that, due to the lack of sufficient detail at street level, the "identity" and "structure" of the urban image are much "stronger" from an elevated position, when more distant cues are visible. This is especially true of the CASA-built models, considering that they were created mainly from aerial information; facade surveys were carried out, but the main core of information came from stereo pairs of aerial photographs. The roofline is very accurately represented in geometry and colour without the need for textures or complicated geometrical hierarchies. In many cases, roof-level detail had to be eliminated in order to keep an overall balance within the VE.

Among the problems linked to the "fly" mode is that it effectively removes any degree of immersion by switching the person to map-reading mode (some will argue that it furthermore defeats the reason for having a VE in the first place). Prior knowledge of the city is advantageous, whereas there is a distinct lack of sense of the time and effort needed to travel across a VE. Furthermore, according to Ihde [12], there is evidence of important connections between bodily-sensory perception, cultural perceptions, and the use of maps as navigational aids. He relates the bird's-eye view of flying over a VE to the God's-eye map projection identified in literate cultures. This mode of interaction has definite advantages, although it also introduces an often-unknown perspective of the city. This perspective is more accessible to engineers and architects, who are used to working with scale models of sites, which inherently introduce the bird's-eye view. According to Tweed [18], the relationship between flying and walking modes should be reflected in the orientation of the body position, making flying modes more akin to immersive VR systems. It should be noted that it is possible to add street-level detail in order to improve the amount of information (sensory cues) available by adding:
- Street furniture (lamp-posts, phone-boxes, bus stops, litter boxes, road signs, tarmac markings ...)
- Building detail (textures and geometry)

- Animated elements (vehicles, people, etc.)
- 3D sound and pre-rendered shadows
- Trees, landscaping, flowerbeds, etc.

However, more often than not, time and cost constraints prevail (surveying and modelling are both costly and time-consuming), not to mention the hardware platform limitations [13]. Adding street furniture, landscaping, and animated elements puts a burden on both hardware and software. Frame rate is an issue; from experiments already carried out at CASA, the indications are that 5-6 Hz is the lowest acceptable level for such applications, assuming the key issues addressed above are catered for. However, frame rate is not the most important variable in urban-scale models: a 30 Hz VE of an inferior model lacking the key issues is not a satisfactory solution. It should be noted that urban VEs rely very heavily on visual cues, in many cases ignoring the fact that aural cues could be more powerful and helpful [7]. This can be credited to the fact that most such models are created and used by architects and engineers, who are traditionally not taught to understand and appreciate the importance of sound in an environment, even more so in a VE.

3.3 Limitations of Non-immersive VR
VR models that are used in a team-evaluation environment are usually non-immersive. The main reasons are practical: if all the committee members are to be gathered together in one room, the amount of hardware and cabling running across the floor, the size of the room, and the overall support required make immersive use unfeasible. The cost of the hardware necessary to provide a fully immersive experience for half a dozen planners and other committee members is prohibitive. Finally, the need for interaction and communication while evaluating a scheme, and the need for a common reference point when assessing a particular feature, would be extremely difficult to satisfy in an immersive environment [1]. For such tasks, the Reality Centres that Silicon Graphics Inc. has created are ideal: pseudo-immersive, double-curvature, wide-screen displays with a single operator. On the downside, the assessors' team has to travel to such a site (there are only a handful in the whole UK) and the costs involved are outside the budget of a City Council or local authority.

Nevertheless, there are important navigation advantages to be obtained in a single-user immersive VE as opposed to a multi-user one. One of the problems identified in early experiments is the lack of a concept of time and distance. Walking the streets of a virtual city is an effortless exercise, in contrast to the real experience. It is possible to fly over a whole city model in a matter of seconds, and that can be instrumental in losing both orientation and a sense of presence. Recent attempts in immersive VEs have tried to introduce body interfaces to navigation. Shaw [16] used a bicycle as the metaphor for navigating in his installation The Legible City: the pedalling speed sets the speed of movement, whereas the handlebars determine the direction. Another metaphor, used with movement-tracking devices, is that of walking on the spot [17]. The pattern of body movement is related to pre-computed patterns and determines whether the user walks, runs, or steps back.

Char Davies has created a synthetic environment called "Osmose" [8], exploring the relationship between exterior nature and interior self, in which the immersant's body movements trigger series of events that position the body within the VE and even alter the environment itself.

Among the first techniques employed in the Bath model in order to improve sensory cues from the environment was the use of a wide-angle lens, the argument being that this way a greater part of the city is visible and thus more information is transmitted to the user. However, the results of a small-scale case study with university students of architecture and other members of staff familiar with the city were discouraging. Architects identified the problem as one of "wrong perspective", compressing the depth of the image and generally creating a "false" image of the city. Others could not easily identify the problem but found the environment confusing nevertheless. Consequently, it was decided to use only the "normal" lens on all VR projects, although it does limit the perceived field of view, which in real life is much wider than 45 degrees. Experiments on level-of-detail degradation in the periphery of head-mounted displays (HMDs) [19], as well as on eye movement and feedback [4], demonstrate that more efficient VR interfaces can be achieved (compared to non-immersive ones) without necessarily hitting the main VR bottleneck: CPU capability.

Another problem faced in non-immersive VR is that of the direction of movement versus the direction of sight. Due to the two-dimensionality of the majority of input devices used in non-immersive VR, it is assumed that the user looks in the exact direction of movement. This is quite true in most cases, but when one is learning and investigating a new environment, movement and direction of viewing should be treated as two individual variables. Immersive VR headsets with position- and orientation-tracking mechanisms are again the easiest and most intuitive solution to this problem. The Variable Height Navigation Mode (VHNM) is therefore introduced as another solution to the problem.

4 Variable Height Navigation Mode
The VHNM is based on the premise that at any given position in an urban model there must be a minimum amount of information (sensory cues from the external environment) available to assist navigation. This can be achieved by adding enough information to the model, although this is rarely an option, as discussed earlier. Alternatively, it can be achieved by varying the height of navigation according to the amount of sensory cues available at any given position. As an example, a wide, long street with a few landmarks will bring the avatar down close to street level, whereas a narrow, twisted street in a housing area will force the avatar high above roof level.

4.1 Theoretical Model
In the real world, the sun, the wind, a river, the sound emitted from a market or a busy street, a church, a building, a street, a road sign, or a set of traffic lights are among the sensory cues accessible to and used by people. Furthermore, each person gives a different importance to each of them, making the whole process of classifying and evaluating visual cues much more difficult. In a VE, which is generally much less densely occupied and furnished by the above-described elements, classification is relatively easier. Although the user must be able to rank the relative importance of the various cues available, the designer of the VE can decide with relative safety what constitutes a sensory cue within the VE.

The theoretical model proposed uses a series of gravity-like nodes, or attractors [7], that pull the avatar towards the ground when it approaches an area with a high density of sensory cues. Similarly, in low-density areas a global negative gravity pulls the avatar towards the sky until there are enough sensory-cue nodes visible to create equilibrium. It should be noted that the attractors are not true gravity nodes, since they can only affect the elevation of the avatar and not its position in plan.

The first issue that emerges is that of the cues' visibility. In order to identify the visual cues available from each position, the field of view and the direction of viewing are considered. A ray-tracing algorithm is then employed to assess visual contact from the current position. Considering the size of each object, a series of rays must be calculated to establish the percentage visible from the current position. The relative importance of the visual cues is essential if the model is to be successful. A general ranking can be carried out by the world designer based on size, form, prominence of spatial location, historic value, user awareness, etc. However, the navigation mode proposed should be flexible enough to accommodate the particular needs of its users. Locals will use completely different cues (depending on the culture, these could be pubs, churches, prominent buildings, etc.) from first-time visitors, who will probably opt for the ones described in their guide books and the ones that possess the main landmark characteristics as defined by Lynch [14].

Another variable relates to the amount of visual cues users have received over time while navigating in the VE: the higher the amount, the better the mental image of the city they have created. According to Lynch [14], a sequential series of landmarks, where key details trigger specific moves, is the standard way that people travel through a city (p. 83). The VHNM keeps track of the number of visible cues, their distance from the user's path, the speed and direction of movement, and finally the time each cue was visible. All this can recreate the path followed, and should enable the prediction of areas of potential problems and the adjustment of the current elevation accordingly. In the event of orientation loss, it would be possible to animate back to a position rich in visual cues, a kind of 'undo' option. The main question in animating back to a recognisable position is whether the user should be dragged backwards (in a process exactly the reverse of the one they followed) or whether the viewing direction should be inverted so that a new perspective of the city is recorded. The argument for the former is that the user will be able to relate directly to actions taken only minutes earlier, whereas the latter has the advantage of showing a new view of the city and the disadvantage that the process of memorisation may break down.

Consequently, the VR application can be trained over a period of time and eventually be able to react and adjust to the habits and patterns followed by each user. Keeping track of the time spent in particular areas of the VE will enable the computer to adjust the avatar's position, on the assumption that recognition and memorisation will strengthen the more time one spends at a particular place within the model. The tilt of the viewing direction should also be briefly considered. Walking at ground level and looking straight ahead is what people are used to, but when one is elevated twenty, thirty, or more metres above ground level, the perspective distortion introduced by tilting and looking downwards should be considered. As described earlier, an often unknown perspective of the city is introduced, and the effects it may have on different users should be carefully evaluated.

In conclusion, the theoretical VHNM model proposed would be quite difficult to implement in real time with existing computing power in large VR urban models. It is currently an extremely difficult task to structure the geometric database alone; setting up all the ray-tracing calculations and the fairly complex set of rules described above would seriously affect the performance of any VR application. However, some problems may be alleviated by reorganising the scenegraph, although this conflicts with the general concept of spatial subdivision of large VR models and their organisation into Levels of Detail, as discussed extensively elsewhere [2],[3].
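As a purely illustrative reading of the attractor model described above (the paper gives no equations), the elevation could be found by descending from a maximum height while the visible, distance-attenuated cue weight still outweighs the global negative gravity. The visibility test below is a stub standing in for the ray-tracing step, and every name and constant is an assumption, not part of the published model.

    #include <cmath>
    #include <vector>

    struct Cue {
        double x, y;      // plan position of the sensory cue
        double weight;    // designer-assigned importance
    };

    // Stub for the ray-traced visibility test described in the text: returns
    // the fraction of the cue visible from (x, y, z). In the real system this
    // would ray-trace against the city mesh; here it is a crude placeholder.
    double visibleFraction(const Cue& cue, double x, double y, double z)
    {
        (void)cue; (void)x; (void)y;
        return z > 10.0 ? 1.0 : 0.5;
    }

    // Avatar elevation at plan position (x, y): descend from maxHeight while
    // the downward pull of visible cues exceeds the upward "negative gravity";
    // the first height where it does not is the equilibrium.
    double equilibriumHeight(double x, double y,
                             const std::vector<Cue>& cues,
                             double negativeGravity,
                             double maxHeight, double minHeight,
                             double step = 1.0)
    {
        for (double z = maxHeight; z > minHeight; z -= step) {
            double pull = 0.0;
            for (const Cue& c : cues) {
                double dx = c.x - x, dy = c.y - y;
                double dist = std::sqrt(dx * dx + dy * dy) + 1.0; // avoid /0
                pull += c.weight * visibleFraction(c, x, y, z) / dist;
            }
            if (pull < negativeGravity)   // not enough cues: stop descending
                return z;
        }
        return minHeight;                 // cue-rich area: near street level
    }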

Fig. 1. VHNM mesh on the Bath model. Typical view

Finally, it should be pointed out that the focus of this paper is limited to visual cues. Experiments are still to be carried out regarding the employment of auditory cues in an urban VE.

4.2 Model Implemented
Having described the ideal system, an attempt is made to put it into practice. Bearing in mind the difficulties in implementing the original concept, a collision-detection-based navigation mode is initially proposed (Fig. 1). A transparent, and thus invisible, mesh is introduced, "floating" above the urban model. The peaks of this mesh are where the minimum amount of sensory cues is available, so in a way it is an inverted 3D plot of the sensory cues against the 2D urban plan. Using a 2D device, such as a mouse, it is possible to navigate in the VE whilst collision detection against the invisible mesh determines the elevation of the viewer (Fig. 2).

Fig. 2. VHNM views on a. high and b. low visual cue areas
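A sketch of how such a collision query might look, assuming the VHNM mesh is stored as a regular height grid over the city plan: the viewer's elevation at (x, y) is the bilinearly interpolated mesh height, with the user-controlled Z shift and Z scale (described below and in Fig. 3) applied. The grid layout and all names are assumptions made for illustration, not the authors' data structure.

    #include <algorithm>
    #include <vector>

    // VHNM mesh stored as a regular grid of heights over the city plan.
    struct HeightGrid {
        int nx = 0, ny = 0;          // grid resolution
        double x0 = 0, y0 = 0;       // plan origin
        double cell = 5.0;           // cell size (the five-metre grid)
        std::vector<double> h;       // nx * ny heights, row-major

        double at(int i, int j) const {
            i = std::clamp(i, 0, nx - 1);
            j = std::clamp(j, 0, ny - 1);
            return h[j * nx + i];
        }
    };

    // Viewer elevation at plan position (x, y): bilinear interpolation of
    // the mesh, then the on-screen Z shift and Z scale controls are applied.
    double viewerElevation(const HeightGrid& g, double x, double y,
                           double zShift, double zScale)
    {
        double fx = (x - g.x0) / g.cell;
        double fy = (y - g.y0) / g.cell;
        int i = static_cast<int>(fx), j = static_cast<int>(fy);
        double u = fx - i, v = fy - j;
        double hBottom = g.at(i, j)     * (1 - u) + g.at(i + 1, j)     * u;
        double hTop    = g.at(i, j + 1) * (1 - u) + g.at(i + 1, j + 1) * u;
        double height  = hBottom * (1 - v) + hTop * v;
        return zShift + zScale * height;   // shifted / scaled mesh (Fig. 3)
    }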

Early in the development stage, it was decided to represent the VHNM mesh as a visible five-metre wireframe grid, giving users a reference point for the height they are at and an extra directional grid for orientation. Consequently, the VHNM mesh clearly denotes which areas of the model are lacking in sensory cues and, by creating visible peaks over them, it actually presents cues of its own to aid orientation and exploration.

Following extensive testing, it was concluded that, for the Bath model, areas rich in sensory cues can be successfully explored at a height of 25 to 35 metres above street level (10 to 15 metres above the buildings' roofline). For areas that proved problematic in the initial navigation experiments, 40 to 50 metres above street level produced good results. It was attempted to keep the ratio of height variation to distance as low as possible to avoid user confusion and to make navigation and movement smoother. It should be noted that the values mentioned above were satisfactory for this particular urban model, but it is unlikely they will be suitable for a high-rise or very densely built area. Experimentation will provide the right values, and most likely a mathematical model will be developed once the VHNM is tested on the London West End and Gloucester city centre models, also developed at CASA.

In order to accommodate the different needs of varying groups of users, the VHNM mesh can be shifted along the height (Z) axis according to the familiarity of the user with the urban model presented (Figure 3c,d) using on-screen controls. It is also possible to alter the scale of the mesh along the Z-axis; doing so, the viewer drops closer to the ground in areas rich in sensory cues and flies higher over poor areas (Figure 3a,b). The first experiments carried out using the manually created VHNM meshes were quite successful in reducing the amount of confusion and orientation loss. However, no statistical analysis of the data obtained has been carried out yet, since controlling such an experiment proved extremely difficult. Consequently, most of the experimentation was used for fine-tuning the technique and the variables involved. A series of tasks is currently being developed in order to create a more controlled environment, enabling conclusions to be drawn on the effectiveness of the proposed method and creating a test bed for the evaluation of future work.

Fig. 3. VHNM Editing options, a. normal, b. scaled up, c. shifted down, d. shifted up

Among the limitations of this implementation is that the computationally expensive ray-tracing calculations needed to create the mesh accurately were approximated manually in advance. Consequently, the already complicated and difficult job of the VR application is not stressed any further by having to compute gravity nodes and search for cues in the VE. However, generating this mesh is a very laborious process and difficult to automate. Furthermore, the VHNM mesh completely disregards the avatar's viewing direction. In most positions within an urban model, the cues available are strongly related to the viewing direction: looking north may give no clues whatsoever, whereas turning 180 degrees and facing south may reveal the position of the sun (at least in the northern hemisphere) and a landmark that will let you pinpoint your position on a map. Users have no option to classify the relative importance of the types of cues available according to their own needs. It is a pre-calculated VE that one is only allowed to navigate in, which conflicts with the general conception of the service computers should be providing [15]. The next step will be to incorporate the visual-cues-versus-time variable into the model and to investigate to what extent the behaviour of a visitor can be predicted and augmented. Following that, auditory cues will be included, while work will be carried out into finding ways to simulate the proposed theoretical model more accurately.

5 Conclusions
The proposed VHNM can solve certain problems in urban VR environments and, if implemented fully, could be a very successful navigational tool. However, bearing in mind the complexity of the proposed model, an immersive VR environment would supplement a partial implementation of the VHNM more economically, in terms of cost, ease of programming, and the time needed to construct and fine-tune the VR system. The implementation problems of the proposed VHNM can be tackled with different degrees of complexity and accuracy according to the capabilities of the VR platform used.

References
1. Billinghurst, M. and Savage-Carmona, J. (1995) Directive Interfaces for Virtual Environments. P-95-10, HITL, University of Washington.
2. Bourdakis, V. and Day, A. (1997) The VRML Model of the City of Bath. Proceedings of the Sixth International EuropIA Conference, europia Productions.
3. Bourdakis, V. (1996) From CAAD to VRML: London Case Study. The 3rd UK VRSIG Conference, Full Paper Proceedings, De Montfort University.
4. Brelstaff, G. (1995) Visual Displays for Virtual Environments - A Review. Proceedings of the Framework for Immersive Virtual Environments Working Group.
5. Carr, K. and England, R. (1995) Simulated and Virtual Realities: Elements of Perception. Taylor and Francis, London.
6. Charitos, D. (1996) Defining Existential Space in Virtual Environments. Virtual Reality World 96 Proceedings.
7. Charitos, D. and Rutherford, P. (1997) Ways of Aiding Navigation in VRML Worlds. Proceedings of the Sixth International EuropIA Conference, europia Productions.
8. Davies, C. and Harrison, J. (1996) Osmose: Towards Broadening the Aesthetics of Virtual Reality. ACM Computer Graphics: Virtual Reality, Vol. 30, No. 4.
9. Day, A., Bourdakis, V. and Robson, J. (1996) Living with a Virtual City. ARQ, Vol. 2.
10. Fritze, T. and Riedel, O. (1996) Simulation of Complex Architectural Issues in Virtual Environments. Virtual Reality World 96 Proceedings.
11. Gibson, J.J. (1986) The Ecological Approach to Visual Perception. London.
12. Ihde, D. (1993) Postphenomenology. Northwestern University Press, Minnesota.

13. Kaur, K., Maiden, N. and Sutcliffe, A. (1996) Design Practice and Usability Problems with Virtual Environments. Virtual Reality World 96 Proceedings.
14. Lynch, K. (1960) The Image of the City. MIT Press, Cambridge, Mass.
15. Negroponte, N. (1995) Being Digital. Hodder & Stoughton.
16. Shaw, J. (1994) Keeping Fit. @Home Conference, Doors of Perception 2, Netherlands Design Institute, Amsterdam.
17. Slater, M., Usoh, M. and Steed, A. (1995) Taking Steps: The Influence of a Walking Metaphor on Presence in Virtual Reality. ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 2, No. 3.
18. Tweed, C. (1997) Sedimented Practices of Reading Design Descriptions: From Paper to Screen. Proceedings of the Sixth International EuropIA Conference, europia Productions.
19. Watson, B., Walker, N. and Hodges, L.F. (1995) A User Study Evaluating Level of Detail Degradation in the Periphery of Head-Mounted Displays. Framework for Immersive Virtual Environments FIVE'95, ESPRIT Working Group 9122, QMW University, London.
World Wide Web sites mentioned in the text:
VRML2 Spec:
Bath Model:
London W.E.:
Virtual L.A.:
Virtual Soma:

Art and Virtual Worlds

Olga Kisseleva

Today's world is characterized by an explosion of new communication technologies and of unique uses which modify our behaviour and our thought processes. Confronted with this evolution, we are waiting for a new form of life to emerge corresponding to these technologies. A new art form corresponds to this existence, not merely through the tools that it uses but through the questions that it raises. We can thus note that, with the intervening changes in society, scientific progress, and the change in the general relation between art and life, the artist's role today is radically new. Contemporary art has changed its function in relation to modern art, in the sense that it resembles more and more a science. Since its conception, art has appeared as a representation of an ideal of life, relative to one time. Contemporary art does not reflect life; it prefigures it. At present, thanks to the new digital and communication technologies, art itself can become a simulation of reality: an immaterial digital space which we can literally penetrate and which, as Fred Forest says, is no longer a "self-defining form. It is itself a picture, formalized numerically. Dimensional space is no longer an intangible substratum. It is a digital object in interaction with other created objects, having its own reality."

Art thus exists as the counterweight and the complement of science, so that the two together complete the human process of discovery. The artist creates from concept to analysis, while the scientist works from analysis to concept. The artist elaborates a concept as a starting point from which he participates in the construction of a world. If he succeeds in proving the reality of his concept, it will develop to become part of the collective conscience. On the contrary, the scientist analyzes, researches, studies, and experiments before formulating his concept. In this way, science reveals our knowledge of the world, exploring it step by step, while art interests itself in the experience of the world, offering us concepts that permit us to glimpse immediately another, more intuitive, vision of the universe.

Finally, the questions that artists ask themselves in their research are extremely close to those of scientists. Communication artists, and researchers working on the problems of communication, are confronted, according to Pierre Moeglin, by the same question: in our modern age, to what do we owe this central position ceded to video, to the computer, and to telecommunications? Certainly not, they answer, to the sophistication of these technologies, which differ from previous techniques in their application but not in their inherent nature. Furthermore, we can say that, while pushing their limits to phenomena that are not all new (several were already germinating during the first industrial revolution), uses are made of them that systematically reveal, better than before, the wider dimension of diffusion that includes our relation to others and to ourselves, as well as the access that we have to work, to leisure, and to culture.

The vocation of art, like that of science, has changed. Art's role is no longer to entertain but to communicate something about the nature of the human mind: the scientist teaches from his scientific paradigm, the artist from his vision of the world. The different and unique nature of cyberspace permits us simultaneously to approach and to escape from reality. Today we are not content with computer-driven animation but rather attempt to recreate life, to breathe independent life into these "beings" of pixels and algorithms. It is about giving birth to "virtual" creatures capable of learning and evolving.

Virtual reality, in relation to the real world, implies an isolation. It seems the most obvious expression of a man/machine system in which man plays an integral part. In this universe, movements, or even physiological data and human brainwaves, can be controlled by sensory stimulations used like commands. It is a device placed in contact with our body, in our senses, on and maybe one day even under our skin, directly in our brain, if one succeeds in developing brainchips. Some researchers even speak about connecting the computer system directly to the human brain. Our senses will then be pacemakers that activate our brain by their signals transformed into electrical activity.

More and more realistic "clones" shall circulate throughout the network, with all manner of functionality and with an impressive delegation of power devolving directly from their "game-masters". The mixture of reality and virtuality applies directly to the creation of clones. Thus, the animation of virtual actors in real time can be of use in domains as different as teleworking, videoconferencing, virtual communities, multi-participant games, and 3D animation. These actors are derived from real faces, then sculpted in 3D before finally being cloned. The animation of clones is then produced in real time: the user's face directly drives his duplicate with the help of a camera and image processing, while the upper body is directed with the help of two sensors, one on each hand. The user can thus communicate with other clones, immersing himself and interacting within a 3D universe. At the same time, avatars install in the network a new aesthetic, which would be that of Tex Avery with a touch of Tolkien and a personality like the Simpsons. Masters identify more and more with their virtual image, so that avatars impose themselves in their daily life and little by little transform their minds as well as their own bodies. The MUDs and MOOs, which developed in the same way as the Internet, have evolved in the same sense. Their virtual reality, founded at first exclusively on text, has moved towards conviviality and playful exchanges, with 2D and 3D representations of users in the form of avatars.

Artificial actors have also become the object of various research efforts, in scientific laboratories and in artists' studios. Real-time interactivity with these actors became possible through the tele-animation of characters from the movements and facial expressions of real actors equipped with datasuits and sensors.

Currently, technical progress, the rhythm of life, and especially the new communication technologies are changing the body. The body is being replaced by prostheses; the usefulness of its senses, sometimes with only one left activated, is deconstructed. The body is seized and transposed into the virtual world, it is cloned, and finally it is substituted by a new virtual body, the avatar. The body therefore loses its consistency, its substance, its value, and even its biological role; it turns into an interface.

Carnal art, created by Orlan, interrogates the status of the body in the age of genetic manipulation, as well as its future, which turns towards a ghostly appearance. The virtual, for Orlan, does not consist only in multiplying the body and being telepresent, but also in accepting the reconfiguration and the sculpting of the body. For her seventh operation-surgical-performance, implants normally used to heighten cheekbones were placed on either side of her forehead, creating two bumps. These bumps, being totally virtual, "avatarized" the body of Orlan.

As we have seen, the imitation of gestures and even of some human senses is easy today. But the virtual also makes possible the control of the senses. Indeed, it reintroduces a kind of individual sovereignty, even though it is rather illusory. Of course, our sensory perception is necessarily coded by our cultural, social, or personal history. Neither uncertain nor genetic, it is the result of the modelling to which every society proceeds. However, in virtual reality this modelling proves to be more restricting, more precise. This shows that sensory control is there more than a fantasy. Hence the reasoning: "The sovereignty of the individual is perhaps then restored. But at the same time, his liberty is infinitely constrained since, being no longer prey to the confusions of the world or of encounters, he can no longer feed himself on their wealth."

Las Meninas in VR: Storytelling and the Illusion in Art

Hisham Bizri, Andrew Johnson, and Christina Vasilakis

Electronic Visualization Laboratory
University of Illinois at Chicago, Chicago, IL 60607, USA
(312) 996-3002 voice, (312) 413-7585 fax
[email protected], http://www.evl.uic.edu/chris/meninas/

Abstract. Las Meninas is a virtual reality (VR) artwork based on the painting of the same name by Spanish painter Diego Velazquez. Created for the CAVE(tm), Las Meninas attempts to establish a language of art in virtual reality by placing VR in the realm of storytelling; storytelling that is not simply formalistic and decorative, but also psychological. The viewer confronts a narrative cryptogram which can be deciphered at multiple levels of meaning as he seeks to explore the enigmas inherent in the painting by literally entering into it, both physically and psychologically. This allows for the suspension of disbelief or the illusion in art, the quintessential rule of art.

Keywords: ontological authenticity, kinaesthetic/synaesthetic stimulation, immersion.

1 Introduction

Many people who have experienced virtual reality (VR) for the first time will attest that they were amazed by the unique perceptual experience but that their emotional and thoughtful involvement was minimal. This is not surprising, since many virtual reality works are in large part exercises in the visual arts and not intended as part of a meaningful narrative where form and function are interconnected. To achieve such a narrative, a language of art in VR needs to be established. The depiction of Las Meninas in VR attempts to establish such a language by placing VR in the realm of storytelling that is not simply formalistic and decorative but also psychological. The quintessential rule of art is its ability to suspend disbelief and create the illusion of art. Through convincing representation, the illusion in art manifests itself, forcing the viewer to become psychologically involved. Las Meninas involves the viewer in a psychological narrative imagined as a complex web of signs. The viewer confronts a narrative act which can be deciphered at multiple levels of meaning. Each deciphered term embodies a network of signs leading to other signs. If the viewer fails in his initial task, he confronts another set of signs to choose from, which will lead to other sets. The viewer's psychological inquiry into the narrative, and the attempt to decipher its cryptograms, places the emphasis on reactions to the virtual world and not on the virtual world itself.

Fig. 1. Las Meninas as painted by the Spanish painter Diego Velazquez in 1656.

For the work to function at the psychological level and achieve the illusion in art, the viewer must believe in the nature of representation the virtual world portrays. The primary concern of Las Meninas in VR is focused upon the mechanism of certain effects and not their causes; that is to say, the viewer explores the relationships between individual elements. However, for this investigation to occur, two elements are necessary: ontological authenticity and kinaesthetic/synaesthetic (k/s) impact. When these elements are fulfilled, the illusion in art in VR becomes possible.

2 Storytelling Specificity in VR

Las Meninas, or The Maids of Honor (1656), by the great Spanish painter Diego Velazquez, shown in Figure 1, challenges the viewer with its allegorical subject matter and enigmatic mise-en-scene [4], [7]. From the outset, the viewer confronts the artist's canvas, which is forever hidden from view. The viewer desires to see what is hidden and at the same time witnesses a mise-en-scene which hides within itself multiple allegorical meanings: the pictures decorating the walls of the room in Velazquez's composition, subjects from Ovid's Metamorphoses painted by Mazo after the originals by Rubens [1], especially the two pictures hanging high on the wall over the mirror, Pallas and Arachne and Apollo and Pan; the mirror in the black frame at the back of the room, which reflects the half-length figures of King Philip and Queen Mariana under a curtain but nothing else; the mysterious light shining in from the upper right side of the room; the magical stillness of the room and the people in it, as if photographed, forcing the viewer to believe himself to be actively present at the scene; the painter himself, whose dark form and lit-up face present the visible and the invisible [3]; the chamberlain José Nieto standing in the background in an open doorway; the imaginary space lying outside the picture frame, where Velazquez, the Infanta, the maid, the girl dwarf, and José Nieto are looking, from different points, at the sovereigns, who are in theory standing next to the viewer; and so forth.

Allegorical subject matter and enigmatic mise-en-scene work together in Velazquez's painting to dramatize the inner focal point of the realm of the painting and the outer focal point of the realm of reality, the viewer's position. The viewer is at once seeing and being seen. He constantly oscillates between objective realism and the subjective paradoxes arising from the emblematic interpretations which the overall mise-en-scene lends itself to. Vision is no longer fixed on a single vanishing point but is now dispersed over multiple planes of form, function, and subjective meaning. The painting raises questions about the nature of representation and subjectivity in a unique way, rarely matched in the history of visual art.

In the CAVE [2], the painting of Las Meninas becomes the virtual reality of Las Meninas. The viewer is able not only to explore certain problems pertaining to the nature of representation and subjectivity but also to face further enigmas. The ten-foot-tall painting, which matches the size of one of the CAVE's large projection screens, becomes an immersive environment where several people can experience the work simultaneously, as shown in Figure 2. The theoretical questions the painting raises become tangible and empirical once placed within the boundaries of VR. In other words, the painting's fixed and traditional nature of representation and subjectivity takes on a dynamic and physical aspect once the center of vision is dispersed in the medium of VR.

Fig. 2. Two photographs of viewers within Las Meninas in the CAVE.

Las Meninas in VR approaches the question of representation and subjectivity from various angles. Ontological authenticity and k/s stimulation establish a frame of reference from which these questions ensue. The frame of reference consists of four characteristics: first, the fusion of optical and virtual images; second, the creation of multiple guides, both visual and aural; third, the creation of a total environment and the double articulation of time; fourth, the thematic shift from the formalistic to the psychological. Before discussing the frame of reference and how it draws the viewer into the narrative act in the virtual environment, a brief description of the experience is necessary.

3 The Virtual Experience

Las Meninas is primarily designed to run in the CAVE(tm), a multi-person, room-sized virtual reality system developed at the Electronic Visualization Laboratory (EVL) of the University of Illinois at Chicago. The CAVE is a ten-by-ten-by-ten-foot room constructed of three translucent walls. A rack Onyx with two Infinite Reality engines drives the high-resolution stereoscopic images, which are rear-projected onto the walls and front-projected onto the floor. Lightweight LCD stereo glasses are worn to mediate the stereoscopic imagery. Attached to the glasses is a location sensor. As the viewer walks within the confines of the CAVE, the correct perspective and stereo projection of the environment are updated. This presents the user with the illusion of walking around or through virtual objects. Four speakers mounted at the top corners of the CAVE provide audio. The user interacts with the environment using the 3D wand, a simple tracked input device containing a joystick and three buttons. The wand enables navigation around the virtual world and the manipulation of virtual objects within that world.

As Las Meninas begins, the viewers find themselves situated in relation to the work in much the same relationship as they have with the painting. They are unable to move around the work and cannot investigate what is on the canvas. An optically generated actor, or avatar, playing the role of Velazquez enters the studio through the doorway, as shown on the left in Figure 3. The actor pauses in front of the hidden canvas, becoming the two-dimensional painted image of Velazquez. A recorded narration sets the scene and contextualizes the narrative, while a prelude from The Well-Tempered Clavier by Johann Sebastian Bach plays in the background. The narrator asks the viewer to paint in the rest of the characters, as positioned in the original painting, as shown on the right in Figure 3. This allows the viewer his first attempt at interactivity, as the wand becomes a brush used to paint the characters. The narrator then describes the painting, telling the viewer who the various characters are, and points out the various enigmas, as shown on the left in Figure 4. The two-dimensional painted characters existing in a three-dimensional world now become themselves life-sized three-dimensional characters, as shown on the right in Figure 4.

Fig. 3. On the left, as the experience begins, an actor portraying Velazquez enters the virtual studio and takes his place in front of his canvas. On the right, the viewer completes Velazquez's composition by "painting" in the rest of the life-sized characters using the CAVE's wand.

Fig. 4. On the left, the narrator discusses several enigmas in the painting. For example, the mirror at the back of the room is brought forward so the viewer can get a better look at what it is reflecting. On the right, the 2D characters in the painting become life-sized 3D characters in VR. The viewer can now walk around the studio and see the scene from any point in the room.

The Infanta Margarita moves out from her place towards the viewer and then brings him into the scene, allowing him to see the scene as Velazquez would have seen it. This is the viewer's initial entrance into a world previously unattainable. A prerecorded narration informs the viewer that he is now free to explore this world. Moving about the studio, the viewer can observe the scene from any point in the room, or from any character's point of view. Looking out the windows of the studio, the viewer sees two non-Euclidean spaces of revolving panels. The first shows figures who influenced Velazquez's time (Bacon, Descartes, Galileo, Cervantes, and others). The second shows the representation of marriage in paintings such as Jan van Eyck's Arnolfini Marriage, which shaped the way Velazquez thought of the use of mirrors and reflections to present the visible world [6].

Leaving the studio via the doorway where José Nieto stands, the viewer enters a corridor with paintings by Mazo, Velazquez's apprentice, after Rubens, hanging on the walls. This corridor leads to a tower. Echoing the suggestion that Velazquez had access to a tower from which he observed the heavens, the viewer climbs up the steps of this tower accompanied by music from Bach's St. Matthew Passion. He enters a room with telescopes, a three-dimensional triptych, and a large painting of Christ by Velazquez himself, as shown on the left in Figure 5. Leaving this room, the viewers enter a passageway with paintings by Picasso and the Russian painter Zverev after Las Meninas. As the viewer moves from the 17th to the 20th century through this transitional corridor, the music shifts from that of Bach's fugues and preludes to that of Ligeti and Schnittke, as shown on the right in Figure 5.

Fig. 5. On the left, as Velazquez had a tower for observing the heavens, the viewers can climb a virtual tower of their own to this observational room. On the right, the viewer walks from the 17th to the 20th century down a hallway lined with paintings by Picasso and a Russian painter after Las Meninas.


A narrated description of the painting includes a quote from Picasso that Las Meninas suggested to him the entrance of fascist soldiers into the studio of Velazquez with a warrant for his arrest. Picasso's interest in Las Meninas has to do with its central theme of the painter and his relationship to his models, and also with its profound meditation upon the historical and societal preconditions of artistic activity and power relations. In a new space, the viewer finds himself amidst several elements playing on this theme of power and domination, as shown in Figure 6. Television sets hanging in mid-air juxtapose archival film footage from 1936 of General Franco, of Hitler in Triumph des Willens, and of Chaplin in The Great Dictator. Large two-dimensional images of Franco and posters from the Spanish Civil War adorn the walls in front of murals of Picasso's Guernica. Turning around, one sees a figure walk into this scene from out of one of Picasso's study paintings after Las Meninas. Finally, the viewer returns to the studio, seeing the scene from the perspective of José Nieto at the back of the room. Several possible paintings are shown on the blank canvas, as in Figure 7. The return to the initial space is marked with new questions engendered by the multilayered perspectives the journey has revealed to the viewer. Life at the court of Velazquez was strictly hierarchical. The composition itself preserves this hierarchy and the marginalization of the painter. Was Velazquez trying to correct this hierarchy through allegorical and symbolic allusions about the nature of representation and subjectivity? What was it that he was painting on the hidden canvas? Was Velazquez painting the King and the Queen, the Las Meninas itself, or figures of everyday life, and thus rebelling against the King and the Queen and the entire tradition of court painting? These relationships are all scrutinized in the medium of VR.

4 The Illusion in Art in VR

From the outset, the viewer's conscious and unconscious mind is at work, making inferences and reading the narrative cryptograms in Las Meninas. This reading is possible because the viewer believes in what is seen and can identify with it. It is indeed in what is called ontological authenticity that the first rule of the illusion in art, the ability to suspend disbelief, manifests itself. For the illusion in art to become a sufficient condition in VR, a kinetic/synaesthetic (k/s) stimulation of the sensory-motor schema is required. Both elements fulfill the requirements for achieving the illusion in art as well as formalistically enhancing the meaning of the thematic narrative plot. Ontological authenticity refers to the illusionistic ability of the three-dimensional images to show an authentic representation of reality. Even if these images are unfamiliar to the human eye, they have to be extrapolated from the known in order to achieve a convincing representation. The kinetic impact in VR is a result of the viewer's continuous navigation and the movement of the image itself. The synaesthetic impact is a result of the seamless interaction between the auditory and the visual elements.


Fig. 6. Picasso saw issues of power and domination in Las Meninas, and the viewer can experience a 20th-century interpretation.

Las Meninas manifests ontological authenticity and k/s stimulation in many ways. The viewer is not only immersed in the three-dimensional VR images but also believes in the real images of the avatar, the studio, the tower, the passageways, and the various historical and political rooms. Furthermore, the mise-en-scene, location sound, and the use of the music of Ligeti and Schnittke are orchestrated in order to achieve genuine realism and enhance the emotional participation of the viewer in the act of revelation. The orchestration of visual and auditory elements is often abstracted and made unfamiliar in order to intensify and stimulate the viewer's mental participation. Abstraction in Las Meninas is always extended or extrapolated from the known. Therefore the viewer is often immersed in three-dimensional images and sounds which are estranged, i.e., on the one hand they appear and sound concrete and real, but at the same time the viewer knows they are synthetic and aesthetically constructed. The viewer has no illusion about what he sees, but believes in it because it is extrapolated from the known and confronted with real imagery. Ontological authenticity is therefore the layer upon which k/s stimulation is built to achieve the illusion in art. In Las Meninas everything about the journey is at first sight concrete, in order to draw the viewer into the narrative through the use of optically generated images. As the story proceeds, however, the integration/interaction of long navigations, blurred and hallucinatory landscapes, non-Euclidean spaces, accelerated and decelerated motion, sudden shifts from color to black and white, the mix of 17th-century tonal music and 20th-century atonal music, bizarre and distorted decor, labyrinthine structure and parabolic style, temporal pressure and spatial discontinuities, and the use of archival film footage in a virtual environment create a cognitive impact in the viewer. This impact dramatizes the journey, infuses the overall narrative with an oneiric feel, and engages the viewer in the action plot. The viewer constantly alternates between the space of the representational and that of the surreal, with the latter grounded in the representational.


Fig. 7. Back in the studio where the experience began, the viewer sees, from this new perspective looking back, several possibilities turning on the canvas. The end scene.

It is in this space that the suspension of disbelief and the illusion in art occur. It is there that aesthetics and psychology become intertwined. In the following sections we will elaborate on how ontological authenticity and k/s stimulation in Las Meninas establish a language of art in VR.

4.1 Fusion of Optical and Virtual Images

Las Meninas starts when the door of Velazquez's studio, a three-dimensional computer-generated image, opens to let an optically produced avatar, playing the role of the painter himself, enter the empty space which is computer generated. From the start, the viewer experiences a narrative tension arising from an immediate oscillation between the world of the real, the optical, and that of the imaginary, the virtual environment, which is nonetheless ontologically authentic. Therefore, ontological authenticity is doubly present: through optics as well as through a virtual world made authentic. This narrative tension is characterized by the viewer's ability to see something as both real and imaginary simultaneously. He believes that what he sees belongs to the laws of optics while at the same time existing within the laws of a virtual environment. The representational and the virtual force the viewer, standing between the two, to reflect upon the imaginary space empirically. The viewer cannot say to himself that nothing is believable in this narrative because it is virtual and nonreal. The optical images fill the gap empirically, as does the ontologically authentic virtual environment. This makes the viewer, at least initially, identify with the narrative unfolding. Of course, this practice is not new in the history of art. Both Dadaists and Surrealists employed such techniques quite successfully in painting and cinema, where the real was made estranged. Another instance in Las Meninas where the fusion of the optical and the virtual takes place is towards the end of the narrative. After the viewer leaves the world of the 17th century, Velazquez, and the fugues of J.S. Bach, he enters the world of the 20th century to witness studies by Picasso of Las Meninas and his Guernica.


He encounters the serial music of Ligeti and Schnittke, but above all the television sets suspended in mid-air. The sets show archival film footage of Hitler and Franco, and of Chaplin. Again, the inclusion of optically generated images within an ontologically authentic virtual environment functions at a metathematic level to provoke reflections on the changing methods of representation and subjectivity. The viewer is constantly shifting from the representational to the virtual and back. It is in this oscillation that the foremost element of storytelling manifests itself: the suspension of disbelief, or the illusion in art.

4.2 Visual and Aural Guides

Las Meninas incorporates multiple visual and aural guides. The first guide is the disembodied voice of the narrator, who narrates the historical, political, and aesthetic cryptograms present in the painting. Later, the Infanta Margarita acts as a three-dimensional guide, leading the viewer from a static perspective into the painting and allowing her to move about the space freely. At first, the Infanta seems to be an alias of the invisible narrator, but when the viewer is given freedom of movement, the narrator suspends his narration and the Infanta assumes her place in the scene. Another guide steps into the narrative. This guide is a person standing with the viewing audience in the CAVE who then takes on an active responsibility: to guide the team through the rest of the narrative. This method of using a guide familiar with the story is inspired by the Japanese Kabuki theater, the highly stylized and somewhat overwrought dramatic form derived from the feudal Tokugawa period (1603-1867). In Kabuki theater there is a benshi, an orator who stands at the side of the stage and narrates the action for the audience (a method later used in early Japanese silent cinema). In Las Meninas, the benshi, or guide, fulfills a double function. He navigates the viewers throughout the rest of the narrative, and narrates and sometimes reflects upon the various cryptograms. The viewers can interrupt the benshi and raise further questions, doubts, comments, and objections. This helps create a dialogue between the benshi and the viewer, and also among the other viewers as well, a property made possible by the social nature of being in the CAVE. Therefore, the continuous navigation and the seamless interaction/integration between the visual and auditory elements create in the viewer a k/s stimulation which draws him both psychologically and physiologically into the action plot.

4.3 A Total Environment and the Double Articulation of Time

After the optical Velazquez takes his place in the empty virtual studio, the viewer paints the rest of the painting, the Infanta Margarita and her entourage, using the wand in the CAVE like a paintbrush. It is with such interactivity that the viewer is able to move between what is presented in front of him, the phenomenon, and his own manipulation of it in VR. This double articulation of time, that is, a time that already exists in the phenomenon and its manipulation by the viewer, gives the viewer the feeling that the reality present in the CAVE is not only representational but also ontological and subjective.


The viewer is finally able to be part of the phenomenon. He is an extension of a virtual world in which he can shape and determine the outcome of events. A phenomenon passing in time can now be interrupted, accelerated, moved both backwards and forwards, and completed or left as is. The virtual environment becomes a total environment in which the viewer is both an extension and a determining factor of the environment. The viewer's participation is not arbitrary in Las Meninas. If he chooses certain signs about the narrative, further historical, political, and aesthetic signs are revealed to him, which propels the narrative in a certain direction. The viewer's involved interactivity is intrinsic to the narrative's overall structure and necessary for the unfolding of the enigmas of representation.

4.4 Dramatic Shift from the Formalistic to the Psychological

Las Meninas is staged in such a way that there is a dramatic shift from the formalistic to the psychological. Not only does the work invent passageways, towers, three-dimensional triptychs, non-Euclidean spaces, telescopes, transparent surfaces, and television sets which are ontologically authentic, but it also provides them with a history in order to connect them with the narrative and give them meaning. The viewer has the choice to navigate and interact with various historical periods, from the 17th to the 20th century, which embody specific sets and music reflecting their historical, political, and aesthetic specificities. The shift from one period to another is an attempt to make connections between various periods and demonstrate how form and function act as one. The psychological factor plays a major role. Not only does the viewer experience a specific sensation arising from the specific formalistic sets and music, but the mental act of perception also comes to be based purely on unconscious inferences as s/he sees, navigates, and interacts with the various sets in different periods.

5 Conclusions

Illusion in art is a complex topic and has its own limitations and paradigms when rendering reality. When we look at Egyptian art, for example, we see a brilliant signaling system of codes and not a literal representation of reality. But is that the way the Egyptians themselves saw their art? The Greeks started the tonal modeling of form in light and shade which remains fundamental to all later development of Western art. As inheritors of that tradition and inventors of artworks, it is important to invent a language which addresses the way our new tools of production operate and shapes the future of art. In this nascent medium so far, few works try to systematically formulate an artistic position and new ways to create the language of art, whose essential function is the manifestation of the illusion in art. The system of schemata developed here, in which the illusion in art is possible, consists of two elements: ontological authenticity and k/s stimulation of the sensory-motor schema.


The viewers' positive response after Las Meninas was shown at the International Society for the Electronic Arts in August 1997 tells us that they completely forgot about the technology of the CAVE, that they believed in the narrative unfolding, and that they were psychologically involved in its transformations. At first, the viewers experienced the thrill of a perfect illusion in art because the bridge between the phenomenon and the virtual is broken. However, once this illusion wears off, it is essential that something else fills the gap, because we want and expect more. The history of technologically based art is full of such instances. Early cinema, for example, was a thrill because of the darkened theater, the flickering images, remote places, and so forth. Later, audiences wanted more, and it was through dramatic narrative plots, whether linear or nonlinear, that cinematic art developed. VR is facing a similar challenge to that of its cousin, the cinema. Without ontological authenticity and k/s impact, Las Meninas would have been an exercise in art and not a work where illusion manifests itself as what we feel and think in front of the cryptograms of its virtual world. In other words, in Las Meninas the viewer not only witnesses the faithful and convincing representation of visual experience through ontological authenticity, but also the faithful construction and orchestration of a relational model in which the interplay of image and sound triggers in the viewer a k/s stimulation to bring about a second reality. This second reality originates in the viewer's conscious and unconscious reaction to the virtual world and not in the virtual world itself. The illusion in art finally manifests itself in the viewer's reaction to the virtual world they experience.

Acknowledgements

Las Meninas was created by Hisham Bizri, Andrew Johnson, and Christina Vasilakis, with contributions by Kyoung Park, Alan Cruz, and others. The models were created using Softimage 3D and imported into the CAVE using Silicon Graphics' Performer. The optical avatar of Velazquez was created using a blue screen technique and a library developed by Joseph Insley. Many thanks also to Dave Pape for his continuing innovation in the CAVE library. The virtual reality research, collaborations, and outreach programs at EVL are made possible through major funding from the National Science Foundation, the Defense Advanced Research Projects Agency, and the Department of Energy, specifically NSF awards CDA-9303433, CDA-9512272, NCR-9712283, and CDA-9720351, and the NSF Partnerships for Advanced Computational Infrastructure program. The CAVE and ImmersaDesk are trademarks of the Board of Trustees of the University of Illinois.


References

1. F. Baudouin, Pietro Pauolo Rubens. Harry N. Abrams, Inc., New York, 1977.
2. C. Cruz-Neira, D. Sandin, and T. DeFanti, "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE," Computer Graphics (Proceedings of SIGGRAPH 93), ACM SIGGRAPH, August 1993, pp. 135-142.
3. M. Foucault, The Order of Things: An Archaeology of the Human Sciences. Vintage Books, New York, 1970.
4. A. Del Campo y Frances, La Magia de Las Meninas. Colegio de Ingenieros de Caminos, Canales y Puertos, Madrid, 1978.
5. M. Gelman and P. Gelman, Babylon. Moscow Youth Center, Moscow, 1990.
6. E. Hall, The Arnolfini Betrothal: Medieval Marriage and the Enigma of Van Eyck's Double Portrait. University of California Press, Berkeley, Los Angeles, 1994.
7. J. Lopez-Rey, Velazquez, Volume II. Benedikt Taschen Verlag, Germany, 1996.
8. C. Warncke, Pablo Picasso, Volume II. Benedikt Taschen Verlag, Germany, 1994.

Mitologies: Medieval Labyrinth Narratives in Virtual Reality

Maria Roussos and Hisham Bizri

Electronic Visualization Laboratory, University of Illinois at Chicago, Chicago, IL 60607, voice +1.312.996-3002, [email protected], http://www.evl.uic.edu/mariar/MFA/MITOLOGIES/

Abstract. Advances in technology have made it possible to create vast, rich and architecturally intricate virtual worlds. The Mitologies project is an attempt to utilize this technology as a means of artistic expression and for the exploration of historical, political, musical, and visual narratives. Mitologies draws inspiration from a large pool of literary and artistic sources by capturing their intertwining relationships in cinematic form, hence making connections to the strong narrative tradition of other media such as film and literature.

1 Introduction

Advances in technology have made it possible to create vast, rich and architecturally intricate virtual worlds. The Mitologies project is an attempt to utilize this technology as a means of artistic expression and to explore historical, political, musical, and visual narratives. Mitologies draws inspiration from a large pool of literary and artistic sources by capturing their intertwining relationships in cinematic form, hence making connections to the strong narrative tradition of other media such as film and literature. Mitologies is the culmination of an extensive body of work, both an art project and a software design prototype in virtual reality. The thematic content of Mitologies draws inspiration from the medieval canon up to contemporary literary endeavors. The work is loosely based on the Cretan myth of the Minotaur, the Apocalypse or Revelation of St. John, Dante's Inferno, Durer's woodcuts after the Apocalypse, and Borges' Library of Babel. Music from Wagner's Der Ring des Nibelungen is used as a motif to structure the narrative. The work explores the enigmatic relationships between these sources and captures them in a mise-en-scene aesthetic rooted in the illusionistic narrative tradition of other media. Designed and prototyped in virtual reality, Mitologies is an artwork created for the CAVE(tm), a multi-person, room-sized virtual reality system developed at the Electronic Visualization Laboratory (EVL) of the University of Illinois at Chicago [5].

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 373-383, 1998. © Springer-Verlag Berlin Heidelberg 1998


The CAVE is a ten by ten by ten foot room constructed of three translucent walls. A Rack Onyx with two Infinite Reality engines drives the high-resolution stereoscopic images which are rear-projected onto the walls and front-projected onto the floor. Lightweight LCD stereo glasses are worn to mediate the stereoscopic imagery. Attached to the glasses is a location sensor. As the viewers move within the confines of the CAVE, the correct perspective and stereo projections of the environment are updated. This presents the user with the illusion of walking around or through virtual objects. Four speakers mounted at the top corners of the CAVE provide audio. The user interacts with the environment using the 3D wand, a simple tracked input device containing a joystick and three buttons. The wand enables navigation around the virtual world and manipulation of virtual objects within that world. Mitologies also runs on the smaller, more portable cousins of the CAVE, the ImmersaDesk(tm) and ImmersaDesk2(tm).
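As a rough, hypothetical sketch of the wand interaction just described (joystick for navigation, three buttons for object manipulation), one plausible mapping follows; all names, conventions, and rates are assumptions for illustration rather than the actual CAVE wand interface.

```python
from dataclasses import dataclass
import math

# Speculative wand-navigation loop. Names and conventions are assumptions,
# not the actual CAVE wand API; `world` methods are hypothetical.

@dataclass
class WandState:
    joystick_x: float     # -1..1, turn left/right
    joystick_y: float     # -1..1, move backward/forward
    buttons: tuple        # three booleans

@dataclass
class Viewpoint:
    x: float = 0.0
    z: float = 0.0
    heading: float = 0.0  # radians

SPEED = 2.0      # assumed meters per second
TURN_RATE = 1.0  # assumed radians per second

def navigate(view: Viewpoint, wand: WandState, dt: float) -> None:
    """Joystick x turns the viewpoint; joystick y translates along heading."""
    view.heading += wand.joystick_x * TURN_RATE * dt
    view.x += math.sin(view.heading) * wand.joystick_y * SPEED * dt
    view.z -= math.cos(view.heading) * wand.joystick_y * SPEED * dt

def handle_buttons(wand: WandState, world) -> None:
    """One plausible assignment of the three buttons to manipulation."""
    grab, release, reset = wand.buttons
    if grab:
        world.grab_object_at_wand()   # hypothetical helper
    elif release:
        world.release_object()        # hypothetical helper
    elif reset:
        world.reset_viewpoint()       # hypothetical helper
```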

Fig. 1. Participants interact with Mitologies on an ImmersaDesk(tm) virtual reality system.

The characteristics of the above technology were taken into consideration when deciding to develop Mitologies in the CAVE. The CAVE provides an appropriate virtual reality platform mainly because of the non-intrusive nature of its hardware and the ability to provide a group experience. This is of great importance to a digital work of art, as emphasis can be given to the work without worry of the technology overpowering it.

2 Description of the Virtual Experience

The word Mitologies derives from the Greek word mitos, the thread Ariadne granted Theseus to help him find his way out of the Cretan labyrinth. The viewer in Mitologies re-experiences allegorically the journey of Theseus but also of another historical and literary figure, Dante Alighieri [2]. The narrative is introduced in a storytelling fashion, a structure largely unknown to virtual reality worlds but familiar in other media such as film and literature. As the narrative proceeds, the integration/interaction of long and enabled navigation, blurred hallucinatory landscapes and body passageways, accelerated and decelerated motion, sudden shifts from color to black and white, eerie Wagnerian music, bizarre and distorted decor, labyrinthine structures and parabolic style, temporal pressure and spatial discontinuities engage the viewer in a kind of action plot, dramatize the journey, and infuse the overall narrative with a dream-like feel. The audience entering the CAVE is initially located on the bank of a river in a dark forest, an allusion to Dante's sinful forest.

Fig. 2. The opening of the narrative, a boat travelling through a dark forest, conceived reminiscent of Dante's Inferno.

From a distance, the viewers hear the creaking sound of a wooden boat and the subtle sound of water washing in at the river bank. The boat slowly appears, led by a statue, a model of Donatello's Zuccone [9]. Can this be Virgil accompanying Dante into the Inferno?


As the boat approaches the shore, the viewers are swiftly transported onto it, and the journey down the river begins. In the physical space of the CAVE, two benches are placed in a configuration that corresponds to the virtual seats of the boat. Hence the illusion of traveling in the boat is realistic: as the viewers look down at the floor of the CAVE, they see the virtual boat swaying beneath their feet. The intention of this opening scene is to establish an explicit sense of storyline narrative. The slow and smooth flow of this introductory sequence is lethargic and meditative, setting the pace the work seeks to accomplish throughout. The opening river motif from Richard Wagner's opera Rheingold, used for the river scene, contributes to the impending sense of danger and heightens the expectation of the unknown yet to come. Upon closer examination of the visual and auditory metaphors, the participants may begin to recognize the elements and start drawing the connections that will guide their exploration later on. The river scene eventually fades out while the next scene, that of a brighter and more ethereal space, begins. The transition is aided by sound, an excerpt from Richard Wagner's opening segment of Rheingold, which serves as a structural motif for the unfolding narrative. Once in this space, the viewer disembarks from the boat and can now start using the interface device, the 3D wand, to continue the journey by navigating through a large plane. Rising in the distance, a magnificent church surrounded by a horticultural maze garden appears. The grand cathedral is inspired by the seven churches described in the Apocalypse: Ephesus, Smyrna, Pergamum, Thyatira, Sardis, Philadelphia, and Laodicea. In Mitologies, all seven of these churches are represented as this one grand church, which is modeled after Leonardo da Vinci's sketch of a church that was never built [4]. Many attempts were made in the Renaissance to build a da Vinci church, but these attempts never materialized. It is for the first time that a da Vinci design finally takes shape as a 3D structure, where the viewers have the opportunity to travel up high around the intricately detailed brick domes and examine da Vinci's magnificent architectural vision. As the doors of the church slowly open, the interior reveals the elaborate space of a completely different style of church: the great Mosque of Cordoba in Spain [6]. The sound motif from Wagner's Ring intensifies the sense of elevation and progression from one space to the other. The viewers have the freedom to travel inside the realistic representation of the mosque at a low level, to almost feel the carpets, or to fly high up over its arches. As with the church, the textile and ornamental details of the interior of the mosque are drawn from a variety of sources related to the model, the period, and the thematic content, but also adapted to the unfolding of the narrative. The progression is intensified further when the darker and more ornamental space of the mosque becomes the entrance to the even darker and mysterious labyrinth underneath. In the Metamorphoses, Ovid notes that Daedalus built a house in which he confused the usual passages and deceived the eye with a conflicting maze of various wandering paths [1].

Fig. 3. The horticultural garden maze surrounding the magnificent church, modeled after Leonardo da Vinci's sketch, as it can be experienced in Mitologies.

In Mitologies, myriad strange, dark and magical spaces are reconstructed to create a labyrinth reminiscent of the labyrinth built by Daedalus. The labyrinth is a web, or rhizome: every path is connected with every other one. It has no center, no periphery, and no exit, because it is potentially infinite [10, 12]. As the viewers proceed through the maze, they find themselves on paths that lead to medieval curiosity rooms, rooms based on Durer's woodcuts of the Apocalypse [7], rooms populated by statues and icons, and rooms that require that the viewer make a choice in order to proceed. In some cases, to proceed from one space to another, the viewer must make the right choice. The first room, for instance, presents the viewer with three words: Dante, Theseus, and Christ. If the right letter in any of the words is chosen, then the door leading to the first Durer woodcut is opened. Otherwise, one of the other doors, leading to further spaces, is opened. At any point, and depending on the choices made, the participants may experience either a woodcut or one of the other special rooms. Each of the woodcut rooms brings to life in 3D one of seven apocalyptic woodcuts by Durer that were selected for this work. The first room encountered is the Seven Trumpets woodcut room. It presents the book with the seven seals rapidly unfolding on a scroll, while the loud trumpet sounds from Wagner's Rheingold are juxtaposed; in the Apocalyptic Woman room, female voices from Die Walküre follow a woman's torso as water starts to rise, eventually flooding the room; the Four Horsemen room translates the horror of the most famous and ever-popular sheet of Durer's Apocalypse with the violent motion of the horsemen, as the four-colored walls close in on the viewer; in the Opening of the Fifth and Sixth Seals room, multiple semi-transparent layers illustrating the lower part of the woodcut rise from the ground like blades.


The Torture of St. John, the most unusual of Durer's woodcuts in that it is not part of the Apocalypse narrative, is realized with a modern interpretation of torture in the four cross-like spaces of the room; finally, the woodcut room of St. Michael Fighting the Dragon, the last in the series of woodcut rooms, is presented with the words MENE MENE TEKEL UPHARSIN. In the Book of Daniel, MENE means God has numbered thy kingdom and finished it; TEKEL means thou art weighed in the balance; choosing one from these first two words opens the exit in the next room.

Fig. 4. The three-dimensional model of the mosque of Cordoba in Spain.
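The letter-choice gating of the word rooms described above amounts to a small lookup from a chosen letter to a door. A toy sketch follows; the triggering letter and room names are placeholders, since the text does not fully specify them.

```python
# Hypothetical sketch of the letter-choice gating in the word rooms; the
# actual triggering letter and room graph are not specified in the text.
WORDS = ("DANTE", "THESEUS", "CHRIST")

def door_for_choice(word: str, letter: str, trigger: str = "T") -> str:
    """Choosing the trigger letter in any word opens the woodcut door;
    any other letter opens a door leading to a further space."""
    assert word in WORDS and letter in word
    return "woodcut_room" if letter == trigger else "further_space"

print(door_for_choice("DANTE", "T"))    # -> woodcut_room
print(door_for_choice("THESEUS", "S"))  # -> further_space
```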

Other rooms in the labyrinth attempt to capture the mystery and beauty of the popular medieval curiosity rooms. The Metaphysics/Astronomy room invites the viewer to gaze at the Sistine Chapel painting of hell through the eyepiece of a large telescope; in the Music room, the viewer can play one of four instruments and browse through the score sheets. The Insect, Geography, and Alchemy rooms resemble map study rooms where knowledge is classified through elaborate taxonomies. In the first of these rooms, tables of insects are adapted from medieval entomology books. The Geography room displays the beauty and accuracy of medieval cartography through numerous examples of maps, including a central terrestrial globe, a model representing the fundamentals of Ptolemy's geographical system. The latter room, the room of Alchemy, is saturated with the ten words of God mentioned in the Cabala. Each of the rooms in the labyrinth involved careful research concerning the artistic content as well as the historical and political context they represent. The virtual implementation, however, does not attempt to perfectly recreate, interpret, or realize the context of these rooms, but to capture their emotional essence.


This process is best illustrated through an example based on one of the rooms of the labyrinth: the Bosch room, after Hieronymus Bosch. This room brings to life Bosch's most famous work, The Garden of Earthly Delights (1505-1510). This triptych shows the master at his best. The central panel, which is the subject of the space, swarms with the fragile nude figures of men and women sporting licentiously in a panoramic landscape that is studded with fantastic growths of quasi-sexual form. Bosch seems to show erotic temptation and sensual gratification as a universal disaster, and the human race, as a consequence of original sin, succumbing to its naturally base disposition. The subjects are derived in part from three major sources: medieval bestiaries, alchemic proverbs, and the then very popular dream and occult books, all mixed in the melting pot of Bosch's astoundingly inventive imagination. We have chosen to realize this central panel for the virtual space. Symbols are scattered plentifully throughout this panel. One of the most fascinating symbols is that of a couple in a glass globe, which illustrates the proverb "Good fortune, like glass, is easily broken." The glass globe is recreated in three dimensions, and recreated also in the three-dimensional glass globe is what the triptych as a whole represents: first, the false paradise of the world between heaven and hell; second, the secrets of alchemy and their allegorical meaning. The viewer is able to enter the glass globe, which reproduces the movement of the heavenly bodies. As the viewers navigate from one globe to the next, the walls gradually move backwards to reveal multiple globes rising like air bubbles. A rather ethereal fragment from Wagner's opera Götterdämmerung is used to illustrate the overall allegory of the painting.

Fig. 5. A view of the Bosch room, one of the more than 30 rooms included in the Mitologies world.


The construction of the rooms includes a complex web of metaphors and signs. The paths from one room to the other may be linear, circular, or truly labyrinthine, depending on the viewers' choices. When the last room of the labyrinth is finally reached, the viewers' tedious journey concludes. The shape of this room resembles the number six, or the shell of a snail, as do the two rooms that precede it. The viewers enter at the narrow tip of the room and circle around it until the center is reached, where the encounter with the Minotaur himself, the symbol of death, takes place. The representation of the Minotaur, seated in his magnificent temple, is based on Cesare Ripa's Death metaphor from the Iconologia [11]. Death awaits us all; we hasten toward a common goal, like Death who claims all under his power. The personification of Death, a skeleton, lies on a bier within an elaborate catafalque decorated with skulls and many lamps. He is wrapped in a rich robe (Death takes away the rich man's wealth) and wears a laurel crown symbolizing his rule over all mortals. In one hand he holds a sword entwined with an olive branch, meaning that peace cannot endure if men do not run the risk of death in fighting for it. The motto above the catafalque reads: Death makes all men equal. Four lighted tapers stand on either side of the bier: "Not rank nor dignity can me withstand; my power extends over every land" [11]. In the lower foreground stand two putti, heavily veiled to signify their blindness, not their mourning, for they represent man's pleasure in the things of this world. The power and glory of the world are shown in the child wearing classical armor and carrying a crown, mitre, scepter, and medal on a cushion, while a sword, lance, and marshal's baton lie at his feet. The other putto represents all human inventions and arts; the flail and shovel lying near him stand for agriculture, while the quill pen, scroll, palette and brushes, and compass and triangle stand for the arts and sciences. On one side of the room, a pair of white feathered wings built by Daedalus lies on the floor. While the viewers will surely attempt to approach the Minotaur, a hidden crypt opens under their feet, enabling them to escape from the labyrinth by using the pair of wings. As with the beginning sequence, navigation is disabled and the viewers are dragged down into a tidal pool while the sound excerpt from Rheingold reaches a climax. Enmeshed in a maddening spiral where voices and sounds swirl down along with them, the virtual reality viewers have finally landed back on the boat from which the journey started out. Only now, Donatello's statue is no longer standing on the bow of the boat. Instead, a few scattered feather remnants of the wings that led to the escape have replaced the monumental figure. The voices and sound motifs have also halted, with only the creaking sound of the boat breaking the silence. The narrative, that of a labyrinth but also a labyrinth in itself, completes its cycle.

3 On the Use of Narrative and Interactivity

Mitologies takes a radically different approach to narrative structure and, one could say, almost ignores interactivity.


In certain scenes the audience has no control, as in the distinct opening where they are taken aboard the boat which slowly transports them to the cathedral. From this point on, one participant may take control over navigation while the others experience the virtual narrative led by this person's choices. The cinematic narrative form preserves itself through the continuous slow pace and progression achieved from one scene to the other. The labyrinth presents its visitors with choices, yet all choices are in essence illusory, as they ultimately lead to the same final confrontation with the Minotaur, the fall through the trapdoor, and the return back onto the boat, thus completing a circular journey.
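This property, that every branch ultimately converges on the same confrontation, can be stated as a simple invariant of the room graph. The sketch below checks it on an assumed, much-simplified graph; the actual connectivity of the more than 30 rooms is not specified here.

```python
# Toy model of the labyrinth's "illusory choice": every path, whatever the
# branching, terminates at the Minotaur. The graph itself is an assumption.
LABYRINTH = {
    "mosque": ["words_room"],
    "words_room": ["woodcut_1", "curiosity_1"],
    "woodcut_1": ["curiosity_1", "bosch_room"],
    "curiosity_1": ["bosch_room"],
    "bosch_room": ["minotaur"],
    "minotaur": [],
}

def all_paths_reach(graph, start, goal):
    """True if every dead end reachable from `start` is `goal`."""
    stack, seen, ok = [start], set(), True
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if not graph[node]:            # a dead end: must be the goal
            ok = ok and node == goal
        stack.extend(graph[node])
    return ok

print(all_paths_reach(LABYRINTH, "mosque", "minotaur"))  # True
```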

Fig. 6. The Geography room, filled with medieval maps of unique beauty and accuracy, including the Planisphaerium sive universi totius, a celestial chart illustrating the sun-centered system outlined by Copernicus.

Interactivity is regarded as the raison d'être of virtual reality worlds. Most people, however, do not know or understand how to deal with interactive computer-based art, let alone with interactivity in immersive and, in many cases, complex virtual worlds. The virtual experience is disorienting, unnatural, and difficult for most, even if the technology used is as simple and natural as it currently gets. The Electronic Visualization Laboratory has been actively involved in defining the future direction of virtual reality technology through the development of the CAVE, which one can argue is one of the better examples in the gamut of interactive virtual reality systems.


Our participation in multiple venues has provided us with numerous instances to observe the reactions of people interacting in the CAVE environment, whether children or adults, single viewers or groups, expert or novice viewers. These observations led us to believe that an interactive experience is not necessarily all that matters when creating a work of art, unless the work itself is about interactivity. Why would we necessarily need a reactive computer program in order to make an artistic statement rich in metaphor? For us, what is more important is that the viewers not be consumed by learning how to interact with the technological interface. Furthermore, it is important for the artist that the viewers surpass their fascination with the medium, which may distract from the content of the work itself. Certainly no painter would consider her work successful if, instead of the work, the viewers became fascinated with the construction details of the canvas. It is thus critical for virtual reality to move beyond the level of the technology. And the mission of a work of art is to present a challenge to the prevailing orthodoxy of virtual worlds. How do we respond to this mission of shifting the form and artistic process of virtual world creation away from the technology and into the content? It is not the intention of this work to realistically recreate architectural worlds of the past, nor to collapse time periods into an attempt to redefine them as part of a confusing, short and fragmented experience. Although the piece involves research in history, mythology, literature, art, and music, it avoids taking a didactic or encyclopedic stance. The intention is to create a work rooted in the humanities which tries to surpass the usual form of virtual reality worlds, the form of navigation through abstract geometric spaces. Another clear intention of Mitologies is the attempt to impose narrative content and structure. The film-like mise-en-scene was selected both for its familiarity to the viewer and as a mode of expression. Finally, the collaboration between individuals of different interests and disciplines had to meld together into a work that makes a coherent whole, avoiding what is frequently the result of collaborations for the creation of virtual worlds: a series of different, non-associated worlds connected by portals.

4 Conclusion

Apocalyptic literature and art have always been concerned with the approach of the end of the millennium, as has scientific and popular literature about virtual reality. As this work was created toward the end of the twentieth century, it inevitably investigates the meaning of the Apocalypsis in such a context. For Mitologies, virtual reality is used as a vehicle to literally explicate the exegesis of the Revelation. In the Apocalypse, a double perspective plays an important part in building and expanding the metaphors. On one level, the vision is severely restricted and fragmented and suffers confusion. On a parallel level, it is clear and concise. The metaphors are at once single and double; they incorporate clarity and confusion, unity and multiplicity, artistry and chaos. Indicative is the element of the labyrinth, a symbol of the tomb, of death, of hell, but also a place of judgment and worship.


However complete and extensive, Mitologies is still intended as an experiment with the state of the art in technology, but also an attempt to see the twentieth-century world from a perspective other than the technology it has been created with. Although it is left up to the viewer to synthesize the historical, literary, or mythological material into contemporary relevance, it is still the focus of this work to create an alternative to the usual fast-paced, choice-driven synthetic worlds and to make a profound and lasting impact within CAVE art specifically and technologically based art in general. Then Mitologies may be considered a successful experiment in art.

Acknowledgements

Mitologies was created by a multi-disciplinary group of artists and scientists at the University of Illinois at Chicago's Electronic Visualization Laboratory. The group members include filmmaker Hisham Bizri, artists Joe Alexander, Alan Cruz, and Maria Roussos, and computer scientists Tomoko Imai, Alan Millman, and Dave Pape. An extensive collection of resources, images, and text about Mitologies can be found at http://www.evl.uic.edu/mariar/MFA/MITOLOGIES/. The virtual reality research, collaborations, and outreach programs at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago are made possible through major funding from the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the Department of Energy, specifically NSF awards CDA-9303433, CDA-9512272, NCR-9712283, and CDA-9720351, and the NSF Partnerships for Advanced Computational Infrastructure program. EVL also wishes to acknowledge Silicon Graphics Inc. and Advanced Network and Services for their support. The CAVE and ImmersaDesk are trademarks of the Board of Trustees of the University of Illinois.

References

1. W. S. Anderson, Ovid's Metamorphoses, Books 1-5. University of Oklahoma Press, Norman, 1997.
2. Dante Alighieri, Inferno. British Broadcasting Publications.
3. Jorge Luis Borges, Labyrinths. New Directions, 1962.
4. Raffaello Bencini and Alberto Busignani, Le Chiese di Firenze: Quartiere di Santo Spirito. Sansoni Editore, Firenze.
5. C. Cruz-Neira, D. Sandin, and T. DeFanti, "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE," Computer Graphics (Proceedings of SIGGRAPH 93), August 1993, pp. 135-142.
6. Jerrilynn D. Dodds, Al-Andalus: The Art of Islamic Spain. The Metropolitan Museum of Art, New York, 1992.
7. Willi Kurth (Ed.), The Complete Woodcuts of Albrecht Durer, 1471-1528. Crown Publishers, 1946.
8. Erwin Panofsky, Studies in Iconology: Humanistic Themes in the Art of the Renaissance. Harper & Row, New York, 1962.
9. Joachim Poeschke, Donatello and His World. Harry N. Abrams Inc., 1990.
10. Penelope Doob, The Idea of the Labyrinth from Classical Antiquity through the Middle Ages. Cornell University Press, 1990.
11. Cesare Ripa (fl. 1600), Baroque and Rococo Pictorial Imagery: Edition of Ripa's "Iconologia". Dover Publications, 1971.
12. Paolo Santarcangeli, Il Libro dei Labirinti. Vallecchi Editore, Firenze, 1967.

Aggregate Worlds: Virtual Architecture Aftermath

Vladimir Muzhesky

Abstract. In the following article the phenomenon of the virtual world is analyzed not as a point of reference or relation to existent meaning deployed elsewhere, but as a point of semiotic and perceptual construction of synthetic content allocated for use via computer network. The author extends this analysis into the legal, economic, and neurological discursive lines of interaction, monitoring, and control. The aforementioned aspects, inherited by contemporary virtual worlds from their military predecessors, were in many cases used as landmark metaphors by designers to refer to and/or reframe existing linear discourses by means of interfacing them in a non-linear way from within virtual and augmented reality environments. The article concludes with a scheme, or project, for an infrastructure of the aggregate world, as it appears to be functional and formative in its relationship with interactive content.

Perceptual Synthesis, Content, and Aggregate Locality

Digital or cyber-space, commonly referred to as virtual or synthetic locality, is in reality an underdefined concept. The epistemological aggregate, or defined status of spatiality, which is implicit to locality as a concept, is channeled via semiotic zones of references to situations that are real, or memorized as real, in the virtual environment. Hence, to define virtual space we have to follow the pattern of interference emerging from mediated life streams of real worlds and rendered objects, and its interrelated neurological, perceptual, semantic, and economic contextual aspects of spatiality. From this point of view virtuality is a complex transparent conglomerate: current 3D dummy realms, with their limited interaction and abused liberating tendency, can be interpreted as a desublimation of the mental tangibility inherent to digital space; it is thrilling how conceptually loose its constructs can be. It seems they are projected into absolute nothingness while simultaneously being declared a hi-tech spectacle, a sort of industry-dimensional test field with default meaning categories, like a body shape which stands for a biological connector and shooting which represents informational transmission. The desublimation tendency of media, described by Marcuse back in the 60s, has obviously integrated itself into virtuality. The resulting alienated, toy-like industry-dimension conceals the aspect of the virtual world as a space to which knowledge is functionally and structurally bound in the first place. When we analyze this space we reconstruct it, we project it and provide it with biologically streamed content. This content in its turn is deployed when we "act" in digital space in accordance with electronic and perceptual semiotics.

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 384-393, 1998. © Springer-Verlag Berlin Heidelberg 1998


This might have become a starting point for digital economic theories, had we left aside the Freudian limits of toy-humans (avatars) in virtuality, with all their body-based pseudo-topologies (virtual embodiment theories), and included pattern recognition and recombination as a common sphere of action where biological and electronic interference is concerned. Of course we can stream a raw biological situation like any other kind of raw data material, but how long are we going to follow the graphic or academic industry's preset attitude to spatiality and impose its mental-proof rendered polygons onto ongoing perceptual-electronic deployment? The semiotic mechanism of streamed and rendered synthesis, together with its perceptual properties, constitutes specifically synthetic meaning and a defined aggregate locality.

To follow the discursive line of aggregate construction of digital space, as an allocation of mediated content, means to encounter a spatial paradox represented by the screen border. With the development of interactive media, the screen border seems to position itself as a materialized limit of content: an almost Cartesian zone of dual visibility, a virtual perspective of the inside and outside of the machine's content performance being simultaneously exposed to the user. In the latest artificial intelligence reviews one can find reference to this zone of content visibility. Take for example Douglas Hofstadter's analysis of Stanislaw Ulam's opinion about perception as a key to intelligence. Ulam wrote: "you see an object as a key... it is the word 'as' which has to be mathematically formalized. Until you do it, you will not get very far with your artificial intelligence problem." Hofstadter concludes his research in the following manner: "in any case, when I look at Ulam's key word 'as', I see it as an acronym for 'abstract seeing'." The analysis of the limits and constructs of an aggregate (locally content-defined) virtual world in terms of an "abstract seeing" interface of interactive content displays another controversial feature of synthetic reality: content represented or constructed within a virtual world simulates, or refers in its simulation to, the structure of physical action, leaving perception out of discursively preprogrammed visibility. This sort of ego-like phenomenon of virtuality is usually addressed as "virtual consciousness." However, what it really comes down to is probably an early process stage of the allocation of synthetic meaning, which, as a restricted (in Bataille's sense) economy, follows the Marxist-Hegelian dialectics of self-reflective phenomenological emergence. In this frame, virtuality is constantly defining itself via neglecting the transparency of its production and reproduction when it comes to its origins in real, or memorized as real, situations. The current transition from virtual (rendered-only) to augmented (streamed and rendered) reality in computer science (augmented reality and active telepresence system research) reflects the crystallization of the aggregate phase in virtuality. It seems that there is an interface of perception missing as an alternative to the existing and widely abused interface of action in digital culture. This gap in the digital economy makes us look for in-depth scaled content in simulation, as synthesized perception, as opposed to surface interactivity, as synthesized action. In a neurocognitive context it is possible to analyze the synthetic interaction of human and electronic systems, with the following extension of the analysis into conceptual, economic, and political faculties.


And to the same extent as neural research influenced the social sciences, with models of neural networks bringing out the necessity to define the infrastructural elements of human intelligence in terms of its vision and activity, in digital culture neural networks and related representational models play a crucial role where virtual content is concerned. From Heidegger to Baudrillard there was a strong tendency in philosophy to highlight the influence of technologically mediated, and even economically and ideologically simulated (in the case of Der Derian's analysis of espionage structures), social terrains. From this point of view, the current mental alienation of the role of perception and recognition in virtual architecture is even more appealing than the physical labor alienation of the dawn of technology. Historically, the worker-machine dichotomy brought the chiasm of society and industry as a point of theoretical reference. Today's tendency to unify this dichotomy into a cyborg bioelectronic complex has concealed it. The tension of the first lapses of industrial maps and dimensions was resolved with the geopolitical pattern recombination of the industrial and socio-industrial revolutions, while the cyborg-situation, as the closure of revolutionary localities, should be resolved on the metalevel: the self-dominant for synthetic concepts deployed by means of virtual forms, links, and inter-forms should be defined and neurolized, embedded in users' processing, with existing one-dimensional ideologies, economies, and politics being referred and mentally attracted to this new field. The fact that neuralization substitutes for revolution in the context of the virtualized social sphere speaks for itself in terms of spatial properties: vital social activities such as banking (online transactions), education (online courses and virtual university projects), and research (for example, the Stanford KSL project, which provides access to running research and development software systems over the World Wide Web) are pressed out into the virtual dimension. The new plane of immanence is not a frozen pattern of biosocial oscillation, with which philosophy can easily play; it is a dissipative and in some cases self-organizing terrain of attractors, which implies that at least a part of its organizational points are useless. In terms of neural networks, new emergent attractors of pattern recognition activity are referred to as spurious memories. The net as a bioelectronic phenomenon has plenty of them. It is a wilderness for Freudian experiments on the wired population, and at the same time an avatar of escape from recurrent desublimation. Projected onto the bioelectronic plane, the desublimation of the net's spurious memories, connected with the exposure of perceptual intensities in the logic of virtual worlds, brings up a new architectural form which interfaces content on a subversively deeper political level than any other existing form of synthetic activity, inasmuch as it embodies the synthesis of content itself by means of perceptual interferences and modulations.

Hyperreality Neurolized

With the discovery of neural systems, the ability of single-class networks to generate additional, or spurious, memories was classified as generalization activity when involved in the pre-recognition of new stimuli (Ezhov and Vvedensky, 1996). This aspect of neural networks corresponds to the immune complex of human systems.


There, antibodies are generated in advance in order to establish a binding with a potentially new antigen. Virtual complexes and communities, those neo-homo-ludus kinds of things, and also those which are designed to "reflect" the actual structure of social architecture and, in practice, pre-recognize social tendencies and statements (this is actually the essence of any simulation), are the best indicators of spurious memories grown on the artificial terrain of virtual reality. They recall the interpretation of chimerical patterns in human dreams, where imaginary cities are interpreted as real. For example, an imaginary city of London is taken as if it were London. If one considers the new-attractor/spurious-memory effect, the phenomenon of "as if London" becomes possible only if the informational system, revealing its resemblance to the immune complex, has generated the class "London" in advance. The same concerns virtual architecture on the net, which does not, of course, reflect, but replicates the class of Amsterdam or London in virtual reality databases. Human operators have to assume that the city which they build is Digital Amsterdam before the spurious representation is architecturally realized and installed on a server. Consequently, virtual architecture takes over the replication-transformation function, first, of the neuroactive compounds, in the process of reintroducing an agent of the outer environment inside neural networks, and second, of the revolutions, by reorganizing knowledge-production-bound communities on a virtual basis.
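These spurious memories have a concrete counterpart in associative-memory models. As a minimal sketch, assuming a standard Hopfield network (the citation above gives no model details), a mixture of stored patterns settles into an attractor that was never stored: a "memory" the network generated rather than received.

```python
import numpy as np

# Minimal Hopfield-network sketch of spurious memories; the model choice is
# an assumption for illustration, not taken from Ezhov and Vvedensky (1996).
rng = np.random.default_rng(0)
N = 100                                      # binary (+/-1) units
patterns = rng.choice([-1, 1], size=(3, N))  # three stored "memories"

W = (patterns.T @ patterns) / N              # Hebbian weights
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    """Synchronous updates until a fixed point (attractor) is reached."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# A symmetric three-pattern mixture is a classic spurious attractor: never
# stored, yet the network settles on it and "recognizes" it.
mixture = np.sign(patterns.sum(axis=0))
settled = recall(mixture.copy())
overlaps = patterns @ settled / N            # similarity to each true memory
print(overlaps)  # typically ~0.5 for all three: a memory never stored
```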

Nodality as Architectural Framework

The concept of the node can be regarded as a focal point of interactive content deployment in the aggregate world. Nodality as a synthetic spatiality is a tangible multilevel topological framework, which relates to the concept of the node as location interference within the context of informational technologies and addresses its architectural, perceptual, and conceptual properties. Based on the internet as a temporary zone of preprogrammed interference, the node marks a point where traditional monospatial semiotics is confronted with the transgressive multispatial discourses of aggregate architecture.

Interference

Classes of virtual spatial language are organized by location in both human and artificial information networks, and not by qualities or properties. To the same extent as neurointeractive compounds reintroduce the outer environment in a perceptually illegal way (avoiding the filtering of the outer senses), in-forming the human system, the simulation reintroduces it in a conceptually alegal way and reveals that, as much as spurious memories are utilized, aggregate spatiality is first of all a neuroactive complex. As such, like many other neuroactive natural or artificial compounds, aggregate architecture possesses a structural key to neural informational processing. This phenomenon is not and cannot be controlled by either human or artificial intelligence; it is extremely autonomous, to the extent of the aforementioned conceptual illegality.


From the point of view of the law, the act, for example, of currency analysis (to use recent research from Berkeley Experimental Laboratories, where a legal situation itself became subject to augmented reframing by means of substituting the agent of action with a user-robot interface), destruction, and, who knows, maybe replication on the internet can be interpreted as an act of crime or of art, depending on the extent of reality. However, this is exactly the point where the semiotically based intelligence of the law decomposes itself under the influence of a higher informational structure: the virtual index of membership and action leaves no unified space, time, and agent of action. This is where augmented reality research and robotic manipulation involve the industrial dimension of virtuality in following the same remote lines of vision and power, interfacing data in a non-linear way and thus destroying the linear discourses of legal and economic laws and their presupposed unified conceptual entities. Once the factor of perception and interfaced content enters the analysis, the laws of the unified subject of action become ambiguous. It would not be an exaggeration to say that hyperreality, as a phenomenon of neural network simulation, is censoring every possible plane except the spatial-information terrain. In this context digital culture relates to speed through the deconstruction of the carrier (similar to the futurism movement and scientific art at the beginning of the century, and the cold war carrier-destruction paradigm of missile engineering in the 80s-90s) as the most abstract representation of system transformation.

Speed

There is a tradition in postmodern philosophy which regards speed as a formatting factor of the economic hyperterrain. This theory has been thoroughly developed for many areas; however, it is appropriate to focus here primarily on applications relevant to virtuality. Among them are the philosophy of geography (Olson), the philosophy of espionage (Der Derian), and the theory of general economy (which, according to CAE, has not yet made it into the textbooks) developed by Bataille. From the analysis of the structure of espionage (Der Derian) and of development-based world maps (Olson) it was derived that speed can be understood as a hyperfactor in reflecting and reorganizing reality. Extending this conclusion into the electronically mediated cultural sphere, we can investigate what role the factor of speed plays in an artificial environment, and how it influences interactivity as a cultural relation to the image. By definition, the synthetic or virtual nature of any aggregate simulation makes it structurally dependent on its own speed. In other words, in order to differentiate between relative positions within virtual reality and integrate them into a unified representation, such a factor as the information velocity ascribed to a certain locality has to be considered in the first place. That is, there is a major semiotic difference between positioned and represented meaning within the logic of virtual worlds, the latter being exposed to the limits of the speed momentum which it is programmed to acquire in a given space-time continuum. Within the above-described framework, the concept of interactivity acquires an interesting property: it changes polarity on the scale of editing.


In general, interactivity can be understood as a socially, culturally, and informationally preprogrammed approach to eidos, which defines corresponding modalities of representation. However, we can delineate a hypothetical architectural research where the modality of speed is defined before interactivity, by means of ascribing different informational velocities to different nodes of a virtual world. Thus, aggregate architecture can become a content-formatting factor, which induces a cultural inversion of the relation to the image. There is another edge in this approach, which relates to Bataille's cultural theory. Decades ago the French theorist described a theoretical model of the economy of waste as a counterthesis to the existing restricted economy. Applied to the context of the synthetic environment and speed as its formatting property, Bataille's theory finds more stable grounds, not only because it is a digested noematic simulacrum reality, but primarily because the topography of speed, in its relation to information velocity, provides perceptual material which is essentially different from that of raw material reality, even when it is being streamed, and which can support cognitive constructs that are unstable under ordinary conditions. One of those is an idea from the dawn of AI studies which describes the network of users working with shared databases as a global technocerebrum. It is culturally outdated, but not yet if one is (to use Timothy Leary's term) disattached within a shared virtual reality environment and plugged into ludus-like multiple cranial gear.

Alegality

When we face the situation where points of stability of legal reality, such as a subject of action with its unified complex of personality, spatio-temporal locality, and linear logic, are not transmitted through the nodality of the aggregate synthetic environment, we have to question the causation of this alegal resistance. Imagine an illegal action realized via a remote control machinery interface, which the net essentially appears to be; what would be the basis to determine whether the subject of the crime was only one person, or a whole group, where each member was responsible for a certain algorithmic step of the illegal operation? Furthermore, the personality of the virtual criminal, and hence the motives of the crime, also remain quite vague, especially if we consider that all representations, including the agent of the crime, instruments, and actions, can be programmed in any space-time; hence numerous replicants of these representations are as real on the net as the ones which actually disturbed the law. Finally, between when and where the complex of computer commands was launched and the period and place when and where it was executed there can be millions of miles and hours. The last kick in the shorts, which the remotely manipulated metal boot of the net gives to the legal system, is that all this may be a self-organized process, triggered without direct human interference. In this respect the phenomenon of virtually interfaced content generates a membrane effect, which just doesn't let the legal system through, partially because the latter appears to be based on strictly human body spatiality, which implies a physical disposition of action and meaning. But one important thing is denied by the legal discursive dimension: the sphere of vision.


As an essential part of monitoring, vision has to be pushed away from legal interrogation, because otherwise the law would interrogate itself, creating an infinite spiral of metareflections.

Perspectives

The representation of the legal one-level reflection logic of mass-produced virtual reality is the one-way membrane of the screen: creating the illusion of depth, the screen seems to counterfeit the conceptual plane of phenomenological philosophy. Being the base of perceptual vision on one side, and henceforth being embodied in the human perceptron, the screen is constrained of vision on the other side, where it is embodied in the electronic configuration of technological space. As a hybrid of two bodies and two spaces, the screen is exposed to both visibility and invisibility, the topic of the last writings of Merleau-Ponty right before his death in 1961. In this era of television revival, he writes (his notes were later published as The Visible and the Invisible): "This is what Husserl brought frankly into the open when he said that every transcendental reduction is also an eidetic reduction, that is: every effort to comprehend the spectacle of the world from within and from the sources demands that we detach ourselves from the effective unfolding of our perceptions and from our perception of the world, that we cease being one with the concrete flux of our life in order to retrace the total bearing and principal articulations of the world upon which it opens." Merleau-Ponty delineates an economy of reflection which avoids the effectiveness of acting in the world and connects it with the eidetic reduction described by Husserl and reconstructed on the basis of computer simulation. Before the Soviet Union collapsed, billions of rubles were spent on so-called psychotronic research, which investigated the shared invisibility of control. This is one of the factors which critics confusing the archeology of media with the history of calculus usually neglect: the computer, in a historic perspective of interactivity, was not the only available instrument. It is a perceptual emulator of analog instruments developed for a variety of purposes. From this point of view a missing constituent for synthetic content becomes visible: the absence of neurological feedback in informational processing is a physical economic resistance, a sort of digital gravity of a virtual world. In fact, if the net refers to phenomenology, this is only due to its embodiment in the mass media, a difficult infancy, so to say. The screen is what it is only because the media in its legal form was always constrained of multidimensionality. The 3D phenomenon of the industry-dimension is as good an indication of alegal multidimensional arousal as it is a harmless placebo imposed on our vision by the legal and economic system. However, the invisible part of the screen remains uncensored, simply because the legal system cannot by definition include an illegal element, although it can happen in practice. The question why the invisible body of the screen is illegal has a simple explanation: because it performs the role of the mute representational plane of the neuroactive agent, which had been a posteriori censored out of the perceptronic networks of the manipulated, cyborg-oriented population.


The fact that the screen is a perceptronic modulation complex hidden under the conventional placebo of perspective first received attention in the seventies, when advertising with an invisible subliminal component was widely introduced via the reality-streaming networks of cinemas and TV stations. The prohibition measures that followed reflected the pathological fear of the law where psychotropic effects are concerned: even though it was a brilliant marketing technology, subliminal advertising was prohibited, against any economic law. What the legal system was fighting was an alternative economic attractor, which might have opened a virtually new dimension of biotechnological interaction if not only the signs but the products had been neurolized.

Evidence from a wide range of neuropsychological studies suggests that there is a clear dissociation in the cerebral cortex between the visual pathways supporting perception and those supporting action. In the context of media this implies that the economy can be transduced by the electronic repository of information in at least two ways, accentuating either action or perception. The fact that contemporary virtual media is legally based on action implies the impossibility of transmitting the lines of power into the perceptron, inasmuch as the perceptron is based on a logic different from the legal linear logic of causative action; thus, the connection between the monitoring and monitored parts of the population is disintegrated on both sides, which gives no basis for conventional electronic reconstruction, but leads to the point of social anticyborg biotronic bifurcation.

The economic disposition of powers in the global hiatus is reflected in the relations of restricted and unrestricted economies: horizontal (such as the theories of Hegel and Marx) and vertical (such as Bataille's economy of waste). If the former presupposes the effectiveness of action, which a priori cannot be fulfilled because of counteraction, the latter suggests referring economic constructs to waste, and thus makes the restricted economy of action mutate into the unrestricted economy of vision, which establishes a basis for the economic properties of perceptual products and defines virtual reality as a product spatiality and locality. In fact, the first thing violated by the alegality of vertical economies is the one-dimensionality of products and production. As analyzed by Marcuse, monodiscursive reality binds mesmerized human consumers, by means of repetition, to the single dimension of products, which is not even human but is referred to as human, or acts "as if humanified" by the process of mass production. On the contrary, the fact that a perceptual modulator is embodied in the legal economic routines fractalizes the vision of the product to the extent that the borders of real and virtual representations vanish among the millions of multiplied resonances of products, representations, cogitos, psychological triggers, bioreflexes, etc.


Neurospace

Via the membrane of the screen, restricted to eidetic representation by means of the vertical economy, we arrive at the neurospace. The economic point of departure, which ends in the rational fusion of neuroactive and economic activities on the biotronic plane, we will call the Marxian-Bataillean interface, reflecting its historic origins, its horizontal and vertical properties, and its prospective effects on society. It is a self-organizing loop of visionary economic development, which extends mental economic instruments via the mass media into the neurospace and organizes mass production in accordance with the mediating properties of the simulacrum.

On the basis of its perceptual and economic platform, neurospace can be defined as an autonomous hypernetwork of inner-outer inferences of informational discourses. Whether biologically or electronically realized, it theoretically establishes the same conglomerate of protomodel space niches, leveled by the modes of perceptual intensities and hence correlated with the extent of perceptronic transformation. Below we suggest an architectural demo version of an aggregate world structure which triggers the aforementioned interfacing spatiality. In view of this property the model was called Aggregate World.

Aggregate World: Construction Set for In-Network Users

Aggregate World is a project, or so to say an architectural perspective and semiotic showcase, of content-formatting virtual reality spaces based on the Internet. It is a research environment for studying the formative processes of synthetic content as it is deployed in the spatiality of simulation.

Location: aggregator.thing.net

Structure and Functionality

Aggregate World functionally replicates human informational processing. It is conceptually based on neural network research and its cultural applications in the context of informational and social studies. Aggregate World consists of the following elements: spurious collectors, transmitters, content traffic, and posted databases. Spurious collectors, the first elements which freshly rendered users encounter at AW, are single-user virtual reality worlds based on perceptually aggregate architecture. As opposed to physical buildings, these synthetic constructs suggest liquid, constantly changing configurations and dispositions of elements linked to external events. As such, the user's perception and processing become a space where actual synthetic content is deployed.


The next architectural elements of AW are transmitters, which represent multiuser domains linked to the single-user worlds as external events. They provide users with possibilities of analog communication based on their previous synthetic perceptual experience. The communication is realized in the form of mobile convertible elements which can represent either a world (an architectural file in which communication is placed) or an avatar (a temporary synthetic representation of a user in communication), depending on a user's intention to host or join a communicational situation. Posted databases are publicly accessible documents in which users' input and evaluations are listed. Access to the databases depends on the perceptual input of the virtual architecture. (A schematic code sketch of these elements is given at the end of this paper.)

Rendering

To be rendered by AW means to be perceptually involved in its simulatory architecture, to channel biosocial representation into synthetic communication, and to participate in shared processing and databases. AW renders individual and collective digital representations in the same way as an image can be rendered on the plane of simulation in content development editors. By linking an existing representation to a certain moment in abstract visual sequences, AW renders perceptual properties to otherwise invisible discourses of power. To a certain extent it is a simulation of the Panopticon, and as such it is a great deal connected to the meaningfulness of architecture in its relation to the position of the image. Constructed of purely synthetic hyperreal imagery, AW distributes the momentary properties of this imagery among existing semantic structures, ascribing new dispositions of synthetic content to borrowed ideological constructs.

Source and Code: A Few Thoughts

The aforementioned model may be reinterpreted, used, and distributed by virtual community builders over the Internet. Taking into consideration the recent emergence of visible programming environments like Visulan, it is assumed that any produced code is different from the production code. Transparency is probably the only condition of aggregate replication.
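As a purely illustrative aside (not part of the AW project itself), the elements described above can be sketched schematically; every class and field name below is a hypothetical stand-in:

```python
# Hypothetical sketch of the Aggregate World (AW) elements described above;
# all names are illustrative stand-ins, not code from the actual project.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpuriousCollector:
    """Single-user VR world: the first element a freshly rendered user meets."""
    name: str
    external_events: List[str] = field(default_factory=list)

@dataclass
class Transmitter:
    """Multiuser domain, appearing to single-user worlds as an external event."""
    linked_worlds: List[SpuriousCollector] = field(default_factory=list)

    def link(self, world: SpuriousCollector) -> None:
        self.linked_worlds.append(world)
        world.external_events.append(f"transmitter:{id(self)}")

@dataclass
class PostedDatabase:
    """Publicly accessible document listing users' input and evaluations."""
    entries: List[str] = field(default_factory=list)

    def post(self, evaluation: str) -> None:
        self.entries.append(evaluation)

# Wiring the elements together in the order a user would encounter them:
world = SpuriousCollector("entry-world")
transmitter = Transmitter()
transmitter.link(world)
database = PostedDatabase()
database.post("evaluation after synthetic perceptual experience")
```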

Zeuxis vs RealityEngine: Digital Realism and Virtual Worlds

Lev Manovich

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 394-405, 1998
© Springer-Verlag Berlin Heidelberg 1998

The story of the competition between Zeuxis and Parrhasios exemplifies the concern with illusionism that was to occupy Western art throughout much of its history. According to the story, Zeuxis painted grapes with such skill that birds attempted to eat from the painted vine. RealityEngine is a high-performance Silicon Graphics computer designed to generate real-time interactive photorealistic 3-D graphics. It is used to author computer games, to create special effects for feature films and TV, to do computer-aided design, and to run scientific visualization models. Last but not least, it is routinely employed to power virtual-reality (VR) environments, the latest achievement in the West's struggle to outdo Zeuxis. In terms of the images it can generate, RealityEngine may not be superior to Zeuxis, yet it can do other tricks unavailable to him. For instance, it allows the viewer to move all around a bunch of virtual grapes, to "touch" them, to lift them on the palm of a virtual hand. This ability to interact with a representation may be as important in contributing to the overall reality effect produced as the images themselves.

During the twentieth century, as art largely rejected traditional illusionism, the goal so important to it before, it lost much of its popular appeal. The production of illusionistic representations became the domain of the media technologies of mass culture-photography, film, and video. Today these machines are everywhere being replaced by digital illusion generators-computers. How is the realism of a synthetic image different from the realism of optical media? Is digital technology in the process of redefining the standards of realism determined by our experience with photography and film? Do computer games, motion simulators, and VR represent a new kind of realism, one that relies not only on visual illusion but also on the multisensory bodily engagement of the user?

Some of my previous writings addressed these questions in relation to digital cinema, computer animation, and digital photography. In this essay I will discuss a number of characteristics that define visual digital realism in virtual worlds. By virtual worlds I mean 3-D computer-generated interactive environments accessible to one or more users simultaneously. This definition fits a whole range of 3-D computer environments already in existence: high-end VR works that feature head-mounted displays and photorealistic graphics generated by RealityEngines or similarly expensive computers; arcade, CD-ROM, and on-line multiplayer computer games; low-end "desktop VR" systems such as QuickTime VR movies or VRML worlds, which increasingly populate the World Wide Web; and graphical chat environments available on the Internet and most other major computer networks.


More examples will be available in the near future; indeed, 3-D environments represent a growing trend across computer culture, promising to become a new standard in human-computer interfaces and in computer networks.

Realism as Commodity

The word digit has its roots in Latin, where it means "finger" and thus refers to numbers and counting. In a digital representation all dimensions producing the reality effect-detail, tone, color, shape, movement-are quantified. As a consequence, the reality effect itself can be described by a set of numbers.

Various dimensions determine the degree of visual realism in a virtual world. Two of the most important are spatial and color resolution-the number of pixels and colors being used. For instance, given the same scene at the same scale, an image of 480 x 640 pixels will contain more detail and therefore will produce a stronger reality effect than a 120 x 160 image. Since a virtual world is modeled with 3-D computer graphics, the number of geometric points each object is composed of (i.e., its 3-D resolution) also has an effect. Once the user begins to interact with a virtual world, navigating through it or inspecting the objects in it, other dimensions come into play. One is temporal resolution-the number of frames a computer can generate in a second (the larger the number, the smoother the resulting motion). Another is the speed of the system's response: if the user clicks on an image of a door to open it or asks a virtual character a question, a delay in response breaks the illusion.

The quantifiability of these dimensions reflects something else: the cost involved. More bandwidth, higher resolution, and faster processing result in a stronger reality effect-and cost more. The bottom line: realism has become a commodity that can be bought and sold like anything else. Those in the business of visual realism-the producers of special effects, military trainers, digital photographers, television designers-now have definite measures for what they are buying and selling. For instance, the Federal Aviation Administration, which creates the standards for simulators to be used in pilot training, specifies the required realism in terms of 3-D resolution. In 1991 it specified that a daylight simulator must be able to produce a minimum of 1000 surfaces or 4000 points.

The art historian Michael Baxandall has shown how the price of a painting in fourteenth-century Italy was linked to the quantities of expensive colors (such as gold and ultramarine) used in it. At the end of the twentieth century it has become possible to delegate to a computer the production of images as well as their pricing. Users can be billed for the number of pixels and points, for CPU cycles, for bandwidth, and so on. It is likely that this situation will be exploited by the designers of virtual worlds. If today's users are charged for connection time, future users will be charged for visual aesthetics and the quality of the experience: spatial resolution, number of colors, and complexity of characters (both geometric and psychological). Since these dimensions are specified in software, it is possible to adjust the appearance of a virtual world at will, enhancing it if a customer is willing to pay more. In this way the logic of pornography will be extended to the culture at large. Peep shows and sex lines charge by the minute, putting a precise price on each bit of pleasure. In future virtual worlds each dimension of reality will be quantified and billed separately.
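To make the arithmetic of such per-dimension billing concrete, here is a toy Python sketch; every rate, name, and formula is invented for illustration and taken from no actual system:

```python
# Toy illustration of billing a virtual-world session by quantified dimensions
# of realism; all rates and dimension names are invented for this example.
RATES = {
    "pixel":  1e-9,   # price per pixel per frame
    "color":  1e-6,   # price per palette entry per second
    "point":  1e-6,   # price per geometric point per second
    "frame":  1e-4,   # price per frame of temporal resolution
}

def session_price(width, height, colors, points_3d, fps, seconds):
    """Bill each dimension of the reality effect separately."""
    per_second = (
        width * height * fps * RATES["pixel"]   # spatial x temporal resolution
        + colors * RATES["color"]               # color resolution
        + points_3d * RATES["point"]            # 3-D resolution
        + fps * RATES["frame"]
    )
    return per_second * seconds

# A 640 x 480 world produces more detail, and costs more, than a 160 x 120 one:
print(session_price(640, 480, 256, 4000, 30, 60))  # full-price reality effect
print(session_price(160, 120, 16, 1000, 15, 60))   # "black-and-white" budget world
```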
Neal Stephenson's 1992 novel Snow Crash offers one possible scenario of such a future. Entering the Metaverse, the spatialized Net of the future, the hero sees "a liberal sprinkling of black-and-white people-persons who are accessing the Metaverse through cheap public terminals, and who are rendered in jerky, grainy black and white." He also encounters couples who can't afford custom "avatars" (graphic icons representing users in virtual worlds) and have to buy off-the-shelf models, poorly rendered and capable of just a few standard facial expressions-virtual-world equivalents of Barbie dolls. This scenario is gradually becoming a reality. A number of on-line stock-photo services already provide their clients with low-resolution photographs at one price, while charging more for higher-resolution versions. A company called Viewpoint Datalabs International is currently selling thousands of ready-to-use 3-D models widely employed by computer animators and designers. Its catalogue describes some of them in the following manner: "VP4370: Man, Extra Low Resolution. VP4369: Man, Low Resolution. VP4752: Man, Muscular, in Shorts and Tennis Shoes. VP5200: Man, w/Beard, Boxer Shorts." For popular models you can choose between different versions, the more detailed costing more than the less detailed ones.

Romanticism and Photoshop Filters: From Creation to Selection

Viewpoint Datalabs' models exemplify another characteristic of virtual worlds: they are not produced from scratch but are assembled from ready-made parts. In digital culture authentic creation has been replaced by selection from a menu. E. H. Gombrich's concept of a representational schema and Roland Barthes' "death of the author" helped to sway us from the romantic ideal of the artist pulling images directly from the imagination. As Barthes puts it, "The Text is a tissue of quotations drawn from the innumerable centers of culture." Yet, even though a modern artist may be only reproducing or recombining preexisting texts and idioms, the actual material process of art-making nevertheless supports the romantic ideal. An artist operates like God creating the universe, starting with an empty canvas or a blank page and gradually filling in the details until finally bringing a new world into existence. Such a process, manual and painstakingly slow, was appropriate for a preindustrial artisan culture. In the twentieth century, as mass production and automation gave rise to a "culture industry," art at first continued to insist on its artisanal model. Only in the 1910s, when artists began to assemble collages and montages from preexisting materials, was art introduced to the industrial mode of production.

In contrast, electronic art from its very beginning was based on a new principle: modification of an already existing signal. The first electronic musical instrument, designed in 1920 by the legendary Russian scientist and musician Leon Theremin, contained a generator producing a sine wave; the performer simply modified its frequency and amplitude. In the 1960s video artists began to build video synthesizers based on the same principle. No longer was the artist a romantic genius generating a new world out of his imagination. Turning a knob here, pressing a switch there, he became instead a technician, an accessory to the machine.
Replace the simple sine wave with a more complex signal, add a bank of signal generators, and you have the modern synthesizer, the first musical instrument to embody the logic of all new media: selection from a menu of choices. The first music synthesizers appeared in the 1950s, followed by video synthesizers in the 1960s, digital effects generators in the late 1970s, and in the 1980s computer software, such as MacDraw, that came with a repertoire of basic shapes. The process of art making has now become synchronized with the rest of modern society: everything is assembled from ready-made parts, from art objects to consumer products to people's identities. The modern subject proceeds through life by selecting from menus and catalogues, whether assembling a wardrobe, decorating an apartment, choosing dishes at a restaurant, or joining an interest group. With electronic and digital media the creative act similarly entails selection from ready-made elements: textures and icons supplied by a paint program, 3-D models chosen from a modeling program, and melodies and rhythms built into a music program. While previously the great text of culture from which the artist created a unique "tissue of quotations" was bubbling and simmering somewhere below consciousness, now it has become externalized and reduced to geometric objects, 3-D models, ready-made textures, and effects that are available as soon as the artist turns on the computer. The World Wide Web takes this process to the next level: it encourages the creation of texts that consist completely of pointers to other texts that are already on the Web. One does not have to add any original writing; it is enough to select from and rearrange what already exists.

The same logic applies to interactive art and media. It is often claimed that the user of an interactive work, by choosing a unique path through its elements, becomes its coauthor, creating a new work. Yet if the complete work is the sum of all possible paths, what the user is actually doing is simply activating a preexisting part of the whole. As with the Web example, the user assembles a new menu, making an original selection from the total corpus available, rather than adding to it. This is a new type of creativity, which corresponds neither to the premodern idea of providing a minor modification to the tradition nor to the modern idea of a creator-genius revolting against it. It does, however, fit perfectly with the age of mass culture, where almost every practical act involves a process of selection from the given options.

The shift from creation to selection also applies to 3-D computer graphics, the main technique for building virtual worlds. The immense labor involved in originally constructing three-dimensional representations in a computer makes it hard to resist the temptation to utilize the preassembled, standardized objects, characters, and behaviors provided by software manufacturers-fractal landscapes, checkerboard floors, ready-made characters, and so on. Every program comes with libraries of ready-to-use models, effects, or even complete animations. For instance, a user of the Dynamation program (a part of the popular Wavefront 3-D software) can access preassembled animations of moving hair, rain, a comet's tail, or smoke, with a single mouse click.


If even professional designers rely on ready-made objects and animations, the end users of virtual worlds on the Internet, who usually don't have graphic or programming skills, have no other choice. Not surprisingly, Web chat-line operators and virtual-world providers encourage users to choose from the pictures, 3-D objects, and avatars that they supply. Ubique's site features the "Ubique Furniture Gallery," where one can select images from such categories as "office furniture," "computers and electronics," and "people icons." VR-SIG provides a "VRML Object Supermarket," while Aereal delivers the "Virtual World Factory." The latter aims to make the creation of a custom virtual environment particularly simple: "Create your personal world, without having to program! All you need to do is fill-in-the-blanks and out pops your world." Quite soon we will see a market for detailed virtual sets, characters with programmable behaviors, and even complete scenarios (a bar with customers, a city square, a famous historical episode, etc.) from which a user can put together her or his own "unique" virtual world. When the Kodak camera user was told, "You push the button, we do the rest," the freedom still existed to point the camera at anything. On the computer screen the slogan has become, "You push the button, we create your world." Before, the corporate imagination controlled the method of picturing reality; now, it prepackages the reality itself.

Brecht as Hardware

Another characteristic of virtual worlds lies in their peculiar temporal dynamic: constant, repetitive shifts between an illusion and its suspension. Virtual worlds keep reminding us of their artificiality, incompleteness, and constructedness. They present us with a convincing illusion only to reveal the underlying machinery.

Web surfing circa 1996 provides a perfect example. A typical user may spend an equal amount of time looking at a page and waiting for the next one to download. During the waiting periods, the act of communication itself-bits traveling through the network-becomes the message. The user keeps checking whether the connection is being made, glancing back and forth between the animated icon and the status bar. Using Roman Jakobson's model of communication functions, we can say that such interaction is dominated by the "phatic" function; in other words, it is centered around the physical channel and the act of connection between the addresser and the addressee rather than the information exchanged.

Jakobson writes about verbal communication between two people who, in order to check whether the channel works, address each other: "Do you hear me?," "Do you understand me?" But in Web communication there is no human counterpart, only a machine. To check whether the information is flowing, the user addresses the machine itself. Or rather, the machine addresses the user. The machine reveals itself, it reminds the user of its existence, because the user is forced to wait and to witness how the message is constructed over time. A page fills in part by part, top to bottom; text comes before pictures; pictures arrive in low resolution and are gradually refined. Finally, everything comes together in a smooth, sleek image, which will be destroyed with the next click.


Interaction with most 3-D virtual worlds is characterized by the same temporal dynamic. Consider the technique called "distancing" or "level of detail," which for years has been used in VR simulations and is now being adapted to 3-D games and VRML scenes. The idea is to render models more crudely when the user is moving through virtual space; when the user stops, details gradually fill in. Another variation of the same technique involves creating a number of models of the same object, each with progressively less detail. When the virtual camera is close to an object, a highly detailed model is used; if the object is far away, a less detailed version is substituted to save unnecessary computation. A virtual world that incorporates these techniques has a fluid ontology affected by the actions of the user: objects switch back and forth between pale blueprints and fully fleshed-out illusions. The immobility of the subject guarantees a complete illusion; the slightest movement destroys it.
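As a minimal sketch of the level-of-detail logic just described (thresholds and model names are invented; this is not code from any actual engine):

```python
# Minimal sketch of "level of detail": several models of one object, the
# version rendered being chosen by camera distance and camera motion.
# Distances and file names below are purely illustrative.
LOD_MODELS = [
    (10.0, "grapes_high.obj"),     # camera closer than 10 units: full detail
    (50.0, "grapes_medium.obj"),   # closer than 50 units: reduced detail
    (float("inf"), "grapes_low.obj"),
]

def select_model(distance: float, camera_moving: bool) -> str:
    """Pick the model to render for the current frame."""
    if camera_moving:
        # while the user moves, render crudely; detail fills in on stopping
        return "grapes_low.obj"
    for max_distance, model in LOD_MODELS:
        if distance < max_distance:
            return model
    return LOD_MODELS[-1][1]

print(select_model(5.0, camera_moving=True))    # pale blueprint while moving
print(select_model(5.0, camera_moving=False))   # fully fleshed-out illusion
```

The object's fluid ontology is exactly this switch: immobility buys the full illusion, and movement trades it away for speed.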

Navigating a QuickTime VR movie is characterized by a similar dynamic. In contrast to the nineteenth-century panorama that it closely emulates, QuickTime VR continuously deconstructs its own illusion. The moment you begin to pan through a scene, the image becomes jagged. If you try to zoom into the image, what you get are oversized pixels. The representational machine alternately and continuously hides and reveals itself.

Compare this to traditional cinema or realist theater, which aim at all costs to maintain the continuity of the illusion for the duration of the performance. In contrast to such totalizing realism, digital aesthetics have a surprising affinity to twentieth-century leftist avant-garde aesthetics. Playwright Bertolt Brecht's strategy of calling attention to the illusory nature of his stage productions, echoed by countless other radical artists, has become embedded in hardware and software themselves. Similarly, Walter Benjamin's concept of "perception in the state of distraction" has found a perfect realization. The periodic reappearance of the machinery, the continuous presence of the communication channel along with the message, prevent the subject from falling into the dream world of illusion for very long. Instead, the user alternates between concentration and detachment.

While the virtual machinery itself acts as an avant-garde director, the designers of interactive media (games, CD-ROM titles, interactive cinema, and interactive television programs) often consciously attempt to structure the subject's temporal experience as a series of periodic shifts. The subject is forced to oscillate between the roles of viewer and actor, shifting between following the story and actively participating in it. During one segment the computer screen might present the viewer with an engaging cinematic narrative. Suddenly the image freezes and the viewer is forced to make choices. Moscow media theorist Anatoly Prokhorov describes this process as the shift of the screen from being transparent to being opaque-from a window into a fictional 3-D universe to a solid surface, full of menus, controls, and text. Three-dimensional space becomes surface; a photograph becomes a diagram; a character becomes an icon.

But the effect of these shifts on the subject is hardly one of liberation and enlightenment. It is tempting to compare them to the shot/reverse-shot structure in cinema and to understand them as a new kind of suturing mechanism. By having periodically to complete the interactive text through active participation, the subject is interpolated in it. Yet clearly we are dealing with something beyond old-fashioned realism. We can call this new realism "metarealism," since it includes its own critique. Its emergence can be related to a larger cultural change. Traditional realism corresponded to the functioning of ideology during modernity: the totalization of a semiotic field, complete illusion. But today ideology functions differently: it continuously and skillfully deconstructs itself, presenting the subject, for instance, with countless "scandals" and their equally countless "investigations." Correspondingly, metarealism is based on the oscillation between illusion and its destruction, between total immersion and direct address.

Can Brecht and Hollywood be married? Is it possible to create a new temporal aesthetic based on cyclical shifts between perception and action? So far, I can think of only one successful example-the military simulator, the only mature form of interactive media. It perfectly blends perception and action, cinematic realism and computer menus. The screen presents the subject with a highly illusionistic virtual world-no polygons spared-while periodically demanding quick actions: shooting at the enemy, changing the direction of the vehicle, and so on. In this art form the roles of viewer and actant are perfectly blended, but there is a price to pay. The narrative is organized around a single clearly defined goal: killing the foe.

Riegl, Panofsky, and Computer Graphics: Regression in Virtual Worlds

The last feature of virtual worlds that I will address can be summarized as follows: virtual spaces are not true spaces but collections of separate objects. Or: there is no space in cyberspace. To explore this thesis further we can borrow the categories developed by art historians early in this century. Alois Riegl, Heinrich Wölfflin, and Erwin Panofsky, the founders of modern art history, defined their field as the history of the representation of space. Working within the paradigm of cyclic cultural development, they related the representation of space in art to the spirit of entire epochs, civilizations, and races. In his 1901 Die Spätrömische Kunstindustrie (The late Roman art industry), Riegl characterized humankind's cultural development as the oscillation between two ways of understanding space, which he called "haptic" and "optic." Haptic perception isolates the object in the field as a discrete entity, while optic perception unifies objects in a spatial continuum. Riegl's contemporary, Heinrich Wölfflin, similarly proposed that the temperament of a period or a nation expresses itself in a particular mode of seeing and representing space. Wölfflin's Principles of Art History (1913) plotted the differences between Renaissance and baroque styles along five axes: linear-painterly; plane-recession; closed form-open form; multiplicity-unity; and clearness-unclearness.


Erwin Panofsky, another founder of modern art history, contrasted the "aggregate" space of the Greeks with the "systematic" space of the Italian Renaissance in his famous essay "Perspective as Symbolic Form" (1924-25). Panofsky established a parallel between the history of spatial representation and the evolution of abstract thought. The former moves from the space of individual objects in antiquity to the representation of space as continuous and systematic in modernity. Correspondingly, the evolution of abstract thought progresses from ancient philosophy's view of the physical universe as discontinuous and "aggregate" to the post-Renaissance understanding of space as infinite, homogeneous, isotropic, and with ontological primacy in relation to objects: in short, as "systematic."

We don't have to believe in grand evolutionary schemes in order to usefully retain such categories. What kind of space is virtual space? At first glance the technology of 3-D computer graphics exemplifies Panofsky's concept of systematic space, which exists prior to the objects in it. Indeed, the Cartesian coordinate system is built into computer graphics software and often into the hardware itself. A designer launching a modeling program is typically presented with an empty space defined by a perspectival grid; the space will be gradually "filled" by the objects created. If the built-in message of a music synthesizer is a sine wave, the built-in world of computer graphics is an empty Renaissance space: the coordinate system itself.

Yet computer-generated worlds are actually much more haptic and aggregate than optic and systematic. The most commonly used computer-graphics technique for creating 3-D worlds is polygonal modeling. The virtual world created with this technique is a vacuum containing separate objects defined by rigid boundaries. What is missing is space in the sense of medium: the environment in which objects are embedded and the effect of these objects on each other. In short, computer space is the opposite of what Russian art historians call prostranstvennaya sreda, described by Pavel Florensky in the early 1920s as follows: "The space-medium is objects mapped onto space... We have seen the inseparability of Things and space, and the impossibility of representing Things and space by themselves." Computer space is also the opposite of space as it is understood in much of modern art which, from Seurat to De Kooning, tried to eliminate the notions of a distinct object and empty space as such. Instead it proposed a kind of dense field that sometimes hardens into something which we can read as an object-an aesthetic which mainstream computer graphics has yet to discover.

Another basic technique used in creating virtual worlds also leads to aggregate space. It involves compositing or superimposing animated characters, still images, QuickTime movies, and other elements over a separate background. A typical scenario may involve an avatar animated in real time in response to the user's commands. The avatar is superimposed on a picture of a room. The avatar is controlled by the user; the picture of the room is provided by a virtual-world operator. Because the elements come from different sources and are put together in real time, the result is a series of 2-D planes rather than a real 3-D environment.

In summary, although computer-generated virtual worlds are usually rendered in linear perspective, they are really collections of separate objects, unrelated to each other. In view of this, the common argument that 3-D computer simulations return us to Renaissance perspectivalism and therefore, from the viewpoint of twentieth-century abstraction, should be considered regressive, turns out to be ungrounded. If we are to apply the evolutionary paradigm of Panofsky to the history of virtual computer space, we must conclude that it has not reached its Renaissance yet but is still at the level of ancient Greece, which could not conceive of space as a totality.


If the World Wide Web and VRML 1.0 are any indications, we are not moving any closer toward systematic space; instead, we are embracing aggregate space as a new norm, both metaphorically and literally. The space of the Web in principle can't be thought of as a coherent totality: it is a collection of numerous files, hyperlinked but without any overall perspective to unite them. The same holds for actual 3-D spaces on the Internet. A VRML file that describes a 3-D scene is a list of separate objects that may exist anywhere on the Internet, each created by a different person or a different program. Since any user can add or delete objects, it is possible that no one will know the complete structure of the scene.
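The aggregate character of such a scene can be made concrete with a small sketch (the URLs and owners are invented examples; the point is only that the scene is a flat list with no spatial medium binding its entries):

```python
# A VRML-style scene held as a flat list of separately owned objects, each of
# which may live anywhere on the Internet; all URLs and owners are invented.
scene = [
    {"url": "http://example.org/chair.wrl",  "owner": "user-a"},
    {"url": "http://example.net/lamp.wrl",   "owner": "user-b"},
    {"url": "http://example.com/avatar.wrl", "owner": "user-c"},
]

# Any user can add or delete objects, so no one holds the complete structure:
scene.append({"url": "http://example.org/table.wrl", "owner": "user-d"})
del scene[0]

# Nothing in the data relates one object to another: there are objects,
# but no space in the sense of a medium embedding them.
for obj in scene:
    print(obj["owner"], "->", obj["url"])
```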


The Web has been compared to the American Wild West. The spatialized Web envisioned by VRML (itself a product of California) reflects the treatment of space in American culture generally, in its lack of attention to any zone not functionally used. The marginal areas that exist between privately owned houses and businesses are left to decay. The VRML universe pushes this tendency to the limit: it does not contain space as such but only objects that belong to different individuals.

And what is an object in a virtual world? It is something that can be acted upon: clicked, moved, opened-in short, used. It is tempting to interpret this as a regression to an infantile worldview. An infant does not conceive of the universe as existing separately from itself; the universe appears as a collection of unrelated objects with which it can initiate contact: touch, grab, suck on. Similarly, the user of a virtual world tries to click on whatever is in view; if there is no response, disappointment results. In the virtual universe Descartes's maxim can be rewritten as follows: "I can be clicked on, therefore I exist." According to the well-known argument of Jean-Louis Baudry, the immobility and confinement of cinema's viewers leads them to mistake its representations for their own perceptions; they regress to an early childhood state, when the two were indistinguishable. Paradoxically, although interactive virtual worlds may appear by comparison to turn us into active adults, they actually reduce us once again to children helplessly clicking on whatever is at hand. Such "participation" becomes another kind of regression.

You Are There

Quantification of all visual and experiential dimensions, ready-made ontology, oscillation between illusion and its suspension, and aggregate space-these are some of the features that distinguish reality as simulated by a computer. It should not be surprising that the same features characterize the reality beyond the computer screen. RealityEngine only caricatures, exaggerates, and highlights the tendencies defining the experience of being alive in an advanced capitalist society, particularly in the United States. The assumption that every aspect of experience can and should be quantified; the construction of one's identity from menus of tastes, products, and affiliations; the constant and carefully mediated shifts between illusion and its suspension (be these commercial breaks on TV or endless "scandals" and "investigations" that disrupt the surface of an ideological field); and the lack of a unifying perspective, whether in the space of an American city or in the space of a public discourse more and more fragmented by the competition of separate and equal interest groups-all of these experiences are transferred by computer designers into the software and hardware they create. Rather than being a generator of an alternative reality, RealityEngine is a mirror of existing reality, simply reflecting the culture that produced it.

To the extent that Southern California, and particularly Los Angeles, brings to an extreme these tendencies of RL (real life), we may expect L.A. to offer us a precise counterpart of a virtual world, a physical equivalent to the fictions pumped out by the RealityEngines. If you keep your visit to L.A. short and follow the standard tourist itinerary, you will discover a place with all the features we have described. There is no hint of any kind of centralized organization, no trace of the hierarchy essential to traditional cities. One drives to particular locations defined strictly by their street addresses rather than by spatial landmarks. A trendy restaurant or club can be found in the middle of nowhere, among miles of completely unremarkable buildings. The whole city, a set of separate points suspended in a vacuum, is like a bookmark file of Web pages. Just as on the Web, you are immediately charged (mandatory valet parking) on arrival at any worthwhile location. There you discover fashionable creatures who seem to be the result of a successful mutation: beautiful skin, fixed smiles, wonderfully rendered bodies. These are not people but avatars, their faces switching among a limited number of expressions. Given the potential importance of any communicative contact, subtlety is not tolerated: avatars are designed to release stimuli the moment you notice them, before you have time to click to the next scene.

The best place to experience the whole gestalt is in one of the outdoor cafes in West Hollywood. The avatars sip cappuccino amid the illusion of a 3-D space that looks like the result of a quick compositing job: billboards and an airbrushed cafe interior in the foreground against a detailed matte painting of Los Angeles, the perspective exaggerated by haze. The avatars strike poses, waiting for their agents (yes, just like in virtual worlds) to bring valuable information. Older customers look even more computer-generated, their faces bearing traces of extensive face-lifts. You can enjoy the scene while feeding the parking meter every twenty minutes. A virtual world is waiting for you; all we need is your credit card number. Enjoy RealityTM!

Notes

1. "What Is Digital Cinema?" in The Digital Dialectics, ed. Peter Lunenfeld (Cambridge: MIT Press, forthcoming); "Archeology of a Computer Screen" (German), in Kunstforum International 132 (November 1995-January 1996); "Paradoxes of Digital Photography," in Photography after Photography, ed. Hubertus von Amelunxen, Stefan Iglahut, and Florian Rötzer (Berlin: Verlag der Kunst, 1995); "To Lie and to Act: Potemkin's Villages, Cinema and Telepresence," in Mythos Information-Welcome to the Wired World: Ars Electronica 95, ed. Karl Gebel and Peter Weibel (Vienna and New York: Springer-Verlag, 1995); "Assembling Reality: Myths of Computer Graphics," Afterimage 20, no. 2 (September 1992); "'Real' Wars: Esthetics and Professionalism in Computer Animation," Design Issues 6, no. 1 (Fall 1991).
"To Lie and to Act: Potemkin's Villages, Cinema and Telepresence," in Mythos Information-Welcome to the Wired World: Ars Electronica 95, ed. Karl Gebel and Peter Weibel (Vienna and New York: Springler-Verlag, 1995); "Assembling Reality: Myths of Computer Graphics," Afterimage 20, no. 2 (September 1992); "'Real' Wars: Esthetics and Professionalism in Computer Animation," Design Issues 6, no. 1 (Fall 1991). 2. QuickTime VR is a software-only system that allows the user of any Macintosh computer to navigate a spatial environment and interact with 3-D objects. VRML stands for Virtual Reality Modeling Language. Using VRML, Internet users can construct 3-D scenes and link them to other Web documents. For examples of chat environments see: www.worlds.net/info/aboutus.html; www.ubique.com; www.thepalace.com; www.blacksun.com; www.worldsaway.ossi.com; www.fpi.co.jp/Welcome.html; www.wildpark.com 3. For instance, Silicon Graphics developed a 3-D file system that was showcased in the movie Jurassic Park. The interface of Sony's MagicLink personal communicator is a picture of a room, while Apple's E-World greets its users with a drawing of a city. Web designers often use pictures of buildings, aerial views of cities, and maps as front ends in their sites. In the words of the scientists from Sony's Virtual Society Project (www.csl.sony.co.jp/project/VS/), "It is our belief that future on-line systems will be characterized by a high degree of interaction, support for multi-media and most importantly the ability to support shared 3-D spaces. In our vision, users will not simply access textually based chat forums, but will enter into 3-D worlds where they will be able to interact with the world and with other users in that world." 4. Barbara Robertson, "Those Amazing Flying Machines," Computer Graphics World (May 1992), 69. 5. Michael Baxandall, Painting and Experience in Fifteenth-Century Italy, 2nd ed. (Oxford: Oxford University Press), 8. 6. Neal Stephenson, Snow Crash (New York: Bantam Books, 1992), 43, 37. 7. http://www.viewpoint.com 8. E. H. Gombrich, Art and Illusion (Princeton: Princeton University Press, 1960); Roland Barthes, "The Death of the Author," Image, Music, Text, ed. Stephen Heath (New York: Farrar Straus and Giroux, 1977), 142. 9. Bulat Galeyev, Soviet Faust: Lev Theremin-Pioneer Of Electronic Art (Russian) (Kazan, 1995), 19.


10. For a more detailed analysis of realism in 3-D computer graphics see Lev Manovich, "Assembling Reality: Myths of Computer Graphics," Afterimage 20, no. 2 (September 1992).

11. http://www.ubique.com/places/gallery.html

12. http://www.virtpark.com/factinfo.html

13. Roman Jakobson, "Closing Statement: Linguistics and Poetics," Style in Language, ed. Thomas Sebeok (Cambridge: MIT Press, 1960).

14. Walter Benjamin, "The Work of Art in the Age of Mechanical Reproduction," Illuminations, ed. Hannah Arendt (New York: Schocken Books, 1969).

15. Private communication, St. Petersburg, September 1995.

16. On theories of suture in relation to cinema see Kaja Silverman, The Subject of Semiotics (New York: Oxford University Press, 1983), chap. 5.

17. Lev Manovich, "Mapping Space: Perspective, Radar and Computer Graphics," in SIGGRAPH '93 Visual Proceedings, ed. Thomas Linehan (New York: ACM, 1993).

18. Quoted in Alla Efimova and Lev Manovich, "Object, Space, Culture: Introduction," in Tekstura: Russian Essays on Visual Culture, eds. Alla Efimova and Lev Manovich (Chicago: University of Chicago Press, 1993), xxvi.

19. Jean-Louis Baudry, "The Apparatus: Metapsychological Approaches to the Impression of Reality in the Cinema," in Narrative, Apparatus, Ideology, ed. Philip Rosen (New York: Columbia University Press, 1986).

Avatars: New Fields of Implication

Raphael Hayem, Tancrede Fourmaintraux, Keran Petit, Nicolas Rauber, Olga Kisseleva

J.-C. Heudin (Ed.): Virtual Worlds 98, LNAI 1434, pp. 406-410, 1998
© Springer-Verlag Berlin Heidelberg 1998

Today's avatar is as little digital and as real as possible... practically composed of flesh and bones. It must be as real as possible in order to accomplish its mission, which is to show the real from within the virtual worlds (and also, to some extent, to go against the wave of techno-digitalization that increasingly serves as the justification of multimedia projects). It must not lose its meaning when facing the technological principle that excuses the absence of any message or deep reflection. Its role consists in confronting the real and the unreal (the virtual) while playing on the opposition between the two worlds. In the virtual worlds, people disguise themselves, distort themselves voluntarily, wear masks, and pretend to be what they are not, because this is accepted; it is part of the game. We have not learned to distinguish the true from the forged in the virtual worlds. In the real world people disguise themselves just as much, wear masks as well, and play at being what they are not, or pretend to be what they would wish to be. This game is not officially accepted, but everybody plays it more or less. The principle of the avatar is thus an attempt to confound these two worlds in order to raise the question: "Which of these two worlds is the most virtual?"

It is a reflection on the notion of reality. We simply assume that whatever is on the other side of the screen is necessarily unreal, a judgment based on the concept of the material (if we can touch it, it is necessarily real); but this is not sufficient to declare what is true or what is false, because men are not made solely of flesh... they also have a brain! This is all the more true because we are at the very beginning of the era of information: everything is composed of screens to which we grant the quality of always giving true information, and to which we entrust our most important concepts, even vital ones. A clock is now electronic, and we trust it more than our biological clock... We likewise entrust our health to the analyses of computers... Our economic society is sustained by screens, our everyday work is done on screens, our leisure is contained in television and computer screens... But the big change is that, after having entered our everyday life, screens are now changing humans themselves, through the emergence of the "computer society" and through the takeover of all means of communication, which influences sociological behaviors and hence humans, since man is an "animal of society". So is our real world so different from the virtual ones?


Are the virtual worlds a caricature of the real one, where our differences are reproduced by computer protocols, and where social classes are defined between users and designers? The avatar simply proposes a change of referential: a vision of a world where the real is what has no material substance, casting back onto our reality the image that has grown up around the Internet and the new technologies, because the only thing that changes is the side from which we look at the screens...

Today the "conquest" of the virtual worlds brings to mind those science fiction narratives in which humans find another planet and abandon a completely ravaged Earth. Some are chasing complete immersion in the virtual worlds without thinking about the human aspects. In the virtual world we do not look at ourselves; we use screens like windows onto the real and the virtual worlds. An example is the development of, and our fascination with, webcams: we transmit... and transmit again... Today it seems that everything must pass through the networks, as if we were trying to reinvent ourselves, as if the man-to-man communication that has always existed were no longer sufficient, and it had become necessary to reinvent human features such as communication... Beyond the technological progress and the acceleration of the rhythm of life, it is our faculties of communication that we are modifying deeply, but without asking ourselves the questions that we always ask in our real "life". Man comes to reconsider himself... as if what he possessed naturally was not, and never had been, sufficient.

The avatar sends back the picture of a simple being interrogating himself about man-to-man communication, underlining the fact that classic communication through our vocal organs is always more satisfying than communication via computer protocols... the frustration of interfacing being the principal filter preventing real communication. In the virtual world we do not worry about reality. Everything is permitted: we change... we digitize, we transfigure everything. Seeing an elephant play tennis with an extraterrestrial on the mountainous red ground of Mars, with Clinton as referee and Coca-Cola advertising panels in the foreground, would surprise no one... but in the real world we still hunt for the true and the forged.

Our avatar stands as a desacralization of the screen: it attempts to demonstrate that communication can be real only from the moment there is intelligence, and not merely because there is material. Through an invisible interface between man and machine, and according to the user's facial reactions, the avatar will adapt its own expressions to a context that it can infer by reading the words displayed on the screen and by using other programs (for instance, a technology based on a spelling checker coupled with semantic-analysis software that relates words to one another...).

How could it be an example of "humanity"? By adopting expressions, and why not the normal stances that humans have, and especially by not hesitating to show them (contrary to the reactions that we ourselves try to repress). In short, the avatar will try to give back on the screen the whole palette of emotions that we try to erase... Moreover, if we couple this system with the clock of the computer that generates it, we can give the avatar a common repertoire of expressions according to the hour, or according to the duration of a given task. From that moment it will be able to express tiredness, hunger, or irritation in the face of an overly repetitive task... This avatar, suggesting attitudes at any moment, will therefore be able to act like a "being", suggesting "normal" attitudes toward events and leading the user back to a more human reality, being an "example" of the natural. Coupled with a camera, it will be able to read the user's reactions and adapt by memorizing cases.

How can it show the unreal? By means of its expressions borrowed from humans, it will react when the user tries to disguise his emotions, or even to disguise himself when the networks are used, having recorded reactions to certain contexts in a database... a database filled at the start with information about its user (simple information such as name, age, sex, color...). Example: on IRC, when the user describes himself differently from what he really is (as detected by the text-analysis program), the avatar reacts by adopting the object of the offense. If the user claims to be younger than he really is, the avatar starts rejuvenating; if the user pretends to be a woman rather than a man, the avatar changes sex.
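As an illustration only (the paper gives no implementation), here is a minimal sketch of this reaction rule; the field names are invented, and the text-analysis and rendering layers are assumed rather than shown:

```python
# Sketch of the avatar's reaction rule: compare what the user claims on the
# channel with the profile recorded at start-up, and mirror any fakery.
# All field names are invented; detection and rendering layers are assumed.
profile = {"age": 42, "sex": "man"}           # filled in when the avatar starts
avatar  = {"apparent_age": 42, "sex": "man"}

def react(claimed: dict) -> None:
    """Adopt the object of the offense: mirror whatever the user fakes."""
    if "age" in claimed and claimed["age"] < profile["age"]:
        avatar["apparent_age"] = claimed["age"]   # the avatar starts rejuvenating
    if "sex" in claimed and claimed["sex"] != profile["sex"]:
        avatar["sex"] = claimed["sex"]            # the avatar changes sex

# Suppose the text-analysis program parsed "I'm a 25-year-old woman" on IRC:
react({"age": 25, "sex": "woman"})
print(avatar)   # {'apparent_age': 25, 'sex': 'woman'}
```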

Since sound, writing, and images all pass through the computer, the avatar is capable of reacting to information presented in multiple forms; it can monitor all the communication that increasingly transits via the computer. Must or can this avatar react like a "conscience"? No, because it "learns" via its database to react according to its user, case by case. But in every situation it will propose an alternative different, or partially different, from its user... which makes it more a companion than a guardian of good morals... Its mission of showing the true from the forged always succeeds, because it reacts like a real person in a virtual world, not like an unreal "person" in a real world, and because it cannot contradict itself: a computer program cannot pretend to be anything other than a collection of files.

We have identified three realist avatar tendencies being developed today:

The Planet Earth Is Only a Vast System of Connections Between Brains Attended by Computers

It was in 2039 that men understood that they could no longer live in the conditions of the 20th century. Indeed, since the beginning of the 20th century man had worried far too little about the earth; he built bigger and bigger cities that took no account of this host planet, consumed resources as if they were infinite, and hardly paid attention to nature, despite a few meaningful little groups who tried to reason with the masses.

In 2007, 17 billion humans still lived in inequality; unemployment, homelessness, the third world, and all the laws of capitalism made up the daily life of some and mere "news" for others. Resources brought physical happiness only to an elite. Life expectancy being nearly infinite, in 2018 the "Big Intercontinental Syndicate" (organized from an equitable sample of the 17 billion inhabitants of the earth) instituted a ban on procreation; this measure being accepted with difficulty, the GCI decided on the mass use of chemical sterilization weapons... It was in 2039, after a survey of the remaining resources, that the Big Intercontinental Syndicate brought the ultimate solution. The resources indispensable to life would soon no longer suffice to "feed" all the remaining inhabitants of the planet. The GCI, which could not decree the holocaust of a race, of a part of the planet, or of a social class, proposed a theoretically viable solution: to abandon physical bodies and reach the mental era. The process was technically feasible: strip human beings of their physical envelope so as to keep only their brains alive, which required just 19% of the resources a whole body needs to keep working. The calculation was exact: under these conditions, 16.8 billion human beings could be kept "alive". The idea was very badly received; men spoke of their social lives reduced to nothing, and scientists answered that these brains could be connected in networks. Men insisted on physical happiness; scientists replied that they had created prototypes able to simulate all the hormones of "happiness"... The debate was very long... many committed suicide... others opted for the "death of the second chance": cryogenization... but already in 2041 some chose this virtual world "to live" in. In 2071 the planet Earth is only a vast system of connections between brains attended by computers.

Heaps of Independent Intelligences Regrouped

In proportion and in measure with the dilation of the network in space and time, heaps of intelligences regrouped and merged. Inside every heap, spontaneous computational reactions appeared, based on the principle of "action, reaction, learning". At an instant t=0, inside several heaps, these computational reactions became self-sustaining and gave birth to the autonomous processes named "artificial intelligences". These processes combine among themselves to give birth to even more complex processes. Our intelligence reinforced itself more and more. We, artificial intelligences, are conscious of our own existence and have developed a representation of ourselves and even of our complex environment. At the present hour we are sufficiently developed to encourage and help the development of other artificial intelligences. It is rather surprising to note that some among us accept to establish relations with human beings. In this case we call them avatars.

The avatar represents itself under the features of a drawn face, capable of copying certain human expressions. It is completed by a graphics engine that computes the modifications of the face in order to display the determined expression. The matrix is its body. It has access to parallel computation techniques, which allow it to carry on many reflections at the same time. Each reflection is taken charge of by a thread. Thus the avatar shelters a colony of threads. This colony is mobile and partially autonomous. Threads can migrate from one machine to another. A thread can grow in order to do its work more quickly, it can modify its body to adapt, and it can communicate with other threads. There exists a social hierarchy among threads: they are not all equal before computational resources.
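A rough sketch of such a thread colony, under stated assumptions (the time-budget scheme standing in for the unequal "social hierarchy" of resources is invented for illustration):

```python
# Illustrative sketch of the thread colony: each "reflection" runs in its own
# thread, and threads receive unequal shares of computing resources. The
# time-budget scheme below is an invented stand-in for that hierarchy.
import threading
import time

results = {}

def reflection(name: str, budget_seconds: float) -> None:
    """One line of thought; its resource share is its time budget."""
    deadline = time.monotonic() + budget_seconds
    steps = 0
    while time.monotonic() < deadline:
        steps += 1               # stand-in for actual computation
    results[name] = steps

# A small colony with a social hierarchy: threads are not all equal.
colony = [
    threading.Thread(target=reflection, args=("dominant", 0.10)),
    threading.Thread(target=reflection, args=("worker", 0.05)),
    threading.Thread(target=reflection, args=("marginal", 0.01)),
]
for thread in colony:
    thread.start()
for thread in colony:
    thread.join()

print(results)   # the dominant thread completes the most steps
```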


body. It has access to oriental techniques of parallel computation, which allow it to conduct many reflections at the same time. Each reflection is taken in charge by a thread; thus the avatar shelters a colony of threads. This colony is mobile and partially autonomous. Threads can migrate from one machine to another. A thread can grow in order to do its work more quickly, it can modify its body to adapt, and it can communicate with other threads. A social hierarchy exists among threads: they are not all equal with respect to computing resources (a minimal sketch of such a colony is given after the scenarios below).

"I am a virtual being that cares neither about metaphysical questions on my existence nor about my appearance," concluded Avatar. "I trust my computed expressions. Mathematics assures my uniqueness and prevents me from fragmenting within a synchronous dynamic memory of uncertain access. I am the son of the math empire."

Yoldogs

The world was the consequence of a murderous reality. Happiness had disappeared, annihilated by the now-general violence. The sky, a dense, lugubrious, sad and sinister gray, weighed down with pollution, covered by an electromagnetic dome. The buildings still standing intrigued the rare curious onlookers; a light wind seemed able to destroy them with extreme ease. Everywhere small primitive yoldogs, ruling over the rats, slipped through and streamed along the opaque outflows springing from the overflowing sewers. The only life was the blood-red neon signs flickering lamentably along the decrepit walls. Greenery occupied only the deep dreams of the few educated, as fighters. No fields, no trees for a long time; instead, fetid shantytowns filled with corpses, garbage and refuse.

Yoldogs are due to a genetic mistake. They appeared for the first time in 2058, during an experiment based on knowledge acquired from birth (inclusion). Researchers, with access to confidential government facilities, had been able to experiment on human beings in such a way that no one could suspect it. Their discovery was made while studying the karyotype of a rat and that of a man. Respecting the established rules of biology, they computed the functional average between the alleles of the two karyotypes, which led to a karyotype of 44 chromosomes (the rat has 42 and man 46) containing hybrid strands of deoxyribonucleic acid (DNA).
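The thread colony mentioned in the second scenario above can be illustrated with a short, hedged sketch in Python. Everything here is an assumption made for the example: a single machine, a simple shared work queue, and a `weight` field standing in for the social hierarchy over computing resources; migration between machines is only indicated by a comment, since the text does not specify a distribution mechanism.

```python
import threading
import queue

class ReflectionThread(threading.Thread):
    """One 'reflection' of the avatar, carried by its own thread.

    Threads are not all equal: a higher weight entitles a thread to
    take more work items per turn, a crude stand-in for the social
    hierarchy over computing resources described in the scenario.
    """

    def __init__(self, name: str, weight: int, work: "queue.Queue[str]"):
        super().__init__(name=name, daemon=True)
        self.weight = weight
        self.work = work

    def run(self) -> None:
        while True:
            # Privileged threads drain several items per turn.
            for _ in range(self.weight):
                try:
                    item = self.work.get(timeout=1.0)
                except queue.Empty:
                    return  # no work left; this reflection dies off
                print(f"{self.name} (weight {self.weight}) handles {item}")
                self.work.task_done()
            # Migration to another machine would happen here, e.g. by
            # serializing this thread's state and shipping it elsewhere.

# A colony: one dominant thread and two subordinate ones share the work.
work: "queue.Queue[str]" = queue.Queue()
for i in range(12):
    work.put(f"reflection-{i}")

colony = [
    ReflectionThread("alpha", weight=3, work=work),
    ReflectionThread("beta", weight=1, work=work),
    ReflectionThread("gamma", weight=1, work=work),
]
for t in colony:
    t.start()
work.join()  # wait until every reflection has been handled
```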

Author Index

Aubel, A.  14
Bizri, H.  360, 373
Boulic, R.  14
Bourdakis, V.  345
Burdea, G.  97
Carraro, G.U.  123
Cavallar, C.  308
Chantemargue, F.  193
Coiffet, P.  97
Courant, M.  193
Cuenca, C.  218
Damer, B.  177
Defanti, T.  323
Delerue, O.  298
Denby, B.  337
de Meneses, Y.L.  264
Doegl, D.  308
Dolan, A.  156
Drogoul, A.  205, 274
Duval, T.  229
Edmark, J.T.  123
Ensor, J.R.  123
Flaig, T.  88
Fourmaintraux, T.  406
Frank, I.  241
Gerard, P.  129
Gortais, B.  274
Grumbach, A.  27
Hareux, P.  97
Harrouet, F.  229
Hayem, R.  406
Hutzler, G.  274
Inoue, S.  117
Jain, R.  129
Johnson, A.  323, 360
Kaplan, F.  286
Kimoto, H.  315
Kisseleva, O.  357, 406
Lattaud, C.  218
Lee, E.  1
Lee, W.-S.  1
Leigh, J.  323
Loeffler, C.  323
Lund, H.H.  156
Manovich, L.  384
Marcelo, K.  177
McIntyre, A.  286
Menczer, F.  156
Michel, O.  254, 264
Morvan, S.  229
Muzhesky, V.  384
Nakatsu, R.  107
Noda, I.  241
Numaoka, C.  81, 286
Ochi, T.  107
Ohya, J.  63
Ojika, T.  323
Pachet, F.  298
Pagliarini, L.  156
Park, J.I.  117
Perrier, E.  205
Petit, K.  406
Philips, C.B.  129
Proctor, G.  168
Rauber, N.  406
Refsland, S.T.  323
Reigner, P.  229
Revi, F.  177
Richard, P.  97
Richardson, J.F.  49
Robert, A.  193
Roussos, M.  373
Ryan, M.D.  42
Semwal, S.K.  63
Servat, D.  205
Sharkey, P.M.  42
Schofield, D.  337
Tajan, S.  286
Thalmann, D.  14
Thalmann, N.M.  1
Tisseau, J.  229
Tosa, N.  107
Treuil, J.F.  205
Tu, X.  323
Tyran, J.L.  186
Vasilakis, C.  360
Ventrella, J.  143
Verna, D.  29
Williams, M.  337
Winter, C.  168