Eye Movement: Theory, Interpretation, and Disorders [1st ed.] ISBN 9781617287428, 9781617281105


English, 231 pages, 2010


Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.

EYE AND VISION RESEARCH DEVELOPMENTS


EYE MOVEMENT: THEORY, INTERPRETATION, AND DISORDERS

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.


EYE AND VISION RESEARCH DEVELOPMENTS

Additional books in this series can be found on Nova's website under the Series tab.


Additional e-books in this series can be found on Nova's website under the E-book tab.


EYE AND VISION RESEARCH DEVELOPMENTS

EYE MOVEMENT: THEORY, INTERPRETATION, AND DISORDERS

DOMINIC P. ANDERSON


EDITOR

Nova Science Publishers, Inc.
New York

Copyright © 2011 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher.

For permission to use material from this book please contact us: Telephone 631-231-7269; Fax 631-231-8175. Web Site: http://www.novapublishers.com

NOTICE TO THE READER

The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers' use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works.


Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication.

This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

Additional color graphics may be available in the e-book version of this book.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

Eye movement : theory, interpretation, and disorders / editor, Dominic P. Anderson. p. ; cm. Includes bibliographical references and index.

ISBN:  (eBook)

1. Eye--Movements. 2. Eye--Movement disorders. I. Anderson, Dominic P. [DNLM: 1. Eye Movements--physiology. 2. Eye Movement Measurements. 3. Ocular Motility Disorders. WW 400 E968 2010] QP477.5.E92 2010 612.8'46--dc22 2010016686

Published by Nova Science Publishers, Inc. † New York

CONTENTS

Preface                                                                vii

Chapter 1   Eye Movements in Non-Visual Cognition
            Dragana Micic and Howard Ehrlichman                          1

Chapter 2   Eye Movements in Congenital Nystagmus and Oculomotor
            Systems Alterations
            Pasquariello Giulio, Cesarelli Mario, Romano Maria,
            Bifulco Paolo, La Gatta Antonio and Fratini Antonio         53

Chapter 3   Fixational Eye Movements and Ocular Aberrometry
            Justo Arines Piferrer                                       67

Chapter 4   Eye-Movement Patterns in Hemispatial Neglect
            Sergio Chieffi, Alessandro Iavarone, Andrea Viggiano,
            Giovanni Messina, Marcellino Monda and Sergio Carlomagno    81

Chapter 5   Eye-Gaze Input System Based on Image Analysis under
            Natural Light
            Kiyohiko Abe, Shoichi Ohi and Minoru Ohyama                 91

Chapter 6   What We See and Where We Look: Bottom-Up and Top-Down
            Control of Eye Gaze
            Laura Perez Zapata, Maria Sole Puig and Hans Supèr         103

Chapter 7   Characterizing Eye Movements for Performance Evaluation
            of Software Review
            Hidetake Uwano                                             119

Chapter 8   The Ontogenetic Hypothesis of Rapid Eye Movement Sleep
            Function Revisited
            James P. Shaffery and Howard Roffwarg                      161

Chapter 9   Behavioral Elements during Face Processing: Eye and Head
            Movement Activity and Their Connection to Physiological
            Arousal
            Andreas Altorfer, Marie-Louise Käsermann, Ulrich Raub,
            Othmar Würmle and J. Thomas Müller                         199

Index                                                                  209


PREFACE

Eye movements are a key factor in human vision. During steady-state fixation, in which visual attention is voluntarily centered on a fixation stimulus, the eye shows small involuntary movements that follow an erratic trajectory. This book describes fixational eye movements, their mathematical models, and the factors that affect fixation. Also explored herein are congenital nystagmus, one of the disorders that affect binocular vision by reducing a subject's visual quality; saccadic eye movements; eye-movement patterns in hemispatial neglect; and eye-gaze input systems used by people with severe disabilities.

Saccadic eye movements direct the eyes toward new points of interest in the visual environment. Although predominantly externally triggered and performed in the service of vision, saccades also occur during cognitive activities that do not seem to require visual processing. In many examples of such "stimulus-independent" thinking, high rates of saccadic activity have been observed. At the same time, there are also examples of stimulus-independent thinking in which the eyes remain static. In either case, there is compelling evidence that these patterns of saccadic activity are linked more to task-related processing of internal information than to visual processing. The authors refer to saccades that accompany processing of internal information as non-visual eye movements (NVEMs), and to variations in saccadic activity during different cognitive tasks as non-visual gaze patterns (NVGPs). Although a regularity of human behavior, patterns of stimulus-independent saccadic eye movements have been studied only intermittently in recent years. In Chapter 1, the authors review the history of research on such eye movements and describe evidence linking them to aspects of memory function. Historically, there have been two distinct themes in this research area. One theme emphasizes the interaction between endogenous cognitive activity and visual processing.
For example, in the context of social interaction, eye movements have been studied for their role in conversation and gaze aversion. In this view, saccadic activity is driven by the interplay between the need to attend to internal thought processes and the need to minimize the potentially distracting effects of the visual presence of another person. A second theme has been that spontaneous saccadic activity directly reflects the nature of ongoing cognitive activity. There have been three different variants of this theme. The quasi-visual approach examined saccadic activity as it pertains to visual aspects of cognition, either in terms of visual perception or visual imagery. The activation approach considered saccadic activity a manifestation of either general motoric activation or asymmetrical hemispheric activation. The cognitive processing approach studied ocular activation as a reflection of the specific processing requirements of cognitive tasks. Empirical and anatomical evidence converge to support the idea that ocular activity occurring when no visual processing is required by a task may be linked to memory functions. The proposed linkage between eye movements and memory has both theoretical and practical implications, ranging from the evolution of higher cognitive functions to the use of ocular activity as an indicator of cognitive processes in situations such as driving while conversing and altered states of consciousness.

Congenital nystagmus (CN) is one of the disorders that can affect binocular vision, reducing a subject's visual quality. It is an ocular-motor disorder characterised by involuntary, conjugated ocular oscillations and, although identified more than forty years ago, its pathogenesis is still under investigation. This kind of nystagmus is termed congenital (or infantile) because it can be present at birth or arise in the first months of life. The majority of patients affected by CN show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, oscillations of the nystagmus. However, the image of a target can still be stable during short periods in which eye velocity slows while the target image falls on the fovea (called foveation intervals). CN etiology has been related to deficiencies in the saccadic, optokinetic, smooth pursuit, and fixation systems, as well as in the neural integrator for conjugate horizontal gaze. Although numerous studies have described CN pathophysiology and its relation to the visual system, CN etiology at present remains unclear. In recent years, a number of control-system models have been developed to reproduce CN; their results do not fully agree on the origin of these involuntary oscillations, but it appears that the oscillations are related to an error in 'calibration' of the eye movement system during fixation.
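The foveation intervals mentioned above, short windows in which the eye is both near the target and moving slowly, can be located in a nystagmus trace by thresholding position error and velocity. The following minimal sketch simulates a jerk-type waveform and applies such criteria; all waveform parameters and the 0.5 deg / 4 deg/s thresholds are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Crude simulation of a jerk-type congenital nystagmus trace: an
# accelerating slow phase drifting off target, then a quick phase
# resetting the eye. Parameters are illustrative assumptions.
fs = 500                          # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)       # 2 s of data
freq = 3.0                        # oscillation cycles per second
amp = 2.0                         # oscillation amplitude (deg)
k = 3.0                           # slow-phase acceleration factor

phase = (t * freq) % 1.0
position = amp * np.expm1(k * phase) / np.expm1(k)   # deg from target
velocity = np.gradient(position, 1 / fs)             # deg/s

# Foveation criteria: eye close to the target AND moving slowly.
foveating = (np.abs(position) < 0.5) & (np.abs(velocity) < 4.0)
print(f"fraction of time in foveation: {foveating.mean():.2f}")
```

With an accelerating slow phase, most foveation time falls just after each quick-phase reset, which is the qualitative picture the chapter describes.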
Chapter 2 aims to present some of the different models of the oculomotor system and to discuss their ability to describe CN features extracted from eye movement recordings. Use of these models can improve knowledge of CN pathogenesis and could thereby support treatment planning and therapy monitoring.

Eye movements are a key factor in human vision. During steady-state fixation, in which visual attention is voluntarily centred on a fixation stimulus, the eye shows small involuntary movements that follow an erratic trajectory. Various studies suggest that these fixational eye movements are crucial for visual perception and the maintenance of visual attention. However, they play a less constructive role in ocular aberrometry when they occur during the measurement of ocular aberrations. The objective instruments used to determine the optical quality of the eye include a fixation stimulus intended, with the subject's collaboration, to keep the eye pupil centred on the instrument axis. Although useful for obtaining a coarse centring, the fixational movements induce an erratic effective lateral displacement of the eye pupil during the ocular inspection, preventing the fine alignment that correct estimation of the eye's optical aberrations requires. The authors show in this chapter how these involuntary fixational movements influence the estimated ocular aberrations, inducing an erroneous estimation of the statistical properties of individual ocular refractive errors and therefore limiting their correction. Chapter 3 is organized as follows. In section 1 the authors describe the fixational eye movements, their mathematical models, and the factors that affect fixation. At the end of this section they include a brief description of the main systems used nowadays for measuring eye movements. In section 2 the authors introduce the concept of ocular aberrometry, starting with its description and continuing with the


presentation of the mathematical representation of ocular aberrations and a description of Hartmann-Shack wavefront sensors. Section 3 is devoted to analyzing the influence of the fixational eye movements on the estimation of ocular aberrations and on the correction of refractive errors via refractive surgery or customized contact lenses. This analysis is based on the examination of mathematical models and numerical simulations. In section 4 the authors discuss future prospects concerning the treatment of fixational movements in the framework of ocular aberrometry. Finally, in section 5 the authors present their conclusions.

As discussed in Chapter 4, hemispatial neglect is usually defined as a failure to attend to the contralesional side of space. One approach to the study of neglect has been to evaluate the eye movement patterns of neglect patients, on the assumption that eye movements are a valid indicator of the direction of their spatial attention. Eye movements have been analyzed while patients performed different kinds of tasks, such as line bisection, visual search, text reading, and scene and face viewing. Overall, these studies showed that in neglect patients visual fixations and attention are oriented preferentially towards the ipsilesional side, with a marked lack of active exploration of the contralesional side.

The eye-gaze input system has been reported as a novel human-machine interface. Operating such a system requires only the user's eye movements. Many communication aids based on eye-gaze input have been developed for people with severe physical disabilities, such as patients with amyotrophic lateral sclerosis (ALS). Eye-gaze input systems commonly employ a non-contact eye-gaze detection method, for which either infrared or natural light can serve as the light source. The method that uses infrared light can detect eye-gaze with a high degree of accuracy; however, it requires a high-cost device.
The detection method that uses natural light requires only ordinary devices, such as a home video camera and a personal computer; a system using this method is therefore cost-effective. However, systems that operate under natural light often have low accuracy and, as a result, can classify only a few indicators for eye-gaze input. The authors have developed an eye-gaze input system for people with severe physical disabilities, such as ALS patients. The system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. It detects both vertical and horizontal gaze positions through simple image analysis and does not require special image processing units or sensors. It also compensates for measurement errors caused by head movements; in other words, it can detect eye-gaze with a high degree of accuracy. In Chapter 5, the authors present their eye-gaze input system and its new method for eye-gaze detection.

We perceive the world by continually making saccadic eye movements. The primary role of saccadic eye movements is to bring visual signals onto the central part of the retina (the fovea), where visual processing is superior and visual capacities are best. Visual signals in the foveal region are therefore the most likely to be perceived consciously. Thus, on the one hand, visual signals at the fovea guide saccades; on the other hand, they are used to create our conscious perception of the visual environment. Are the visual signals that guide the saccade the same as the ones that produce our perception? Current research shows that saccade and perceptual signals are closely related and favors the idea that the same visual signal guides saccades and gives rise to perception. Thus, we see where we look.
However, on the basis of visual stimuli that highlight the discrepancy between visual (bottom-up) and perceptual (top-down) information, the authors can to some extent isolate the signals that control saccade behavior from the ones that give (or do not give) rise to perceptual awareness. The findings of these studies show a distinction between signals for saccade guidance and signals for perception. In these cases, there is a discrepancy between where we look and what we see. In Chapter 6 the authors briefly give an overview of the bottom-up and top-down signals that control saccade guidance and perception and discuss whether or not they are the same.

In Chapter 7, the authors introduce research on the analysis of reading strategy in software review by characterizing a developer's eye movements. Software review is a technique for improving the quality of software documents (such as source code or requirements specifications) and detecting defects by reading the documents. In software review, differences between individuals are more dominant than review techniques and other factors. Hence, understanding the sources of individual differences between high-performance and low-performance reviewers is necessary in order to develop practical support and training methods. This research reveals the factors affecting review performance through an analysis of the reading procedure in software review. Measuring eye movements on each line and in each document allows a correlation analysis between reading procedure and review performance. In this research, eye movements are classified into two types: eye movements between lines and eye movements between documents. Two experiments analyzed the relationship between the type of eye movements and review performance. In the first experiment, eye movements between lines of source code were recorded. A particular pattern of eye movements, called a scan, was identified in the subjects' eye movements. Quantitative analysis showed that reviewers who did not spend enough time on the scan took more time on average to find defects. These results show how the line-wise reading procedure affects review performance and suggest that more concrete direction on how to read improves review performance.
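A "scan" of the kind described above, an initial top-to-bottom sweep over the source lines, can be approximated from a fixation log once each fixation has been mapped to a line number. The sketch below is our own illustration of such a detection rule, not the algorithm used in the chapter; the function name and the rule itself are assumptions.

```python
def scan_fraction(fixated_lines, total_lines):
    """Fraction of distinct lines visited during the initial
    top-to-bottom sweep of the document (hypothetical metric)."""
    highest = 0
    visited_in_order = set()
    for line in fixated_lines:
        if line >= highest:          # still moving downward (or re-reading)
            highest = line
            visited_in_order.add(line)
        else:
            break                    # first upward jump ends the sweep
    return len(visited_in_order) / total_lines

# A reviewer who sweeps lines 1..10 before revisiting line 3:
gaze = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 3, 7]
print(scan_fraction(gaze, total_lines=10))   # 1.0: full initial scan
```

Comparing such a per-reviewer scan measure against defect-detection time is the kind of correlation analysis the experiment describes.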
In the second experiment, eye movements between multiple documents (software requirements specification, detailed design document, source code, etc.) were recorded. The results showed that reviewers who concentrated their eye movements on high-level documents (the software requirements specification and the detailed design document) found more defects in the review target document, and found them more efficiently. In code review especially, reviewers who balanced their reading time between the software requirements specification and the detailed design document found more defects than reviewers who concentrated on the detailed design document alone. These results are good evidence for encouraging developers to read high-level documents during review.

In the early-to-mid 1960s, Roffwarg and collaborators reported that the highly activated stage of rapid eye-movement sleep (REMS) in humans is, proportionately and absolutely, greater during sleep in the late-fetal/postnatal period than at any point later in life [141]. This finding, surprising at the time, accords with the pattern in almost all other mammals. It led the authors to hypothesize that the initial plentifulness of this unusual state within total sleep (which, in turn, claims a predominant share of all existence in early life) functions to supplement a critical operation of sensory stimulation in the wake state, namely, to support and enhance normal brain development [141]. A body of evidence has grown that reinforces a developmental function of REMS, and no contravening evidence has emerged to date. Not all of the biological consequences of the processes enacted in the brain during the two major stages of sleep have been identified, though a number of findings in the last decade offer promise of unraveling the uncertainties about the functions of these states. Chapter 8 is an effort to update and interpret the contributions of the REMS state during brain maturation (the "REMS ontogenetic hypothesis"), as well as some contributions of non-REMS. The authors also explore directions for future investigations into the mechanisms operative during REMS that facilitate normative development in the central nervous system.

As explained in Chapter 9, emotion recognition, validation, and appraisal are tasks of everyday life that are crucial for social interaction. As a prerequisite for emotion processing, whether as sender or receiver, one has to use a perceptual apparatus that includes the eyes and head (eye movements, gaze shifts). To send or hide emotional cues, one has to direct the face toward the other person so that the facial activity can (or cannot) be recognized; this is usually done by turning the head and, if required, by changing body position. To receive stimuli from the environment, a distinct orienting movement is needed if the oculomotor range of about +/- 55° is exceeded; three-dimensional head movements are involved in this as well [1] [2]. However, the subtle coupling between eye and head movements is still debated. The oculocentric view suggested by Bizzi [3] is reexamined in favor of a gaze feedback hypothesis, which proposes that eye and head positions are monitored by corollary processes calculating an internal representation of the required gaze position [4]. Given the lack of knowledge concerning these perceptual interrelations, several experiments are planned to investigate eye-head coordination, especially in emotion recognition. Additionally, data are reported that point to the relevance of head movement as purposeful behavioral expression. In this respect the behavioral analysis [5] is expanded by using psychophysiological indices [6] [7] to validate the emotional as well as the communicative function of head movement patterns.
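The division of labor between eye and head follows from simple arithmetic: gaze direction is the sum of the eye-in-head and head-in-space angles, so once a target lies beyond the oculomotor range (about +/- 55 degrees, per the chapter) the head must supply the remainder. A toy illustration; the function and the example numbers are ours, not the authors'.

```python
OCULOMOTOR_RANGE = 55.0  # degrees, approximate limit cited in the chapter

def min_head_rotation(target_deg):
    """Smallest head rotation (deg) needed to fixate a target at the
    given horizontal eccentricity, assuming the eye covers the rest."""
    overshoot = abs(target_deg) - OCULOMOTOR_RANGE
    if overshoot <= 0:
        return 0.0                       # eye alone can reach the target
    return overshoot if target_deg > 0 else -overshoot

print(min_head_rotation(40))    # 0.0  (within oculomotor range)
print(min_head_rotation(80))    # 25.0 (head supplies the remainder)
print(min_head_rotation(-90))   # -35.0
```

In practice the head usually starts moving well before this limit is reached, which is one reason the eye-head coupling discussed above is debated.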


In: Eye Movement: Theory, Interpretation, and Disorders
Editor: Dominic P. Anderson, pp. 1-52
ISBN: 978-1-61728-110-5
© 2011 Nova Science Publishers, Inc.

Chapter 1

Eye Movements in Non-Visual Cognition

Dragana Micic* and Howard Ehrlichman
Department of Psychology, Queens College and The Graduate Center of the City University of New York, USA

Abstract


Saccadic eye movements direct the eyes toward new points of interest in the visual environment. Although predominantly externally triggered and performed in the service of vision, saccades also occur in cognitive activities that do not seem to require visual processing. In many examples of such “stimulus independent” thinking, high rates of saccadic activity have been observed. At the same time, there are also examples of stimulus independent thinking in which the eyes remain static. In either case, there is compelling evidence that these patterns of saccadic activity are more linked to task-related processing of internal information than to visual processing. We refer to saccades that accompany processing of internal information as non-visual eye movements (NVEMs) and variations in saccadic activity during different cognitive tasks as non-visual gaze patterns (NVGPs). Although a regularity of human behavior, patterns of stimulus-independent saccadic eye movements have been studied only intermittently in recent years. In this chapter, we review the history of research on such eye movements and describe evidence linking them to aspects of memory function. Historically, there have been two distinct themes in this research area. One theme emphasizes the interaction between endogenous cognitive activity and visual processing. For example, in the context of social interaction, eye movements have been studied for their role in conversation and gaze aversion. In this view, saccadic activity is driven by the interplay between the need to attend to internal thought processes and the need to minimize the potentially distracting effects of the visual presence of another person. A second theme has been that spontaneous saccadic activity directly reflects the nature of ongoing cognitive activity. There have been three different variants of this theme. 
The quasi-visual approach examined saccadic activity as it pertains to visual aspects of cognition, either in terms of visual perception or visual imagery. The activation approach considered saccadic activity a manifestation of either general motoric activation or asymmetrical hemispheric activation. The cognitive processing approach studied ocular activation as a reflection of the specific processing requirements of cognitive tasks. Empirical and anatomical evidence converge to support the idea that ocular activity occurring when no visual processing is required by a task may be linked to memory functions. The proposed linkage between eye movements and memory has both theoretical and practical implications, ranging from the evolution of higher cognitive functions to the use of ocular activity as an indicator of cognitive processes present in various situations such as driving while conversing and altered states of consciousness.

* E-mail address: [email protected]. (Corresponding author)
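The NVGPs defined in the abstract are characterized by saccade frequency rather than saccade destination. A common way to count saccades in a raw gaze trace is a velocity threshold; the sketch below is a generic velocity-threshold counter with an illustrative 30 deg/s cutoff, not the authors' measurement procedure.

```python
import numpy as np

def saccade_rate(x_deg, y_deg, fs, vel_threshold=30.0):
    """Saccades per second, counting runs of samples whose 2-D eye
    velocity exceeds vel_threshold (deg/s). Threshold is illustrative."""
    vx = np.gradient(x_deg, 1.0 / fs)
    vy = np.gradient(y_deg, 1.0 / fs)
    speed = np.hypot(vx, vy)
    fast = speed > vel_threshold
    # Count rising edges: each contiguous fast run is one saccade.
    onsets = np.flatnonzero(fast[1:] & ~fast[:-1]) + 1
    n = len(onsets) + (1 if fast[0] else 0)
    return n * fs / len(x_deg)

# Synthetic trace: fixation, one 10-degree saccade, fixation again.
fs = 250
x = np.concatenate([np.zeros(250), np.linspace(0, 10, 10), np.full(240, 10.0)])
y = np.zeros_like(x)
print(saccade_rate(x, y, fs))   # 0.5: one saccade in two seconds
```

Comparing such a rate across visual and non-visual tasks is the kind of frequency-based contrast the chapter's NVGP construct rests on.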


Introduction

In accord with their role as gatherers of information from the visual world, saccades are predominantly "exogenous" in nature (i.e., driven by visual stimuli), determined by the visual context and modulated by perceptual goals (Montagnini & Chelazzi, 2005). Whether involuntary (i.e., stimulus driven) or voluntary (i.e., driven by a perceptual goal), they swiftly channel the gaze toward relevant sources of information and in this way enable constant input of visual information (Ross & Ma-Wyatt, 2003). Nevertheless, although the majority of saccades function in the service of vision, both empirical and anecdotal evidence recognize the existence of saccadic eye movements that are not related to external stimuli and are, in essence, endogenous in nature (i.e., related to internal cognitive events). These spontaneous, visual-stimulus-independent eye movements are often psychologically silent in the sense that they are not consciously monitored, yet they are persistent companions of the non-visual cognitive processes involved in thinking and memory. This thought-related ocular activity does not occur only in experimental or clinical settings but is an easily observed regularity of human gazing behavior.

The systematic study of saccades in cognitive processing was pioneered by Yarbus (1967), who studied their role in visual perception and proposed that saccades enable selection of relevant information from visual scenes. However, before Yarbus demonstrated the role of saccadic eye movements in visual processing, researchers such as Ladd, Moore, and Totten observed that saccadic eye movements do not always pertain to visual processing (Antrobus, 1973).
Adding to initial evidence of the existence of non-visual ocular motility, Day (1964) reported that saccadic eye movements tend to occur immediately after a person is asked questions requiring some reflection, but not after simple factual questions, and that people have a tendency toward making either leftward or rightward eye movements while answering reflective questions. Day named ocular movement along the horizontal axis following reflective questions the “lateral eye movement” (LEM) phenomenon, and suggested that a systematic relationship exists between attentional processes and the lateral direction of horizontal eye movements.

Following the initial reports of eye movements that did not seem to be involved in the acquisition of visual information from the environment, scientific interest in non-visual ocular motility grew around two main ideas: one, that the phenomenon reflects an interaction of endogenous cognitive activity and visual processing, and the other, that the phenomenon directly reflects the nature of ongoing cognitive activity. Throughout decades of empirical endeavor in delineating this intriguing phenomenon, these two research themes each branched out into several fields of inquiry and for the most part developed in parallel, “scanning” the phenomenon from different perspectives. The interactive approach examined eye movements in the social context, focusing on their role in conversation and gaze aversion. Within the cognitive theme, the quasi-visual approach related the phenomenon to visual perception and imagery, the activation approach considered it a manifestation of either general motoric or asymmetrical hemispheric activation, and the cognitive approach related it to the specific processing requirements of cognitive tasks. Although conceptually different, all three cognitive approaches considered ways in which the control of eye movements can be modulated by cognition, suggesting that ocular motility may be affected by the neural activation responsible for ongoing cognitive processing.

The early 1970s saw the emergence of functional cerebral lateralization as a major new research area, and a number of researchers proposed that the mechanisms responsible for LEMs could be linked to human brain asymmetries (Bakan, 1970; Galin & Ornstein, 1974; Kinsbourne, 1972). These investigators proposed that the direction of gaze shifts reflected increased activation of the contralateral hemisphere. We will describe this research in detail later, but at this point we want to note that the LEM researchers were concerned only with the initial gaze shift that occurs as people begin thinking about a question, that is, with a single, discrete saccadic eye movement. However, other researchers noted that “non-visual” ocular motility is not limited to a single, “reflective” eye movement; for example, people make multiple eye movements when responding to cognitive tasks, such as answering auditorily presented questions, that do not seem to involve visual perception (e.g., Antrobus, 1973; Weiner & Ehrlichman, 1976). This observation has been consistently confirmed in subsequent research, thus defining ocular motility related to cognitive processes as a phenomenon based on variations in the frequency of saccades that occur spontaneously while people are engaged in non-visual cognitive tasks.
Existing evidence indicates that the frequency with which saccadic eye movements occur in non-visual cognitive tasks is closely connected to the processing demands of those tasks. While some tasks produce high rates of saccadic activity, other tasks produce low rates of saccadic eye movements. For example, the eyes are almost immobile while a person is occupied by listening to a piece of music, but will tend to move rapidly when that person attempts to recall information related to hearing that musical piece (e.g., the context), or the piece itself. However, it is important to note that during cognitive tasks the eyes do not move at a steady rate. Ocular motility in such tasks involves a wide range of combinations of motionless and moving gaze: single saccades, groups of two or three eye movements, and bursts of saccadic eye movements, all of which appear randomly and are separated by periods of various lengths in which gaze is stationary. As two mutually exclusive states of ocular activity, eye movements and gaze fixation combine to create patterns of ocular activity that coordinate with the requirements of tasks. We refer to these visual-stimulus-independent changes in ocular dynamics as non-visual gaze patterns (NVGPs). Although ocular motility can be assessed in terms of latency, direction, and frequency of eye movements, NVGPs have been described only in terms of frequency. The frequency of eye movements has been determined by counting either the number of deflections on an EOG recording that meet inclusion criteria, or the number of eye movements visible in a video record. The rate of eye movements has been measured over various time periods, ranging from 20 seconds to three minutes. For the purpose of this review, the frequency of eye movements from various studies will be represented as the “eye movement rate” (EMR), defined as the number of eye movements per second.
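Because the reviewed studies counted eye movements over observation windows of very different lengths, comparing them requires normalizing each count to a per-second rate. The following minimal sketch illustrates that conversion; the function name and interface are our own illustration, not taken from any study cited here:

```python
def eye_movement_rate(movement_count: int, interval_seconds: float) -> float:
    """Convert a raw eye-movement count observed over a recording
    interval into an eye movement rate (EMR) in movements per second."""
    if interval_seconds <= 0:
        raise ValueError("interval_seconds must be positive")
    return movement_count / interval_seconds

# 24 movements in a 20-second window and 216 movements in a
# 3-minute window both correspond to an EMR of 1.2 per second.
print(eye_movement_rate(24, 20))
print(eye_movement_rate(216, 180))
```

Expressed this way, counts collected over a 20-second epoch and a three-minute epoch become directly comparable.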
The extant research evidence strongly supports the idea that some saccadic activity is contingent on the nature of ongoing cognitive activity and is driven mainly by cognitive (endogenous) rather than perceptual (exogenous) processes. In this paper we will describe the major theoretical accounts of saccadic activity in non-visual cognitive tasks and provide a detailed description of studies that have contributed important insights into various aspects of the phenomenon. We will also discuss possible neural mechanisms of ocular involvement in non-visual cognition by integrating the findings of eye movement research with those of other relevant areas of investigation. Throughout this paper we shall refer to saccades that occur when people are engaged in cognitive activity that is not primarily visual, and that do not appear to be activated in the service of visual perception, as “non-visual eye movements” (NVEMs). However, we wish to emphasize that this label should not be taken to imply either that no visual processing occurs during such eye movements or that visual processing is unrelated to their presence.


Saccadic Eye Movements

Saccades are rapid, conjugate eye movements that swiftly shift the gaze among points distributed at the same viewing distance in the visual field (Cumming, 1978). Their high velocity, ballistic nature, and absence of continuous control during movement make saccades efficient gatherers of information for the fast-processing visual system. Saccades are the only type of eye movement that brings new information onto the fovea; all other eye movements (smooth pursuit, vergence, and vestibular) serve to stabilize the image already on the fovea. They are controlled by cortical oculomotor centers (e.g., the frontal and parietal eye fields, FEF and PEF) and generated in the brain stem (Munoz, 2002; Pierrot-Deseilligny, Ploner, Muri, Gaymard, & Rivaud-Pechoux, 2002). Their physiological profile (angular displacement, velocity, and acceleration) is a product of the mechanisms used by pretectal structures to control mechanical pointing of the eyes (e.g., inhibition), and of the morphology of the oculomotor muscles that move the eyes (Cumming, 1978).

Saccadic eye movements are accomplished by a set of six extraocular muscles that work in pairs to control the angular direction of the eyeball. Each extrinsic ocular muscle has a tonal part that acts like a low-frequency mechanical filter and a twitch part that can respond at extremely high frequencies. These muscles are not prone to fatigue; acceleration is directly proportional to the net muscular force of the movement (Lion & Brockhurst, 1951). The pretectum creates neural signals for saccadic movement based on retinal error signals (Cumming, 1978) and does not create an efferent copy of its commands for the rest of the system (Fulton, 2000). Saccades are among the fastest movements performed by human muscles, with a peak velocity that can be as high as 600°/s (Young & Sheena, 1975).
Saccades can be initiated in as little as 100 ms in response to a visual stimulus (i.e., express saccades) (Young & Sheena, 1975) and, depending on their magnitude, can last between 30 and 120 ms (Cumming, 1978). New saccades are triggered in response to peripheral (i.e., extrafoveal) stimuli (Rayner, 1998). In order to be registered, a visual stimulus must last for at least 12.5 ms. The percept of the stimulus persists for 250 ms, which is equivalent to the average duration of a fixation during reading (Yang & McConkie, 2001). Because each eye movement brings new foveal stimulation, this fixation time is necessary to ensure that a stimulus is completely processed and that successive retinal images do not erase the previous ones (Antrobus, 1973).

New stimuli are brought onto the fovea by either major or minor saccades. The major saccades are associated with the low-frequency tonal response of the oculomotor muscles. They include large saccades (greater than 6.2 degrees) and small saccades (1.2-6.2 degrees). The minor saccades are associated with the twitch muscles and include minisaccades (amplitude of 0.033-1.2 degrees; duration 10-20 ms) and microsaccades (approximately 0.01 degrees in amplitude; duration less than 10 ms). The minor saccades are essential for the perception and interpretation of visual information: microsaccades contribute to scanning of an object or character group, while minisaccades shift the line of sight to other features of the same object or to a new character group in reading. Large saccades typically occur every 3-20 seconds, often along with changes in head and body orientation (Cumming, 1978; Fulton, 2000; Zuber, Semmlow, & Stark, 1968). While the minor saccades can be registered only with specialized equipment, major saccades are readily observable and easily studied whether the eyes are open, covered, closed, or in the dark. The eye movements related to non-visual cognition are major saccades.

The reasons for, and neural mechanisms behind, the appearance of major saccades when visual processing is not needed are under ongoing empirical scrutiny. As mentioned earlier, some explanations of NVEMs emphasize interactions between endogenous cognitive activity and visual processing, while others emphasize direct effects of ongoing cognitive processes on ocular motility. We begin our review by discussing the interactive approaches. We then discuss the “direct” explanations. By analyzing the descriptive and explanatory potential of each explanation, we attempt to illustrate the development of the scientific inquiry into non-visual ocular motility. Lastly, we present a neuroanatomically based explanation and consider theoretical implications and practical applications of spontaneous saccadic activation in non-visual cognition.
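The amplitude bands above amount to a simple taxonomy, which can be summarized as a classifier. This is only an illustrative sketch of the categories described by Cumming (1978) and Fulton (2000); the function name and the exact handling of boundary values are our own assumptions:

```python
def classify_saccade(amplitude_deg: float) -> str:
    """Classify a saccade by angular amplitude in degrees, following the
    major/minor taxonomy described in the text:
      minor: microsaccades (~0.01 deg) and minisaccades (0.033-1.2 deg)
      major: small saccades (1.2-6.2 deg) and large saccades (>6.2 deg)
    """
    if amplitude_deg <= 0:
        raise ValueError("amplitude must be positive")
    if amplitude_deg < 0.033:
        return "microsaccade"
    if amplitude_deg < 1.2:
        return "minisaccade"
    if amplitude_deg <= 6.2:
        return "small saccade"
    return "large saccade"

print(classify_saccade(0.01))   # microsaccade
print(classify_saccade(0.5))    # minisaccade
print(classify_saccade(3.0))    # small saccade
print(classify_saccade(10.0))   # large saccade
```

Only the last two categories, the major saccades, are relevant to the non-visual eye movements reviewed here, since the minor saccades cannot be observed without specialized equipment.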

Eye Movements and Social Dynamics: From Need for Affiliation to Gaze Aversion

The most prominent “interactive” approach has involved eye movements that occur in a social context, particularly the context of dyadic conversation, a situation in which both internal cognitive and external visual processing are clearly present. In social situations, people's eye movements may reflect various interpersonal processes as much as, or more than, they reflect the need to process visual information. For example, while mutual eye contact certainly allows people to perceive their interlocutor's face, it is also a signal of the looker's attitude and intentions vis-à-vis the other. Empirical investigations have revealed that eye contact during dyadic interaction can take place anywhere from zero to 100% of the available time. The duration of eye contact is thought to reflect points in conversation (e.g., listening or speaking), the nature of the topic (e.g., personal), individual differences (e.g., gender differences), and the type of relationship between the two individuals (e.g., cooperation) (Argyle & Dean, 1965). For example, there is more eye contact in discussions of less personal topics than in discussions of more personal topics (Exline, Jones, & Maciorowski, 1977). Women tend to engage in more eye contact in various situations than men (Bailenson, Blascovich, Beall, & Loomis, 2001), and more eye contact is displayed in collaborative than in competitive relationships (Bavelas, Coates, & Johnson, 2002).


As a form of non-verbal communication, eye movements and eye contact may be important for various aspects of social interaction. It has been suggested that in a social setting, eye contact may function in three ways: first, by maintaining eye contact a speaker can receive feedback about the interlocutor's attitude, attention, and emotional state (Exline, 1963); second, eye contact can ensure successful synchronization of speech (Kendon, 1972); and third, eye contact is an important way of establishing the desired degree of intimacy during interaction (Argyle, Lalljee, & Cook, 1968). Some of the early explanations of social ocular behavior were based on the notion that eye movements, and eye contact in particular, may be significant reinforcers of social interaction.


Social Reinforcement and Eye Contact

Social interaction involves verbal and non-verbal behavior. Reinforcers such as “Good” and “Right” have been found to increase the occurrence of verbal behaviors (Exline & Messick, 1965). Whether such reinforcers can also affect non-verbal behaviors was examined by Exline and Messick, who suggested that eye contact can be a potent indicator of the nature of one's involvement with other individuals in a social situation. In a social triad, eye contact can signal preference for one person (Exline, Gray, & Schuette, 1965), while avoidance of eye contact often indicates consternation or embarrassment (Gray, 1965). The proposition that eye contact may be one of the fundamental reinforcers of human social behavior rests on its presence early in ontogeny. Attention to direct eye gaze is present from birth (Farroni, Csibra, Simion, & Johnson, 2002), while smiling in response to masks representing the top of the head, including the eyes, occurs in the second month of life (Spitz, 1946). It has been suggested that, like other species, human infants imprint on their mothers but, being immobile, can only follow the mother's face with their eyes (Gray, 1958). The process of imprinting and the innate reward value of eye contact are at the root of the explanation of the complex role that eye contact plays in social interaction: the emission of, search for, and reception of social feedback.

Research on ocular motility conducted within the framework of social reinforcement mainly emphasized the expressional and communicative aspects of visual behavior in relation to personal traits such as dependency, a tendency of the individual to rely on others for help, approval, and attention. In response to dependency arousal, some people stare more while others make frequent eye movements, but in general all spend a long time in eye contact with another person.
Because eye contact seems to be innately rewarding (Argyle & Dean, 1965), the value of the social reinforcer for looking should be higher for individuals who tend to depend on the approval of others and lower for dominant individuals. Consequently, according to the reward model, the pattern of looking behavior may be indicative of personality type, with dependent individuals looking along the line of regard more than dominant individuals. According to the feedback model (information seeking model) postulated by Bitterman (1960), in which social reinforcement is conceptualized as a source of information, the amount of looking behavior in the absence of verbal information indicates the extent to which a person will seek information from another. In other words, an individual will perform a visual search for non-verbal feedback on their social conduct. However, while Exline and Messick (1967) did not find evidence that reinforcement of a glance associated with speech in an ongoing social interaction acts to the same effect as the interaction of dependency needs and social reinforcement of verbal behavior, there is evidence that visual behavior may be determined by different expectancies for approval and by the relative importance assigned to receiving reinforcement from individuals of different status (Efran, 1968). The pattern in which the interaction of personality and social reinforcement affects looking behavior during speech suggests that looking behavior, of which people are often unaware, cannot be fully understood from a perspective of social reward.

Studies of eye contact as a signal of intimacy between people showed that social ocular behavior may reflect a complex interplay of approach drives (e.g., imprinting on the mother's face during infancy) and avoidance drives (e.g., anxiety about the reactions of others) relevant to affiliative motivation. The approach forces include the need for feedback and the need for affiliation. The avoidance forces are fear of being seen, of revealing an inner state, and of rejection. The equilibrium theory of affiliative behavior (Argyle & Dean, 1965) examined the functional significance of visual interaction for the expressive aspect of social interaction and proposed that different aspects of intimacy, such as eye contact, smiling, and physical proximity, are always in equilibrium. The intimacy equilibrium model was later modified to suggest that eye contact reflects agreement between the intimacy level and the circumstance created by the combination of intimacy dimensions such as the role, distance, and topic of conversation (Exline, Jones, & Maciorowski, 1977).


Face-to-Face Dialogues and Speech Patterns: Looking while Listening and Speaking

One notable observation of Exline et al. (1977) was that in a face-to-face dialogue significantly more eye contact (i.e., frequency and duration of fixations) occurred while listening than while speaking. This asymmetry of gaze patterns in face-to-face dialogues was first described by Kendon (1967), while Exline et al. found the trend to be insensitive to congruency of intimacy dimensions. Based on video recordings of seven pairs of students engaged in small talk, and analyses of the beginnings and endings of all sentences lasting at least five seconds, Kendon showed a close association between speech patterns and patterns of looking: individuals tended to look at the other person more when listening than when speaking, and the speaker frequently looked at the listener with glances that were shorter than those produced while listening. The proportion of looking-while-listening time was 2.5 to 3 times greater than the proportion of looking-while-speaking time (e.g., Kendon, 1964; Exline, Gray, & Schuette, 1965). This asymmetry of gazing behavior in dialogue has been repeatedly confirmed (Exline et al., 1975; Kendon, 1967), even when the interlocutor is not physically present (Ehrlichman, 1981), and found to generalize across ethnic groups (Exline et al., 1977).

Speech Synchronization: Eye Movements as Turn-Taking Cues

The Kendon (1967) study marked the conception of a new approach to empirical investigations of gazing behavior in face-to-face dialogues. Although the field of gaze research was still dominated by studies of the relationship between gaze and emotions, interpersonal attitudes, or personality differences, a number of studies modeled after Kendon's approach applied microanalysis to examine the relationship between gaze and the audible and visible behaviors involved in dialogue. In contrast to studies in which gaze was related to variables external to the dialogue itself (e.g., Argyle & Cook, 1976), the new studies focused on the relationship between patterns of gaze and speech and were interested in the function of gaze in the dialogue. Two patterns of gaze, long gaze at the end of a phrase and short gaze occurring within long utterances, were proposed to contribute to different aspects of a dialogue. Kendon (1967) proposed that the prolonged gaze at the end of sentences enables smooth exchange of turns in dyadic conversation, acting as a cue of willingness to switch turns and thus synchronizing the ongoing discourse. Although there is evidence that eye movements during speech can serve as a “turn-holding cue” and the return of fixation as a “turn-yielding cue” (Doherty-Sneddon, Bruce, Bonner, Longbotham, & Doyle, 2002), Kendon's suggestion that eye contact serves to synchronize conversation has not been confirmed in subsequent research (e.g., Beattie, 1978a, 1978b; Beattie & Barnard, 1979). If the role of eye contact is to allow for smooth conversation, the absence of such cues should have detrimental effects on the fluency of communication. However, comparisons of face-to-face and audio-only conversations found no difference between the two media on most parameters of speech fluency. Contrary to the speech synchrony prediction, more interrupted utterances were found in the face-to-face than in the audio-only situation (e.g., Beattie & Barnard, 1979). This finding was replicated and extended by Williams (1978), who compared the quality of speech (i.e., interruptions, pauses, and length of utterances) under face-to-face, audio, and video experimental conditions and found significantly more interruptions and pauses in the face-to-face condition than in the video and audio conditions.
The audio and video conditions did not differ on any measure of speech quality, and mean utterance length was not affected by condition. Contrary to the hypothesis that eye contact serves to synchronize conversation, most interruptions of speech occurred when the experimenter was physically present and eye contact could be truly established. The fact that the video condition was more similar to the audio than to the face-to-face condition suggested that a factor more complex than visual cues, for example social presence (e.g., Beattie, 1978b), may be at work in determining speech patterns. It was suggested that visual communication when the participants in a conversation are physically present enables the speaker and the listener to see and be seen by each other (Rutter, Stephenson, Ayling, & White, 1978). The link between speech interactions and the physical presence of another person inspired a new line of studies and hypotheses about possible causes of speech disturbances.

Speech Regulation: Eye Movements as Moderators of Effective Communication

The second pattern of gaze in conversation, short gaze performed by the speaker during long utterances, was considered to have a regulatory function in eliciting “accompaniment signals” from the listener. According to Kendon (1967), whether general (e.g., “yeah”) or specific (e.g., a frown at a sad point in the speaker's narrative), the “accompaniment signals” convey attentiveness and the level of understanding on the part of the listener. Frequent glances directed toward the listener, searching for signals of understanding such as head nods and smiles (Brunner, 1979; Krauss, Fussell, & Chen, 1995), occur particularly at points in the stream of speech where information about comprehension is considered helpful. As such, eye contact by the speaker does not signal willingness to switch roles but serves to assess the level of the listener's attentiveness or the need for some response, without the intent to end the speaking turn (Bavelas et al., 2002). For example, gaze can be understood as a type of non-verbal cue that, alongside facial expressions, gestures, and voice pitch, supports the communicative and expressive aspects of social interaction (Argyle & Kendon, 1967). The importance of moment-by-moment collaboration based on non-verbal signals has been repeatedly demonstrated (e.g., Bavelas et al., 2002). Because non-verbal signals carry information relevant to the effectiveness of communication, processing them imposes a cognitive load. Therefore, in a face-to-face dialogue, where the physical presence of the individuals engaged in conversation produces an abundance of visible “back-channel” responses, speakers must concurrently perform two cognitively demanding tasks: speech planning and monitoring of the listener for non-verbal signals of comprehension, confusion, or agreement (e.g., Argyle & Cook, 1976; Beattie, 1978b). After Butterworth (1978) proposed that these two complex tasks can affect gaze direction, investigations of interpersonal gaze were directed both toward the monitoring functions of eye contact and toward the reasons for, and role of, gaze aversion.


Gaze Aversion

The dynamics of visual interaction in conversation include periods of eye contact and periods in which eye contact is broken. On average, during conversation people gaze at the other person 31 percent of the time (range = 15-62%, SD = 14%) (Argyle, 1967; Bavelas et al., 2002; Kendon, 1967). Breaking of eye contact during ongoing discourse occurs with a certain degree of regularity: people tend to look at their interlocutor at the end of sentences and of phrases within them, while at the beginning of long utterances they tend to look away (e.g., Kendon, 1967; Beattie, 1981). Multiple lines of evidence indicate that shifts of gaze away from the interlocutor occur mostly during thinking and speech planning (e.g., Beattie, 1981; Kendon, 1967), predominantly at the start of a speaking turn or at phrase boundaries (Goldman-Eisler, 1967), more while speaking than while listening (e.g., Argyle & Ingham, 1972), and more in conversations about difficult topics than about easy ones (e.g., Exline & Winters, 1965). In contrast, eye contact with the listener is more likely to occur at the end than at the beginning of sentences (e.g., Beattie, 1981). The co-occurrence of cognitive and ocular activity during periods in which gaze is directed away from the listener suggested to researchers of interpersonal gaze that ocular motility, as a form of non-verbal behavior, may serve more than just communicative and expressive functions. Accordingly, eye movements began to be investigated in terms of their functional significance in cognitive processes such as speech production and recall of information.


Cognitive Overload - Visual Interference Hypothesis

The first explanation of the link between speech production and changes in visual interaction was offered by Argyle and Dean (1965), who suggested that individuals look away before they start to speak and when they have to think about what they are saying because the input provided via eye contact is distracting. Butterworth (1978) elaborated on this explanation and argued that the reduction of visual input is needed only when the cognitive demands of speech planning are great. When they are moderate, speakers direct their gaze toward the listener for feedback useful to ongoing speech planning. Feedback of this type would be most needed after the most important points of the message have been conveyed and the demands of speech planning are low. Studies of the point in speech at which changes in gaze direction occur (Beattie, 1978; Beattie, 1981; Cegala, Alexander, & Sokuvitz, 1979) produced empirical support for this conjecture. For example, there is evidence that speakers tend to avert their gaze when they are experiencing difficulties formulating speech, such as during filled pauses and when their speech is hesitant (Cegala et al., 1979). Based on these findings, gaze aversion was related to effortful cognitive processing in which a limited-resource cognitive processor suffers from information overload.

At the center of the cognitive overload - visual interference hypothesis is the proposition that when two tasks draw on limited visual or attentional resources, ocular motility is reduced in order to eliminate competition between incoming visual information and the processing of difficult cognitive tasks. An additional assertion is that gaze is averted at critical points in the task to avoid processing of visual cues that may be unnecessary, distracting, or arousing.
Gaze aversion is conceptualized as an automatic process activated during cognitively demanding tasks, such as difficult verbal reasoning and difficult arithmetic tasks (Doherty-Sneddon et al., 2002), for the purpose of facilitating cognitive processing by inhibiting perceptual visual processing (Glenberg, 1997; Glenberg, Schroeder, & Robertson, 1998). Glenberg (1997) proposed that the cognitive system continuously processes environmental events, and that cognitive activities (e.g., language processing) call for disengagement from the continuous influx of environmental information in order to be processed satisfactorily. This is a functional account of ocular behavior according to which disengagement from the environment ensures enhanced efficiency of cognitive processing. However, the effect of disengagement was considered dependent on the processing demands of the task: it was proposed that while perceptually driven tasks (e.g., object naming) might be facilitated by environmental stimuli (Griffin & Oppenheimer, 2006), cognitively difficult (Butterworth, 1978), conceptually driven tasks such as mental imagery or retrieving information from long-term memory (LTM) would benefit from environmental disengagement (Glenberg et al., 1998). That the need for gaze aversion depends on the difficulty of conceptually driven tasks was supported by findings of significantly higher rates of looking away during recall of autobiographical information from the distant past (50%) than from the recent past and the present time (40% and 37%, respectively). During cognitively difficult tasks, adults (Glenberg et al., 1998) and older (8-year-old) but not younger (5-year-old) children (Doherty-Sneddon et al., 2002) often avert their gaze from their interlocutor. When answering difficult questions (e.g., autobiographical questions from the distant past, general knowledge questions), adults avert their gaze from a live human face, from questions displayed on a computer screen (Glenberg et al., 1998), and from video-mediated faces (Doherty-Sneddon & Phelps, 2005). Findings of improved accuracy of answers reported by Glenberg et al. (1998), Doherty-Sneddon et al. (2002), and Doherty-Sneddon, Riby, Calderwood, and Ainsworth (2009) are consistent with the suggestion that gaze aversion occurs to ensure performance by eliminating visual distractions and thus managing cognitive load.

Similarly, the cognitive overload hypothesis predicts that restriction of gaze aversion should negatively affect the ability to articulate speech. This prediction was supported by Beattie (1981), who found that speech dysfluency tends to be more frequent when the speaker's gaze is constrained than when it is free. He attributed this pattern to cognitive interference created by continuous monitoring of the listener, and to gaze-induced physiological arousal that can disrupt speech planning. The physiological arousal explanation of the continuous-gaze effect was not supported: the most common dysfluency in Beattie's study was filled pauses (“erm”, “uhm”), the only dysfluency that does not seem to increase in response to anxiety (Mahl, 1956). Beattie, however, did not consider the possibility that simply restricting the natural pattern of gaze during conversation could itself be disruptive. In face-to-face dialogues, difficulties in articulating utterances may result from two factors: the restriction of the naturally occurring conversational gaze flux, or the physical presence of an information-laden stimulus such as the human face. The human face is believed to hold a special perceptual status and is intrinsically difficult to ignore (Kemp, McManus, & Pigott, 1990), and, as such, is considered the ultimate source of interference.
Nevertheless, Ehrlichman (1981) found no evidence of cognitive interference (e.g., hesitations) due to continuous gaze directed toward the visual image of a face on a screen. In contrast to the prediction of the model, latency and mean length of pauses were shorter when participants viewed the face than when they viewed a grey oval, and when gaze was fixed than when it was free. Overall, participants performed equally well on verbal (e.g., interpreting meanings of proverbs) and visuo-spatial questions (e.g., describing a visual image) requiring complex (i.e., speech) and simple (i.e., list) answers with fixed and free gaze. None of these findings are consistent with the cognitive overload-visual interference explanation of gaze aversion. Micic, Ehrlichman, and Chen (submitted) examined the effect of gaze fixation per se (the fixation target was a white, 1 x 1 cm square set on a white background) on performance on a number of cognitive tasks requiring various levels of retrieval from long-term memory (LTM). They found no evidence of a detrimental effect of successful gaze fixation on task performance. Ehrlichman (1981) presented an alternative interpretation of the pattern of ocular activity during conversation. He observed an exception to the looking-while-speaking-and-listening pattern: in the presence of a visual image of the interlocutor's face, the rate of eye movements is very low during listening and increases during speaking, but eye movements occur equally often during listening and speaking in the presence of a grey oval. Based on these findings, Ehrlichman proposed that the amount of gaze reflects "relative strengths of two opposing forces": the tendency to look at the other person in order to monitor the efficacy of communication, and the tendency to make eye movements.
He suggested that thinking and speaking are associated with the tendency to make saccadic eye movements and that this tendency is suppressed during listening in order to monitor for “back channel” responses such as facial expression. According to this model, the asymmetrical pattern of ocular dynamics of

Dragana Micic and Howard Ehrlichman

conversation could be explained in terms of arousal, cognitive change (Singer, Greenberg, & Antrobus, 1971), or attention. The tendency to make eye movements may be stronger while thinking and speaking due to higher levels of arousal and cognitive change during speech production than during speech decoding. That tendency can be modified by the attentional shift toward the other person during listening. The attentional shift could occur because of the salience of the facial cues provided by the interlocutor, because attending to one's own internal thoughts may reduce attention to the other person, or because of simultaneous activation of both factors. In any case, the pattern of speech-related gaze would reflect the trade-off between eye movements and looking accomplished through oculomotor suppression. Thinking and speaking are accompanied by numerous eye movements in various directions and not by recurrent sets of gaze aversion (i.e., a single eye movement away from the stimulus) followed by re-fixation of the stimulus. Indeed, the strength of the tendency to make such eye movements is underscored by the finding that people are often unable to completely suppress these eye movements even when instructed to do so. Micic et al. (submitted) found that instructed fixation during cognitive processing did not reduce ocular motility to zero. Multiple eye movements continued to occur in response to questions requiring search from LTM, such that 27% of participants failed to satisfy the criterion for fixation. These eye movements appear to be linked to cognitive activity and could reflect some cognitive aspect of task-related processing (e.g., memory search), the need to ensure adequate processing in the resource-limited processor (cognitive load), or the need to prevent interference from perceptual environmental information (interference hypothesis).
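A fixation-failure rate of the kind just reported can be illustrated with a small sketch. The threshold and per-trial saccade counts below are hypothetical, not data or the actual criterion from Micic et al. (submitted); the sketch only shows how a participant-level failure rate would be computed from such counts.

```python
# Hypothetical sketch: scoring a fixation criterion across participants.
# Counts and the zero-saccade threshold are illustrative assumptions.

def fails_fixation(saccade_counts, max_saccades=0):
    """A participant fails fixation if any trial exceeds the allowed saccade count."""
    return any(n > max_saccades for n in saccade_counts)

# Illustrative per-trial saccade counts for each participant.
participants = [
    [0, 0, 0],  # maintained fixation
    [2, 0, 1],  # broke fixation
    [0, 0, 0],  # maintained fixation
    [0, 3, 0],  # broke fixation
]

# Proportion of participants who failed to satisfy the criterion.
failure_rate = sum(fails_fixation(p) for p in participants) / len(participants)
print(f"{failure_rate:.0%}")  # 50%
```

With real data the threshold would come from the study's operational definition of fixation; the computation itself is unchanged.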
Although the explanations of ocular dynamics during conversation offered by the cognitive overload-visual interference hypothesis may seem self-evident, they are based on a flawed propositional foundation (Ehrlichman, 1981). The primary proposition of this hypothesis is that ongoing cognitive activity can affect ocular activity. As we will see later, there is a great deal of support for this proposition. The secondary proposition, that ocular activity serves to reduce visual interference imposed by the presence of another person, is, however, not empirically but logically based. Specifically, it is derived as the necessary consequence of the primary proposition without considering that other factors, including the task itself, may be responsible for the behavior. Although reduction of interference may be a plausible role of a single eye movement that shifts the gaze away from the interviewer (i.e., gaze aversion), the cognitive overload-visual interference hypothesis does not offer a substantive explanation of the eye movements that occur following the initial eye movement away from the stimulus. A more plausible account of the spontaneous eye movements of conversation proposes that, contrary to the assumption that people have a tendency to look at each other but avert their gaze to avoid cognitive overload, people have a tendency to move their eyes but suppress it so they could look at each other (Ehrlichman, 1981). The validity of the cognitive overload-visual interference approach to the study of oculomotor patterns in non-visual cognition was seriously challenged by the finding of null effects of complex and simple visual stimuli on performance of visuo-spatial and verbal-linguistic tasks (Ehrlichman, 1981).
To account for such a finding, Ehrlichman (1981) considered the possibility that comparing the effect on performance of a grey oval and the visual image of an experimenter's face on the screen may not have been a fair comparison and that both stimuli may have been equally distracting. Implicit here is the proposition that
the apparent complexity of a visual stimulus (e.g., the amount of detail) may be unrelated to the complexity of visual processing. This is a reasonable proposition, since increasing the energy expenditure and time needed to process more complex visual environments would be evolutionarily disadvantageous. The implication of this proposal is that the fundamental assumption of the cognitive overload-visual interference hypothesis, namely, that eye movements are reduced or gaze is redirected to reduce interference, is erroneous. The visual system always processes the complete visual field. Consequently, it may be equally demanding to process a face, a grey oval, or a small white square on a white background. Although saccadic eye movements will tend to be directed toward the areas in the visual scene that carry the most information (Yarbus, 1967), perceiving a visual scene may always require "pixel-by-pixel" integration of elementary components such as lines, edges, and color. Sequential activation of the striate and extrastriate cortex captured by event-related potentials indicates that the same processing "protocol" is used whether a visual scene is created in bottom-up or top-down fashion (Farah, 1995). Point-to-point processing of visual stimuli strongly argues against the applicability of the cognitive overload-visual interference hypothesis in the visual domain but does not exclude it from the repertoire of mechanisms used to facilitate cognitive processing. This hypothesis seems best suited to explaining the phenomenon of gaze aversion in social interaction, since it has been shown that the physical presence of another person is distracting (Williams, 1978), and that people tend to direct their gaze away from the interlocutor during speech planning (Cegala et al., 1979) and when answering questions (Glenberg, 1997).
However, in contrast to the saccadic eye movements typical of various cognitive tasks, gaze aversion may be more related to external stimuli than to internal cognitive processes. Although the possibility that gaze aversion reflects the interfering effect of visual information per se cannot be entirely discounted, it appears mostly to reflect the arousing effect (e.g., emotional arousal) of the physical presence of another individual.

Imagery Scanning Hypothesis: REM and Visual Imagery in Sleep and Wakefulness

The interest in eye movements in mental processes gained momentum with the early research in the field of sleep and dreaming (Aserinsky, 1967; Aserinsky & Kleitman, 1953). Evidence that rapid eye movements (REMs) accompany dreaming was obtained by comparing physiological indices of dreaming collected with electrooculographic (EOG) recordings of ocular activity and electroencephalographic (EEG) recordings of brain activity during sleep with subjective reports of the presence or absence of dreaming (e.g., Dement & Kleitman, 1957a; 1957b). Subjective reports were collected using the interruption technique, which requires that subjects be awakened either after a period of ocular activity or after a period of ocular quiescence, as registered by the EOG, and asked to report their last images from sleep. Participants tended to report visual imagery when they were awakened after REM periods and no visual imagery after periods of ocular quiescence. The night-long recordings of ocular and brain activation revealed that high-amplitude eye movements of REM sleep tended to co-occur with low-amplitude EEG activity over the frontal and occipital areas
in both interrupted and non-interrupted sleep, leading to the conclusion that this pattern of ocular activity was associated with dreaming (Aserinsky & Kleitman, 1955). The interruption technique was used in many studies of REM sleep (e.g., Dement & Kleitman, 1957a; 1957b), which suggested that the incidence of vivid-dream reports after REM sleep was about 80 percent. In addition to the relationship between REM and dreaming, researchers like Roffwarg, Dement, Muzio, and Fisher (1962) suggested that reliable predictions of the direction of eye movements during REM sleep could be based on dream reports; however, the validity of such findings is questionable due to the absence of blind matching procedures and possible bias from experimenter expectations. Subsequent research indicated a direct and reliable relationship between gaze shifts reported in lucid dreams and the direction of eye movements registered by the EOG (LaBerge, 1990; LaBerge, Nagel, Dement, & Zarcone, 1981). However, this type of relationship had been suggested long before the advent of the EOG: eye movements during sleep were considered by researchers such as Moore (as cited in Antrobus, 1973) and Totten (1935) to represent the looking at or scanning of the visual imagery of dreams. The advent of the EOG, the discovery of REM sleep, and the use of the interruption technique introduced the psychophysiological approach to the study of mental phenomena such as dreaming, hallucinations, and mental imagery. However, the study of REM sleep may not be as relevant to research on the eye movements seen in non-visual cognitive activities during wakefulness as once thought. For example, there is no evidence that the eye movements of REM sleep are of the saccadic type seen in cognitive tasks that do or do not involve visual perception. In general, the eye movements of sleep and wakefulness seem to be qualitatively different and generated from different cortical areas of the oculomotor network.
Ocular motility during REM sleep includes both slow, roving eye movements and fast eye movements superimposed on the slow activity, while eye movements of waking are of the fast, saccadic type (Jacobs, Feldman, & Bender, 1972). In terms of cortical contribution to saccadic generation, during REM sleep, compared to the eyes-closed awake state, activity is higher in the supplementary motor area (SMA) (Ioannides et al., 2004) and lower in the parieto-occipital brain regions, including the precuneus, cuneus, superior parietal lobule, and posterior part of the intraparietal sulcus (IPS) (Hufner et al., 2008). The parietal region is active during saccades with eyes open and when looking straight ahead. However, there is now strong evidence that REM sleep is not the physiological equivalent of dreaming: REM seems to be controlled by the brainstem oscillator, while dreaming seems to be mediated by forebrain mechanisms (Solms, 2000). There is also strong evidence that dreams are not uniquely related to REM sleep; they also occur in NREM sleep, and the two types of dreams are qualitatively different, with the former endowed with vivid imagery and the latter consisting of simple recurring themes (Fosse, Stickgold, & Hobson, 2004). Long before there was physiological evidence of a dissociation between REMs and dreaming, observations of eye movements during sleep were linked to dream reports of vivid imagery, leading researchers of human mental processes to propose that the mental imagery of dreams may be based on the same processes as the visual imagery of wakefulness. What dreams and mental imagery have in common are visual images and rapid eye movements. Therefore, the physiological commonality of wakefulness and sleep was attributed to the quasi-visual properties of dreams and mental imagery; it seemed almost natural to assume that eye movements have to do with the visual aspects of dreaming and imagery.
Accordingly, the first hypothesis of ocular activity in mental processes, the imagery scanning hypothesis, proposed
that eye movements associated with thought reflected mental "scanning" of visual images in both sleep (Aserinsky & Kleitman, 1955) and waking (Hebb, 1968; Totten, 1935). The imagery scanning hypothesis was founded on the theoretical work of Hebb and the empirical evidence available at the time (e.g., Totten, 1935). While Hebb (1968) based his theoretical explanation of the relationship on the parallelism between perceptual and imaginal processes, Totten's (1935) account of the role of eye movements in imagery was based on the findings of her experiments. She used long time-exposure photographs to record changing eye position by registering the beam of light reflected off the cornea while her participants were visualizing simple geometric forms. Totten's finding that eye movements during visual imagery "trace" the outline of the imagined object inspired Lorens and Darrow, Hebb, and Bertashvili (Kowler & Steinman, 1977) to propose saccades as the mechanism of the creation of visual images. The proposition that eye movements were involved in the scanning and creation of visual images was derived from the basic assumption of an analogy between the processes of visual perception and visual imaging. Accordingly, saccades, being the gatherers of information crucial for the formation of the visual percept (Yarbus, 1967), were also thought of as the providers of the necessary elements of visual imagery. The notion of isomorphism between processing of information from the physical environment and information that is not based on any currently present environmental stimulus (i.e., stimulus-independent mentation) has been supported by neuroimaging data.
Studies using positron emission tomography (PET) present evidence of activation in the visual cortex while a person is creating and manipulating a visual image (Kastner, Pinsk, De Weerd, Desimone, & Ungerleider, 1999) and activation of the motor cortex while a person is imagining the actions of a repetitive motor task (Pascual-Leone et al., 1995). However, the prediction of high involvement of eye movements in visual mentation has not been met with sufficient empirical support. Although eye movements during REM sleep may occasionally reflect visual dream content (e.g., Berger & Oswald, 1962), it has been shown that REMs occur periodically through the night (Aserinsky, 1967) and are dissociable from dreaming (Solms, 2000). For waking eye movements, the tenability of the imagery scanning hypothesis has been brought into question by evidence of low rates of eye movements during visual imagery. The interruption technique developed for the study of dreaming, applied to daydreaming, revealed that visual imagery occurred in periods of ocular quiescence, not in periods of ocular motility, which, according to participants' reports, tend to be associated with thinking (Antrobus, Antrobus, & Singer, 1964). The initial findings of high rates of eye movements in thinking and low rates of eye movements in visual imagery were supported in subsequent research (e.g., Klinger, Gregoire, & Barta, 1973). The imagery scanning hypothesis predicts that during mental imagery, more eye movements will accompany moving images because the components of the image are moving. Studies of instructed imagery in which participants were asked to visualize either a stationary (e.g., a mountain on the distant horizon) or a moving image (e.g., a tennis match observed from the net) while maintaining the same vantage point provided support for this prediction (e.g., Amadeo & Shagass, 1963; Antrobus et al., 1964).
Moving imagery was found to be accompanied by a higher eye movement rate (EMR) than static imagery both when eyes were open (mean EMR: static = 0.06, moving = 0.12) and when they were closed (mean EMR: static = 0.12, moving = 0.17) (Antrobus et al., 1964). Amadeo and Shagass also report higher EMR in moving imagery compared to baseline.
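As a concrete illustration, EMR is simply the count of detected saccades divided by observation time. The sketch below uses made-up saccade onset times (not data from any of the studies cited) to show how means like those above would be obtained.

```python
# Hypothetical sketch: eye movement rate (EMR) as saccades per second.
# Saccade onset times are illustrative, not data from Antrobus et al. (1964).

def emr(saccade_onsets, duration_s):
    """EMR = number of EOG-detected saccades / observation window in seconds."""
    return len(saccade_onsets) / duration_s

# Two illustrative 60 s imagery trials (saccade onset times in seconds).
static_trial = [5.2, 21.0, 39.8, 55.1]
moving_trial = [3.1, 9.4, 17.7, 28.2, 36.5, 48.9, 52.3]

print(round(emr(static_trial, 60), 2))  # 0.07
print(round(emr(moving_trial, 60), 2))  # 0.12
```

In practice the onsets would come from a saccade-detection step on the EOG trace, and EMRs would be averaged over trials and participants to give condition means like those reported above.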

It is not clear what this small increase in EMR during moving imagery represents. One possibility is that the difference in EMR between moving and stationary imagery is related not to motion per se (following a moving object is a function of smooth pursuit, not of saccadic eye movements) but to operations related to motion. For example, saccades may serve to create motion, or to update the appearance of the moving object or of the changing scene. Since motion is processed in the dorsal stream of visual processing, which also includes an area involved in the generation of visually guided saccades (Goodale & Milner, 1992), it is possible that the increase in saccadic activity registered by the EOG in moving imagery is related to the moment-to-moment change in the content of an image created by movement. From that viewpoint, however, variability within directed imagery could be better explained by the cognitive change hypothesis (Antrobus, 1973) described later in the text. Studies of daydreaming and directed imagery do not require other mental operations (e.g., mental rotation) or verbalization in addition to image formation. However, the relationship between eye movements and mental processes has been most extensively studied with questions requiring a verbal response upon completion of question-directed processing. Typically, the EMR of tasks requiring visuo-spatial processing has been compared to the EMR of tasks requiring verbal-linguistic processing. Spatial questions have consistently been found to be associated with low EMR relative to verbal-linguistic questions (Ehrlichman & Barrett, 1983; Klinger et al., 1973; Weiner & Ehrlichman, 1968). This low rate is not affected by the verbal requirements of the answer.
After equating response requirements across spatial and verbal tasks to allow the EMR to vary along the visuo-spatial dimension, Weiner and Ehrlichman (1976) reported comparable EMR for answers requiring one word, a list of items, or extended speech (mean EMR: one word = 0.70, list = 0.61, speech = 0.60) with eyes open. Ehrlichman and Barrett (1983) found that performing similar tasks with eyes covered in the dark preserves the EMR pattern across different response types (mean EMR: list = 0.48, speech = 0.60). Given the limited justification for directly comparing EMR in spatial questions with EMR findings from directed imagery tasks without a verbal response, it is likely that having to produce a verbal answer increased saccadic activation. Nevertheless, if verbalization does have an effect on saccadic production, that effect seems to be additive, hence preserving the difference in EMR between verbal and spatial questions. We also note that, across studies, the spatial questions used in assessing ocular activity vary considerably in their visuo-spatial processing requirements. For example, some tasks require simply forming a mental picture and giving a verbal signal when a clear and vivid image is achieved. Other questions require forming a static image and reporting a specific detail from it (e.g., "What color is the top stripe of the American flag?"). Another type of question requires recalling a sequence of images from episodic memory (e.g., "Describe the route by which you came to ETS today."), while a variety of questions call for an additional mental operation to be performed on the visual image (e.g., "What other letters do you get by rotating or flipping a lowercase printed p?"). While differing in processing requirements, all these questions involve recall of information.
They tap into one's knowledge of the world (e.g., "If you are the minister at the wedding, on which side of you does the bride stand?"), autobiographical memory ("How many windows are there in your house or apartment?"), and episodic memory ("What did you eat for breakfast yesterday?"). We further note that, although they involve various operations, and some of them could be answered without
creating visual imagery, on average, these tasks are consistently accompanied by low ocular motility. Low EMR has been considered a reliable physiological index of the presence of visual imagery (Barrett & Ehrlichman, 1982). Ehrlichman, Weiner, and Baker (1974) and Weiner and Ehrlichman (1976) reported low rates of eye movements in response to the questions described above (relative to a variety of verbal-linguistic questions), to the point of intermittent complete suppression ("stares"). However, since, as previously discussed, spatial tasks may depend on processes other than visual imagery (e.g., memory), it is possible that the EMRs consistently found in spatial tasks have been erroneously attributed to the presence of visual imagery. For example, in their Experiment 1, Ehrlichman, Micic, Sousa, and Zhu (2007) examined the effect of imagery on the rate of eye movements by varying imagery modality (visual versus auditory) while holding the processing requirements (i.e., the extent to which a task required searching through LTM) and response requirements (i.e., number of words in the answer) of the tasks constant. The low-retrieval tasks used for this purpose were Mental Alphabet Shape and Mental Alphabet Sound. They required accessing the alphabet, evaluating each letter, keeping track of the number of letters that satisfied either an orthographic (e.g., three straight lines) or a phonological (e.g., long "E" sound) criterion, and producing a one-word answer. The high-retrieval tasks were Auditory Word Retrieval and Visual Object Retrieval. These tasks involved retrieval of three words rhyming with the target word and retrieval of three objects with specific visual properties, respectively. Participants reported significantly more imagery in connection with the low- and high-retrieval visual tasks than for the two versions of the auditory tasks.
Nevertheless, the difference in EMR between high- and low-retrieval tasks could not be attributed to visual imagery, since ocular activity varied along the dimension of retrieval requirements and not visual imagery. The findings of Ehrlichman et al. (2007) provide strong evidence in support of the notion that visual imagery may not be related to saccadic eye movements. If visual imagery did indeed require saccadic activation, it would be impossible to answer questions requiring "scanning" of a visual image while maintaining a steady gaze. However, Kowler and Steinman (1977) found that it is, in fact, possible to answer questions requiring visual imagery (e.g., "How many windows are there on the second floor of your parents' home?") while fixating the center of a grey disc. Based on their findings of successful fixation (mean EMR = 0.06) and performance, and on reports of other research (e.g., Janssen & Nodine, 1974), Kowler and Steinman concluded that visual imagery does not require saccades. Nevertheless, there is evidence that eye movements seem to be related to the content of imagery (Brandt & Stark, 1997) and may contribute to the generation of an image once the object on which the image is to be based is no longer visible. For example, Laeng and Teodorescu (2002) found that gaze fixation during imagery impaired the ability to form images of visual stimuli previously viewed without gaze restriction. This finding is at odds with the finding of Kowler and Steinman (1977) that visual imagery tasks can be successfully performed without saccades. However, these two discrepant findings can be reconciled by recognizing the difference in the familiarity of the visual scene on which the image is based and the ways in which that scene had been encoded (visual, one-time encoding of an unfamiliar scene presented as a picture of a fish versus a familiar scene repeatedly encoded through multiple sensory channels).
As suggested by Farah (1988), visual imagery is visual in the sense of its neuroanatomical substrate, but is not visual in the sense of representing only information acquired through visual sensory channels. When performing tasks involving imagery,
individuals may choose between visual and non-visual spatial representations. It is possible that visual imagery of a recently experienced scene encoded only visually may depend on saccades, while a visual image based on an old memory established through multiple codes may not. Hebb (1968) made several important remarks concerning imagery, one of them being that images are in essence memory representations of objects and as such are not dissociated from thought. It is therefore possible that a better account of the differences in EMR across various tasks could be given by relating ocular activity in those tasks to various aspects of memory (e.g., short-term, long-term) than by relating it to the scanning of imagery. Interestingly, neurophysiological evidence is in agreement with the proposal that the saccadic eye movements observed in a variety of non-visual tasks do not reflect visual imagery. While visual imagery activates the visual cortex (e.g., Kosslyn et al., 1993), voluntary, self-paced saccades do not (Miyauchi, Misaki, Kan, Fukunaga, & Koike, 2008). Although not accompanied by imagery, the saccadic eye movements registered by Miyauchi et al. (2008) appear to be of the kind that would be involved in the scanning of visual images according to the imagery scanning hypothesis. To the extent that the brain activation results of Miyauchi et al. are applicable to the spontaneous saccades seen in verbal and spatial tasks, the failure of the functional account of saccades in mental imagery offered by the imagery scanning hypothesis could be attributed to the lack of a functional anatomical link between visual imagery and saccadic activity.

Images or Words: Hemispheric Activation Model of Lateral Eye Movements

The first evidence of eye movements that differed consistently in terms of direction was provided by Day (1964), who noted that people tend to look preferentially to one side while attempting to answer questions requiring reflection, and who coined the term lateral eye movement (LEM) to emphasize the laterality of gaze shifts. LEM refers to the first eye movement that shifts the gaze either to the left or to the right immediately after a person is asked a question requiring some reflection (Ehrlichman & Weinberger, 1978). Day considered the presence of stimulus-independent gaze shifts while attempting to answer a question to be concomitant with an attentional shift from a passive to an active expressive mode, and proposed that gaze direction was a stable behavioral characteristic related to perceptual, cognitive, and physiological characteristics of a person. In contrast to Day, Bakan (1969) suggested that asymmetries of gaze direction were related to asymmetries of brain function, and with that proposition he laid down the cornerstone of the hemispheric asymmetry hypothesis. The gaze direction preference suggested by Day (1964) was supported by Duke (1968), who found that individuals make 86% of eye movements in the same direction irrespective of their eye dominance. LEMs were also found to be related to visual attention, EEG alpha activity, anxiety, Scholastic Aptitude Test scores, and hypnotizability (Bakan, 1969), thus substantiating the proposition that people may be classified as "left-movers" or "right-movers" (Day, 1964). Left-movers were found to be more easily hypnotized, more likely to report clear visual images, more interested in social science and humanities, and more likely to have a higher verbal than mathematical score on the SAT than right-movers. Bakan
believed that these individual difference characteristics could be linked to individual differences in the relative activation of the left and right hemispheres. According to this view, shifts of gaze occur due to increased activation of the hemisphere contralateral to the direction of the gaze, and the direction of LEMs identifies the hemisphere more prone to such activation. The underlying assumption was that the excitability of the hemispheres varied from person to person. However, the proposition that the direction of LEMs is always related to physiological activation (e.g., EEG activity) and personality traits has not been met with empirical support (Ehrlichman & Weinberger, 1978). In contrast to Bakan (1969), who was interested in individual differences, Kinsbourne (1972) related LEMs not to personality but to ongoing cognitive function. He investigated LEMs in the context of brain asymmetry research with the goal of determining whether lateral shifts of gaze could be related to hemispheric specialization and be used as a motor measure of laterality in cognitive function, and not as a measure of a general personal trait of hemisphericity. The brain asymmetry research at that time, utilizing both clinical (e.g., "split-brain" studies) and nonclinical (e.g., dichotic listening studies) approaches, indicated that in right-handed individuals, language, verbal processing, mathematical abilities, and encoding and retention of verbal material were left-lateralized, and that spatial learning and memory, and processing of spatial relations, patterns, and location were right-lateralized (Gazzaniga, 1995). Evidence that LEMs are reliably related to the known right- and left-lateralized functions offered the prospect of a simple and efficient paradigm for studying brain asymmetries (Kinsbourne, 1972).
Kinsbourne (1972) hypothesized that if the direction of LEMs identified the hemisphere specialized for the type of cognitive processing required by a cognitive task, questions depending on left-hemisphere processing would result in rightward eye movements and questions requiring right-hemisphere processing would result in leftward eye movements. Specifically, the eyes should move to the left in response to a question requiring spatial processing, and to the right in response to questions requiring verbal processing. Verbal questions typically used in LEM research included word definitions (e.g., "A large spotted animal with a long neck"), proverb interpretation (e.g., "Let sleeping dogs lie"), sentence generation, and semantic correction of sentences (e.g., Ehrlichman, Weiner, & Baker, 1974). Spatial questions required formation and manipulation of mental images ("Make a mental image of a poodle."), processing of spatial orientation ("If a person is facing the rising sun, where is south with respect to him?"), and direction (e.g., recalling the direction of Lincoln's face on the penny) (Ehrlichman et al., 1974). To account for the differential effect of verbal and spatial questions on LEMs, Kinsbourne (1972) put forth a neurophysiological model, according to which task-induced asymmetric hemispheric activation directed both attention and physical orientation toward contralateral external space (Ehrlichman et al., 1974). In this model, the brain is conceptualized as an integrated network in which activation of systems not directly related to the processing of task demands could occur due to presumed "cross-talk" between neural systems. Based on that conception of neural function, the pattern of motor activation could be attributed to "overflow" of activity from cognitive areas responding to task demands into areas of attentional and oculomotor control within the same hemisphere (Ehrlichman & Weinberger, 1978). 
As such, oculomotor activation during cognitive tasks was a part of the general activating effect of cognitive processing, an aspect of a turning tendency created by disproportionate activation of one hemisphere by task requirements. Therefore, lateral gaze of

Dragana Micic and Howard Ehrlichman

shifts are epiphenomenal to ongoing cognitive processing but can serve as a physiological indicator of lateralized thinking. The hemispheric asymmetry account of LEMs received some support from the early studies. Kinsbourne (1972) videotaped 20 left-handed and 20 right-handed individuals while they answered verbal, numerical, and spatial questions. He found that among right-handed individuals, verbal questions tended to be followed by horizontal eye movements, typically to the right. Verbal questions elicited more rightward eye movements than spatial and numerical questions. Spatial questions elicited more leftward and vertical eye movements than verbal questions. Among left-handed individuals, horizontal eye movements commonly occurred on all questions, and question type had no effect on gaze direction. Math questions did not affect gaze direction in either group. Performance was equivalent across question types, thus eliminating the possibility that the differences in gaze direction could be an artifact of some aspect of performance (e.g., verbalization). To account for vertical, particularly upward, eye movements in visuo-spatial processing, it was suggested that upward shifts of gaze reflect bilateral hemispheric activation. Kocel, Galin, Ornstein, and Merrin (1972) used an array of 20 verbal and 20 spatial questions and found that verbal questions (i.e., sentence generation and word definition) and mathematical computation (i.e., simple arithmetic) elicited significantly more LEMs than spatial questions (i.e., imagery, spatial direction) and musical questions. Twenty-two of the 23 participants included in the study made significantly more rightward LEMs in response to verbal questions (68%) than in response to spatial questions (45%). This difference was thought to represent the differential effect of cognitive mode on the direction of LEMs. Ehrlichman et al. (1974) sought to replicate the findings of vertical and horizontal eye movements reported by Kocel et al. 
(1972) and Kinsbourne (1972) with a series of three experiments. Tasks in this study included spatial questions (e.g., visual image), neutral questions (e.g., favorite book), and an extended set of verbal questions designed to sample a wider range of language-related processes, such as syntactic and semantic knowledge of language, and some logic problems. Questions were answered either in front of the experimenter or in front of a camera, and eye movements were recorded immediately after the question was asked. There was no significant difference in the direction of horizontal eye movements for the verbal and spatial questions. Differences were observed in the vertical dimension, with more downward eye movements to verbal than to spatial questions, and more upward eye movements to spatial than to verbal questions (Kinsbourne, 1972). Although a number of investigations supported the basic propositions of the hemispheric activation model (e.g., Gur & Harris, 1975; Schwartz, Davidson, & Maer, 1975; Weiten & Etaugh, 1974), attempts at replicating the original findings of Kinsbourne and Kocel were met with numerous failures. A comprehensive review of the literature on LEMs between 1972 and 1977 (Ehrlichman & Weinberger, 1978) found that only nine of the 19 reviewed studies concluded that verbal questions elicited more rightward LEMs than spatial questions. The review revealed a number of differences among the reviewed studies regarding the presence or absence of the experimenter in the room, types of questions, the recording method, and scoring criteria. Nevertheless, the inconsistency of reported findings seemed to be attributable to more fundamental problems, such as the validity of the questions and theoretical limitations, than to methodological discrepancies among studies; the inference that LEMs reflect asymmetric hemispheric activation was based on an underlying but unverified

assumption that verbal questions activate the left hemisphere more than the right and that spatial questions activate the right hemisphere more than the left. All studies that found no difference in LEMs, and most of those that found a difference in LEMs in response to verbal and spatial questions, reported the occurrence of stares, the complete absence of gaze shifts as subjects answered a particular question. More stares and more leftward and upward eye movements are reported in association with spatial questions, while fewer stares and more downward and rightward movements are found with verbal questions (Galin & Ornstein, 1974; Kinsbourne, 1972; Kocel, Galin, Ornstein, & Merrin, 1972). Ehrlichman et al. (1974), who found a significantly higher number of stares for spatial questions (M number of stares = 10.82) than for verbal questions (M number of stares = 8.18), suggested that the increase in staring from verbal to spatial questions occurred due to an overall reduction of the eye movement rate (EMR) when reflecting on spatial questions. Weiner and Ehrlichman (1976) categorized stares into long and short by using the median length of 2.58 seconds. They found that staring behavior was modulated by question type and that more long stares occurred to spatial than to verbal questions. Ehrlichman and Weinberger (1978) noted that there was a high occurrence of visuo-spatial trials that yielded only stares or purely vertical eye movements. In contrast to the proposed relationship between LEMs and verbal and spatial questions, the relationship between those types of questions and stares has been consistently supported in research (DeGennaro & Violani, 1988; Galin & Ornstein, 1974; MacDonald & Hiscock, 1992). Subsequent research attested to the lack of consistency in LEM research reported by Ehrlichman and Weinberger (1978). 
For example, Saring and von Cramon (1980) asked spatial and verbal questions and recorded the direction of the first eye movement before the question, during the question, immediately after the question presentation, at the beginning of the answer, and after the answer. In contrast to the hemispheric asymmetry prediction, eye movement analysis across the reflection (i.e., starting after the question presentation and ending before the answer) and answer (i.e., starting at the beginning and ending at the end of an answer) phases of the trial found more shifts to the left for verbal than for spatial questions. Raine (1991) addressed the issue of the validity of questions and, with a study conducted in the dark, tested the possibility that the inconsistent findings of LEM research could be the result of confounding by visual stimuli present during experimentation. To test the construct validity of verbal (i.e., proverb interpretation) and spatial (i.e., image visualization) questions, he administered two measures of laterality extracted from the Wechsler Adult Intelligence Scale (WAIS-R), the Digit Span subtest, associated with left-hemisphere functions, and the Block Design subtest, associated with right-hemisphere functions, and combined them with a verbal (i.e., reporting the predominant consonant-vowel pair) and nonverbal (i.e., pitch discrimination) version of a dichotic listening task. The four measures served as independent measures of laterality. The analysis of the first eye movement of the reflection phase showed the predicted rightward shift for the verbal questions but no consistent eye-direction pattern for the spatial questions. Moreover, LEM scores were not related either to the dichotic listening tasks or to the WAIS subtests. 
The study failed not only to verify the differential hemispheric effect of the verbal and spatial questions typically used in LEM research but, more importantly, to show a relationship between the direction of eye movements and standard tests of brain asymmetry. The findings of Raine (1991) suggested that the expected association between lateral gaze shifts and hemispheric specialization may be difficult to demonstrate. However, although traditional LEM research found only sporadic evidence of task-specific effects on the

direction of lateral eye movements, the possibility of an association between lateral gaze shifts and hemispheric asymmetry cannot be excluded. We note that brain activation studies of sleep have found evidence of a link between lateral eye movements and lateralization of function. During REM sleep, oculomotor activity seems to be biased toward leftward eye movements (Ioannides et al., 2004), and frontal eye field (FEF) and dorsolateral prefrontal cortex (dlPFC) activations are significant only for the right hemisphere (Hong, Gillin, Dow, Wu, & Buchsbaum, 1995). Under the assumptions that eye movements of REM sleep coincide with dreaming, and that dreams are predominantly visual, this finding supports the notion that gaze direction may indicate the more active hemisphere, and that the effect of asymmetric brain activation on eye movements may be more evident in spontaneous mental activity such as dreaming than in directed thinking. However, Ehrlichman, Antrobus, and Wiener (1985) found no evidence of greater right hemisphere activation in a study of REM sleep and EEG asymmetry. Moreover, the assumption that visual imagery is right-lateralized has not been established. Indeed, there is evidence that both hemispheres are involved in the production of visual imagery (Ehrlichman & Barrett, 1983; Farah, 1995). In his 1964 paper, Day had noted that LEMs are more likely to occur after questions requiring reflection than after factual questions, and Meskin and Singer (1974) noted that questions requiring "extensive memory search," such as "What color was your first bike?", were more likely to be followed by LEMs than questions requiring "minimal search," such as "What is your mother's name?". Both observations suggest that the amount of memory-related processing required by a task may be an important modulator of the effect of verbal and spatial tasks on eye movements. Furthermore, there is no evidence that questions categorized as visuo-spatial do not activate verbal processes. 
For questions such as asking for the color of one's first bike, the answer may be based entirely on verbal processing, or entirely on a visual image, or it may be reached through various degrees of contribution from both processes. Failing to consider lateral gaze tendencies as a function of cognitive dimensions other than the visuo-spatial versus verbal distinction may have been the main reason for the insufficient explanatory power of the hemispheric asymmetry model and its ultimate rejection as a theoretical account of ocular activation in non-visual thinking.
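The scoring logic common to the LEM studies reviewed above can be sketched as follows. This is an illustrative sketch, not code from any cited study: each trial is scored by the direction of the first lateral eye movement after the question (or as a stare when no lateral shift occurs, as in Ehrlichman et al., 1974), and the proportion of rightward shifts is then compared across question types. The function name and sample data are hypothetical.

```python
from collections import defaultdict

def lem_proportions(trials):
    """Proportion of rightward first eye movements per question type.

    trials: iterable of (question_type, first_movement) pairs, where
    first_movement is 'left', 'right', or 'stare' (no lateral shift).
    Stares are tallied separately rather than folded into the proportion.
    """
    counts = defaultdict(lambda: {"left": 0, "right": 0, "stare": 0})
    for qtype, movement in trials:
        counts[qtype][movement] += 1
    result = {}
    for qtype, c in counts.items():
        lateral = c["left"] + c["right"]
        result[qtype] = {
            "prop_right": c["right"] / lateral if lateral else None,
            "n_stares": c["stare"],
        }
    return result

# Hypothetical data: verbal questions skew rightward, spatial ones leftward
trials = [("verbal", "right")] * 7 + [("verbal", "left")] * 3 + \
         [("spatial", "left")] * 6 + [("spatial", "right")] * 2 + \
         [("spatial", "stare")] * 2
scores = lem_proportions(trials)
print(scores["verbal"]["prop_right"])   # 0.7
print(scores["spatial"]["prop_right"])  # 0.25
print(scores["spatial"]["n_stares"])    # 2
```

Scoring stares separately matters because, as noted above, the verbal/spatial difference in stare frequency replicated more consistently than the difference in lateral direction.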

Interference Hypothesis

The visual interference hypothesis is based on two main assumptions: first, that all visual information, internally generated or supplied by the environment, is processed in a limited-capacity central processor (Antrobus, 1973), and second, that external visual stimuli have priority over internally generated visual imagery in accessing the processor. This processing bias is thought to have an important adaptive value and ultimately to function in the service of survival, since the well-being of humans depends on constant processing of their immediate visual environment (Glenberg, 1997). Nevertheless, humans do not spend their whole life scanning their surroundings; they function in both perceptual and cognitive modes (Antrobus, 1973). Since cognitive and perceptual modes are not mutually exclusive, external visual stimuli and internally generated stimuli compete with each other to enter the processor. Because the processor is assumed to be of limited capacity, it can be easily overloaded when two tasks share common visual and attentional resources. The overload causes

interference: processing of one task suffers due to the presence of simultaneous tasks requiring the same processing operations. This hypothesis proposes that when the primary task requires visual processing (e.g., a visuo-spatial question) or significant attention (e.g., a difficult task), the pattern of eye movements will automatically change to prevent interference. Since each eye movement brings new external information into the processor, prevention of interference can be achieved either by reducing the frequency of eye movements or by redirecting the gaze (i.e., gaze aversion). Singer (1975) referred to the reduction of visual input by inhibition of eye movement frequency as "gating out," a process by which task-irrelevant visual stimuli are excluded from processing, while Marks (1973), noting that interference may be modality-specific, suggested that the reduction of visual input through a decrease in the rate of eye movements would be most needed during processing of visual images. Since gaze aversion is most relevant to social interactions that include eye contact, it was discussed in detail in the section on interpersonal behavior. The current discussion focuses on the role of the frequency of eye movements in the control of interference. Empirical support for the interference account of saccadic activity related to non-visual cognition was sought by comparing ocular motility when people were answering spatial (e.g., "What does your stove look like?") or verbal-linguistic questions (e.g., "What is the meaning of the proverb: A rolling stone gathers no moss?"). This comparison produced consistent findings of lower EMR in spatial than in verbal-linguistic tasks (e.g., Ehrlichman, 1981; Ehrlichman & Barrett, 1983; Hiscock & Bergstrom, 1981; Weiner & Ehrlichman, 1976). 
For example, Weiner and Ehrlichman (1976) found significantly fewer eye movements for visuo-spatial questions (M EMR = 0.66) than for verbal questions (M EMR = 0.84), regardless of whether the answer could be articulated in one word, a list of words, or through extended speech. This finding was replicated (M EMR spatial = 0.70; M EMR verbal = 1.02) when data on ocular motility were collected with the electrooculogram (EOG) and the requirements of the verbal response were equated across tasks (i.e., verbal-speech, spatial-speech, verbal-list, spatial-list) (Ehrlichman & Barrett, 1983). According to the interference hypothesis, this slowing of eye movements typical of spatial questions occurs due to the presence of visual stimuli, with complex and engaging visual stimuli creating more interference than simple stimuli. However, there is little evidence to support this assertion. Overall, the eyes move at a comparable rate in the presence of a complex stimulus, such as an image of a human face on a screen, and a simple stimulus, such as a gray oval or a video camera (Ehrlichman, 1981; Weiner & Ehrlichman, 1983). Furthermore, analysis of the EMR while individuals listen to a question, reflect on it, and produce their answer indicates that the difference in the rate of eye movements seems to be related more to the ongoing cognitive process than to variability in the visual display. Ehrlichman (1981) found that individuals made significantly fewer eye movements while looking at the face (M EMR = 0.27) than while looking at the oval (M EMR = 0.91) as they listened to questions. During thinking and speaking, their eye movements occurred at a rate close to one per second irrespective of the complexity of the visual stimulus. This set of findings challenged the assumption that complex stimuli produce more interference and, consequently, a greater reduction in eye movements. 
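The EMR figures quoted throughout this literature are simply counts of detected eye movements divided by the duration of the scored interval, in movements per second. A minimal sketch (the function name and saccade timestamps are illustrative, not data from the cited studies):

```python
def emr(movement_times, t_start, t_end):
    """Eye movement rate: movements per second within [t_start, t_end)."""
    n = sum(t_start <= t < t_end for t in movement_times)
    return n / (t_end - t_start)

# Hypothetical trial: eye movement onsets (in seconds) during a
# 10-second response window
saccades = [0.4, 1.1, 2.9, 4.2, 5.0, 6.7, 8.3]
print(emr(saccades, 0.0, 10.0))  # 0.7
```

A value of 0.7 per second falls in the range reported above for visuo-spatial questions, while verbal-linguistic questions tend to yield rates closer to 1.0.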
According to the interference hypothesis, reduction of the rate of eye movements serves to reduce interference from visual information, thereby optimizing cognitive processing and, ultimately, improving performance. Accordingly, if the eyes were not permitted to move (e.g., gaze

fixation), the damaging effect of interference should be evident in poor performance, and more so with a complex than with a simple stimulus. However, this prediction has not been supported in research that manipulated the complexity of the visual stimulus by presenting either a visual image of the experimenter's face via a closed-circuit video monitor or a gray oval (Ehrlichman, 1981). Ehrlichman (1981) measured performance using latency, defined as the time between the end of the question and the beginning of the answer; fluency, assessed as the mean length of hesitation pauses in verbal responses, operationalized as unfilled segments lasting at least 200 ms; and quality of answer, as rated on a 7-point scale. The interference hypothesis would predict more interference during fixation than during free-eye-movement trials. It would predict longer latencies, longer pauses, and poorer quality of answers while fixating than while free to move the eyes, and when having to fixate the face rather than the oval. However, during fixation, latency was shorter, while pauses and the quality of speech showed no effect of fixation. Furthermore, latency and pauses were shorter in the presence of the face than of the gray oval. In contrast to the predictions of the interference hypothesis, there was no interaction between eye condition (fixed, free) and stimulus (face, oval), and the oval was found to be more disruptive than the face. Because the only real difference between the two stimuli was the presence or absence of the interviewer's face, it was suggested that performance may be better when the interlocutor is visible. However, there was no evidence that having to fixate either the experimenter's face or a gray oval negatively affected performance on visuo-spatial and verbal-linguistic tasks. The finding that gaze fixation does not affect performance was confirmed by Micic et al. (submitted). 
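The latency and fluency measures just described can be computed from timestamped speech segments. The following is an illustrative sketch under stated assumptions (the function name, segment data, and question-end time are hypothetical, not from Ehrlichman, 1981; only the 200-ms pause criterion comes from the text above):

```python
def latency_and_pauses(question_end, speech_segments, min_pause=0.2):
    """Response latency and mean unfilled hesitation pause.

    question_end: time (s) at which the question ended.
    speech_segments: sorted list of (onset, offset) speech times (s).
    A pause is an unfilled gap between consecutive segments of at
    least min_pause seconds (200 ms, per the operationalization above).
    """
    latency = speech_segments[0][0] - question_end
    pauses = [on - off
              for (_, off), (on, _) in zip(speech_segments, speech_segments[1:])
              if on - off >= min_pause]
    mean_pause = sum(pauses) / len(pauses) if pauses else 0.0
    return latency, mean_pause

# Hypothetical trial: question ends at t = 5.0 s; the answer is spoken
# in three segments with one 0.3-s gap and one 0.05-s gap
segments = [(5.8, 7.2), (7.5, 9.0), (9.05, 11.0)]
lat, mp = latency_and_pauses(5.0, segments)
print(round(lat, 2))  # 0.8
print(round(mp, 2))   # 0.3  (the 0.05-s gap is below the 200-ms criterion)
```

The design choice of a minimum gap duration keeps ordinary articulatory breaks from being counted as hesitations; only sub-criterion gaps are discarded.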
This study will be discussed in detail later in the text, but its main findings are introduced here as they relate to the performance predictions of the interference hypothesis. Micic et al. replaced the ad hoc measures of performance (e.g., response latency, hesitations in speech) used by Ehrlichman (1981) with clear and standard measures of performance (e.g., hits, false alarms, number of words) typical of auditory n-back tasks and tests of phonemic fluency (to be explained later). They found no evidence of interference due to fixation: performance on the fluency tasks and a continuous performance task was comparable with fixed or free gaze. The lack of evidence that gaze fixation creates interference could be understood in two ways: either the approach of the two studies was not adequate for testing the interference model, or the hypothesis's main assumption, that interference occurs due to the presence of visual stimuli, is incorrect. While testing the interference hypothesis by immobilizing shifts of gaze is limited to voluntary suppression of eye movements, the validity of the interference hypothesis can be further challenged by introducing experimental conditions that involve a dark environment or closing of the eyes. If, as the interference hypothesis suggests, eye movements are reduced due to the presence of external visual stimuli, the change in EMR should not occur in the dark. Moreover, the difference in EMR between verbal and spatial tasks should disappear, since there would be no competition for processing resources. By asking verbal-linguistic and visuo-spatial questions in full light and in complete darkness, Ehrlichman and Barrett (1983) provided strong evidence that when the eyes are free to move, shifts of gaze are systematically related to the ongoing cognitive activity and are not contingent on the presence of visual stimuli. 
The eyes moved at a higher rate during verbal-linguistic than during visuo-spatial tasks, both in a fully illuminated environment and in complete darkness. Furthermore, comparison of ocular motility as measured by EOG during spatial and verbal tasks with that of a no-task resting period revealed that EMR increased

during verbal tasks relative to the baseline when the eyes were open, in both full light and total darkness. The EMR of verbal-linguistic tasks (M light = 1.04; M dark = 0.69) was higher than the EMR of the no-task baseline (M light = 0.83; M dark = 0.45). The EMR of spatial tasks was found to be lower than the EMR of the baseline in the light (M = 0.73) but higher in the dark (M = 0.54). There is no explanation for the EMR in visuo-spatial tasks obtained in the dark, but although slightly higher than the baseline, this frequency of eye movements is significantly lower than the one associated with verbal-linguistic tasks performed in the dark. According to the interference hypothesis, ocular motility is reduced during visuo-spatial questions because they require processing of visual imagery, which could be negatively affected by processing of the physically present visual stimuli. However, although this explanation of ocular quiescence in visuo-spatial tasks may appear intuitively correct, it cannot be used to account for the increase in EMR during verbal questions. One way the interference hypothesis could account for the increase in EMR was offered by Weiner and Ehrlichman (1976), who proposed that for verbal questions, an increase in EMR may be less interfering than a slowing of ocular motility. They suggested that ocular slowing may result in focusing on a few points, which could potentially lead to conceptual elaborations (e.g., associations) that may be as detrimental when answering verbal questions as accessing new visual information may be for spatial questions. This proposition is consistent with the finding that eye movements interfere less with verbal processes than with spatial processes related to working memory (WM) (Lawrence, Myerson, Oonk, & Abrams, 2001). 
We note that the evidence against the interference account of ocular behavior in non-visual cognition does not exclude the possibility that reduction of eye movements or gaze aversion in some situations occurs due to distraction present in the environment. However, the evidence strongly suggests that the pattern of eye movements is not systematically related to changes in the visual environment but to various cognitive tasks. Furthermore, low ocular activity is not exclusive to spatial tasks; it also occurs in situations in which it could not be accounted for by the interference hypothesis. For example, the eyes move at a low rate during stationary imagery relative to moving imagery (Antrobus, 1973), and when people are engaged in a fantasy relative to when they are suppressing it, either with their eyes open (Antrobus et al., 1964; Singer & Antrobus, 1965) or covered (Singer & Antrobus, 1965). Low EMR also occurs during processing of auditory stimuli, specifically in auditory vigilance tasks. This category of tasks requires a motor response (e.g., a key press) to at least two different tones in a high-low tone detection test (Singer, 1975) or to a specific pattern of stimuli (e.g., letters) within an ongoing random string of stimuli in the continuous performance task (Ehrlichman et al., 2007). These tasks have a very strong attentional component, but they do not qualify as difficult tasks and do not evoke visual imagery. The EMR associated with this task remains low when the eyes are closed (M = 0.32) and even when verbalization is required, such as in auditory vigilance with shadowing (M = 0.38) (Ehrlichman et al., 2007, Experiment 2). While Ehrlichman et al. 
(2007) considered the finding of low EMR in the auditory vigilance task in terms of the task's low retrieval requirements (to be discussed later), Singer (1975) considered it from two viewpoints: as part of a general motor inhibition needed to optimize processing by increasing the signal-to-noise ratio, and from the perspective of a limited-capacity perceptual-cognitive processor. Evidence of the multiple representational codes (Posner, 1973) and processing capacities (Brooks, 1968) of the central processor led Singer (1975) to

assume a multisensory nature of the processor and to propose that interference may be cross-modal. According to this view, gating out of visual information may occur when the processor is overloaded with auditory information, and ocular inhibition may be learned or simply built in, so that it continues to appear even when the eyes are closed. Early theoretical indications of sensory gating and its relationship to cognitive processes and eye movements have received some support from imaging studies. Sensitivity to visual interference was detected in the PFC (e.g., Postle & Brush, 2004; Postle, Brush, & Nick, 2004), and the neuroanatomical substrate of the gating control process was found within the dlPFC (e.g., Postle, 2005). Sakai, Rowe, and Passingham (2002) suggested that activation of the dlPFC seems to be responsible for "active maintenance," which strengthens the coupling of activity between the FEF and the intraparietal sulcus, the general area of the parietal eye field (PEF). This brain imaging finding, though not directly concerned with eye movements, is more consistent with the idea that eye movements of non-visual cognition may be related to memory functions, as proposed by the memory hypothesis (to be explained later), than to gating out of visual input, as suggested by the visual interference hypothesis.

Mirroring Body or Mind: General Arousal versus Cognitive Change


Arousal Hypothesis

The arousal hypothesis proposes that saccadic eye movements occur as non-specific motoric concomitants of ongoing cognitive activity, and that their frequency reflects changes in arousal or activation caused by factors such as question difficulty or the emotional quality of the material. Accordingly, it predicts that tasks associated with high levels of arousal are also associated with high EMR and that tasks associated with low levels of arousal are linked to low EMR. It also predicts correlations between EMR and indexes of general physiological arousal such as tonic heart rate (HR), galvanic skin response (GSR), and alpha brain waves (alpha EEG). These predictions have been met with partial empirical support. Antrobus (1973) compared the rate of eye movements during trials of an auditory vigilance task (i.e., pressing two keys to signal tone differentiation) to the frequency of eye movements during inter-trial rest periods. Ocular activity was low during task trials and high during periods of rest, in which a 10-fold increase in eye-movement and blinking frequency and amplitude was found for some individuals. This effect persisted even when the subjects' motor activity was controlled during trial and rest periods (i.e., the keys were pressed during rest trials in an alternating right-left sequence). The pattern of change in HR over trial and inter-trial periods was similar to that of ocular activity, thus offering support for the suggestion that EMR increases due to increased arousal during inter-trial periods. That ocular motility may be a secondary effect of the emotional aspect of cognitive tasks was examined by Singer and Antrobus (1965). They found that individuals involved in a task with a strong emotional component, such as a fantasy about a person, make more eye movements while suppressing their fantasy than while generating it, irrespective of whether their eyes are open or closed. 
However, the prediction that EMR would be higher during emotionally charged than during emotionally neutral fantasy was not supported. Furthermore, the general arousal

explanation of ocular activation was not supported. The rate of eye movements was found to be related to HR, but HR could not be related to either cognitive activity (imagine vs. suppress) or reported task difficulty. Considering task difficulty as a possible factor in ocular activation via increased arousal, DeGennaro and Violani (1988) conducted a study of LEMs with verbal-linguistic and visuo-spatial questions. They found an effect of difficulty on the rate of eye movements for verbal-linguistic but not for visuo-spatial questions. Significantly more eye movements were associated with difficult than with easier verbal questions, but this dichotomy did not occur for the difficult and easy spatial questions. Ehrlichman et al. (2007) also reported an effect of difficulty in some tasks. In Experiment 1, they found that a harder version of information retrieval (e.g., "Name four birds that cannot fly.") was associated with significantly higher EMR than an easier version of that task (e.g., "Name four fruits."). In Experiment 2, an a posteriori analysis based on the mean difficulty ratings revealed significantly higher EMR in the harder version of the mental alphabet task (e.g., "Mentally go through the alphabet and whenever you come to a letter that has a short 'e' sound and a straight line, say the letter out loud.") than in the easier version of that task (e.g., "Mentally go through the alphabet and whenever you come to a letter that has a short 'e' sound, say the letter out loud."), and in the harder version of the information retrieval fluency task (e.g., "Name as many scientific areas as you can think of until I say stop.") than in the easier version of that task (e.g., "Name as many modes of transportation as you can until I say stop."). However, even when the low-retrieval tasks were rated as harder than the easy high-retrieval tasks, substantially fewer eye movements were made in response to low- than to high-retrieval tasks. 
Therefore, within-task difficulty had a much smaller effect on eye movements than did the variation in retrieval requirements among tasks. The existing evidence shows that the relationship between task difficulty and EMR is not straightforward. For some tasks, such as those involving verbal processing and mental multiplication (Lorens & Darrow, 1962), task difficulty seems to increase EMR. For other tasks, such as visuo-spatial questions, difficulty seems to have no effect. And for some tasks, especially those involving sustained attention to continuous streams of stimuli (May, Kennedy, Williams, Dunlap, & Brannan, 1990), increased difficulty may be associated with decreased EMR. Therefore, while difficulty may play a role in ocular activation, its effect seems to be modulated by other factors. The possibility that having to speak could increase arousal, and consequently the rate of eye movements, was examined by Hiscock and Bergstrom (1981). They instructed their participants to respond to verbal and spatial questions either by providing a verbal response or by formulating a covert response and saying "okay" when the response was completed. The EMR was higher for verbal than for spatial questions, and it was higher when a full response was given regardless of the question type. It was therefore proposed that speaking increases arousal, which in turn increases eye movements. However, Ehrlichman and Barrett (1983) found that having to respond to questions with multiple sentences or by listing four items has no general effect on EMR and no effect on the difference in EMR between verbal and spatial questions. Therefore, although speaking may have a direct and independent effect on ocular motility, it fails to explain the difference between verbal and spatial questions. Klinger, Gregoire, and Barta (1973) reexamined the questions addressed by the early work on somatic markers of mental activity (e.g., Singer & Antrobus, 1965).
They analyzed the effect of imaging, suppression, concentration, search, and choice on physiological

Eye Movement: Theory, Interpretation, and Disorders : Theory, Interpretation, and Disorders, Nova Science Publishers, Incorporated, 2010.


Dragana Micic and Howard Ehrlichman

correlates of mental activity such as rapid eye movements, alpha activity and tonic HR. In the imaging task, participants were required to imagine a person of their choice, keeping their mind focused on that image as best as possible, while in the thought suppression tasks they were to inhibit the thoughts of that person. In the high concentration tasks, participants solved moderately difficult math problems, performed phonological fluency tasks and composed anagrams. In the low concentration tasks, they counted by twos and performed addition of two two-digit numbers. The search task involved mental search for three to four things that satisfied a given criterion (e.g., three or four different items of clothing the participant would like to buy with $100). The choice task required participants to choose a preferred activity from a set of four items (e.g., 'lead a community', 'sing', 'write a diary', or 'do some sketching'). High EMR was found in high concentration and choice tasks, and low EMR was reported during imagining, low concentration, suppression, and search. Tonic HR accelerated in high and low concentration tasks, tasks that required complex or continuous processing and verbalization, and decelerated in imaging, suppression and search, tasks that required intensive processing and no verbalization. Tonic HR remained relatively unchanged in the choice task. High HR was positively correlated with EMR in all tasks except the low concentration arithmetic tasks and the high concentration problem solving tasks. Despite some agreement in tonic HR and EMR, the findings of the Klinger et al. (1973) study were interpreted as evidence that cardiac and ocular activities are generally dissociated. However, we note that tasks used in this study varied both in the need for verbalization during processing and in aspects of processing such as complexity, continuity, and intensity.
While all tasks that accelerated HR required verbalization, some required complex and some continuous processing. The finding of high HR in tasks requiring verbalization is in agreement with evidence that the need for verbalization may have an effect on general motor activity but not on task-related EMR (Ehrlichman & Barrett, 1983). Accordingly, the finding of low EMR in tasks that required verbalization and continuous processing, such as counting by twos, is consistent with evidence that continuous processing tends to be associated with low EMR (Ehrlichman et al., 2007). This serves as additional evidence that task-determined cognitive activity affects saccadic activity more consistently than it does ongoing motor activity. However, when tasks differ only in terms of processing requirements, there seems to be more agreement between HR and EMR. For example, Lacroix and Comper (1979) reported slower HR in imagery questions, which are associated with low EMR, and faster HR in verbal-linguistic questions, which are associated with high EMR. We note the presence of that same pattern in the Klinger et al. study: low HR and EMR in imagery and suppression, and high HR and EMR in phonological fluency and creation of anagrams. The similarity between EMR and HR has been related to the effect of a central neural mechanism assumed to regulate the attentive state and to support both oculomotor and cardiac functions (Antrobus, 1973). To account for the cardiac pattern in the three to five seconds prior to a response, Lacey and Lacey (1964) put forth the intake-rejection hypothesis, in which they relate the rate of cardiac discharge to attention. According to this hypothesis, when a person focuses on taking in an external stimulus, the heart decelerates, and when a person focuses internally and rejects external stimuli, the heart accelerates.
We note that while emphasizing the role of intention in detecting and responding to external stimuli, Lacey and Lacey recognized that the need for internal focus and rejection of external stimuli can be modulated by the processing requirements of the cognitive task to be performed (Druckman & Bjork, 1991, p. 220). For example, solving a mathematical problem would require


internal focus and rejection of external stimuli and would be accompanied by high HR, as shown in the Klinger et al. (1973) study. Because the arousal hypothesis is based on the assumptions that ocular motility decreases and increases as a function of arousal, and that task effects are non-specific and thus should be evident in various physiological markers of activation, it would be reasonable to expect that task requirements influence general motor activity of the eye region, affecting eye movements and blinks equally. However, this prediction has been met with equivocal empirical support. Antrobus et al. (1964) found significantly higher rates of eye movements and blinks in active thinking ('let your thoughts race') than in passive thinking ('let your thoughts drift' and 'let your mind go blank'). Decreases in blinking have been found when mental load is high, such as in a mental arithmetic task involving addition of series of two-digit numbers (Holland & Tarlow, 1972), during concentrated mental activity such as silent counting (Holland & Tarlow, 1971), and when people are generating a wish (Antrobus et al., 1964). High rates of blinks were found when mental load was low, such as in a mental arithmetic task requiring addition of series of two-digit numbers and zeros (Holland & Tarlow, 1972), during emotional excitement (Collins, 1962), and while suppressing a wish (Antrobus et al., 1964). However, Ehrlichman et al. (2007) found no evidence that the rate of blinks was related to the rate of eye movements elicited in response to various requirements for searching through LTM. While high retrieval tasks produced significantly higher EMR (M = 0.88) than low retrieval tasks (M = 0.32), the two types of tasks were accompanied by almost identical rates of blinks (M high retrieval = 0.57; M low retrieval = 0.55). This finding of Ehrlichman et al.
(2007) provided evidence that the effect of tasks was specific to eye movements and that changes in the rate of eye movements are unlikely to reflect the effect of general arousal. Although ocular motility may be affected by the state of arousal of an individual engaged in cognitive activity, the empirical evidence suggests the presence of at least one additional factor in this relationship. By factoring in that currently unknown variable, the arousal hypothesis could possibly gain the predictive specificity that the variability of EMR across various dimensions of cognitive processing (e.g., cognitive modality, type of stimuli, retrieval requirements) calls for. Relating general arousal to neural mechanisms that modify oculomotor control in various cognitive tasks, without a mediating cognitive process and its neuroanatomical substrate, cannot be accomplished with the existing arousal account of NVEMs. However, it is also possible, as suggested by Antrobus (1973), that increased physiological arousal may be related to all cognitive processes in which the handling of information is dynamic, both in terms of operations and cognitive content, in a way that makes them empirically indistinguishable.

The Cognitive Change Hypothesis

In contrast to the arousal hypothesis, which considers eye movements a non-specific concomitant of ongoing cognitive activity, the cognitive change hypothesis (Antrobus, 1973) proposes that the frequency of eye movements reflects cognitive change present in the ongoing cognitive activity. Cognitive change refers to shifts in cognitive operations (Ehrlichman & Barrett, 1983). This hypothesis is based on the assumption of an analogy between the rate of change in internal content of operations and the rate of external sampling, and as such it


predicts high EMR in tasks requiring multiple processes performed in sequence and low EMR in tasks requiring only a few operations or operations that could be performed in parallel. Antrobus (1973) provided empirical support for the cognitive change hypothesis. He detected high EMR in tasks involving high cognitive change such as moving imagery ('visualize a tennis match'), rapid shifting ('let your thoughts race') and mind wandering ('let your thoughts drift'), and minimal saccadic activity in two tasks involving minimal cognitive change: concentration ('concentrate on one specific thing') and mind blank ('let your mind go blank'). Tasks involving static imagery, concentration and blank mind were accompanied by approximately one-half the rate of the saccadic eye movements found in the high cognitive change cluster of tasks. Although Antrobus (1973) found both high and low rates of eye movements, he focused only on the low EMR and proposed that the mechanism behind low EMR was eye movement suppression executed as part of general motor inhibition, where the central operator inhibits any process, including eye movements, that might interfere with the central processing function. According to Antrobus, saccadic eye movements represent points in time at which new information, either from the environment or from LTM, enters the central processor. Because the capacity of the processor as a mediator between cognitive change and eye movements is limited, the cognitive change model can only account for the low frequency of eye movements. We note that if the cognitive change model is modified to directly relate the rate of cognitive change to the rate of eye movements, this model could explain both increases and decreases in EMR from the baseline without a mediating mechanism such as motor inhibition (Ehrlichman & Barrett, 1983).
The findings of high EMR during generation of words based on the letters in the stimulus word (Andreassi, 1973), mental multiplication (Lorens & Darrow, 1962), and problem solving and making choices of preferred activity (Klinger et al., 1973) would fit the modified cognitive change model perfectly. Cognitive change also seems applicable to the finding that suppression of a wish can be accompanied by a high rate of saccades (Antrobus et al., 1964; Singer & Antrobus, 1965). Participants in the Singer and Antrobus (1965) study who were not instructed how to suppress their fantasy reported that they suppressed it by "thinking of other things". We note that "thinking of other things" may be similar to the active thinking tasks (i.e., rapid shifting and mind wandering) used in the Antrobus (1973) study. Although "thinking of other things" and active thinking might be associated with a degree of increase in arousal, it is highly likely that frequent shifting of mental content is an intrinsic quality of both processes. If that were the case, the high rate of eye movements during image suppression would be explained better by the cognitive change hypothesis than by the arousal hypothesis. Active processes such as mind wandering, rapid shifting and "thinking of other things" are likely to operate on information retrieved from LTM. In undirected, unconstrained search that would occur after instructions such as 'let your thoughts race' or 'let your mind wander', high rates of cognitive change might reflect the search for and retrieval of a variety of information from numerous locations in LTM. Ehrlichman and Barrett (1983) proposed that high rates of sampling of operations or memory locations, typical of random, unconstrained retrieval, would elicit high EMR, while more constrained or limited search would be associated with low EMR.
According to this view, the bidirectional departure of the EMR of verbal and spatial tasks from the baseline EMR could be understood not in terms of visual imagery or verbal processing but in terms of the amount of cognitive change inherent to those tasks. Spatial tasks requiring generation of a visual image supposedly require fewer shifts in content and


fewer operations than do verbal questions such as interpreting a proverb, which relies on retrieval and selection of information from LTM. However, although the cognitive change hypothesis has impressive descriptive power regarding the relationship between various tasks and ocular motility, it is silent with regard to the mechanisms that directly link the rate of eye movements to the rate of internal processes, and therefore it cannot explain how the relationship between saccades and cognition comes to exist.

Searching for Information: Constraint Hypothesis and the Long-Term Memory Model


Constraint Hypothesis

The constraint hypothesis proposes that the eyes move as part of an orienting response (Hiscock & Bergstrom, 1981) and are modulated by the specific processing requirements of tasks. According to this hypothesis, tasks requiring constrained processing are associated with low EMR, while tasks requiring unconstrained processing are associated with high EMR. This relationship occurs because focusing attention on a limited amount of information may have an inhibitory effect on ocular activity (Antrobus, 1973), while accessing information from a broader base during extensive searching through LTM may have a facilitatory effect on ocular motility (Ehrlichman & Barrett, 1983). Like Antrobus (1973) and Ehrlichman and Barrett (1983), Bergstrom and Hiscock (1988) proposed that visuospatial and verbal tasks differ not only in cognitive modality but also in the extent of cognitive processing inherent to those tasks. While Antrobus proposed that cognitive modality varied along the dimension of cognitive change, Bergstrom and Hiscock noted that the rate of cognitive change tended to be low during reduced search through LTM. They suggested that reduced LTM search could occur while answering visuo-spatial questions, since in such questions all components of the answer would be contained within the visual image once it was formed. In contrast to the spatial questions used in these types of studies, the verbal questions required extensive searching through a large amount of potentially relevant information stored in memory in order to select information appropriate for the answer. Bergstrom and Hiscock (1988) tested the constraint hypothesis with a set of tasks that systematically manipulated the processing format and imagery requirements of questions.
Manipulation of the degree to which questions included visual imagery produced three categories of questions: low-imagery (e.g., "Name a word that rhymes with the following word: liquid."), moderate-imagery (e.g., "Name a four-letter word that begins with the letter r."), and high-imagery questions (e.g., "Name two printed capital letters that contain 90 degree angles."). Within each category, questions were subdivided according to the amount of information to be considered during problem solving into constrained questions (e.g., "How many vowels are present in the following word: intransigent?") and unconstrained questions (e.g., "Name a five-letter word with three consonants and two vowels."). High and low constraint experimental questions were compared to verbal (i.e., proverb interpretation) and spatial (i.e., spatial orientation) questions traditionally used in eye movement research (i.e., control questions).


Spatial questions have consistently been found to produce significantly fewer eye movements than verbal questions, and that finding has been attributed to the visual imagery demands of those tasks. The main goal of Bergstrom and Hiscock (1988) was to examine whether the rate of eye movements reflects the requirements for visual imagery or the extent of searching for information pertinent to the answer. The results of their study were consistent with those of the studies employing traditional verbal and spatial questions. Significantly higher EMR was found in low-imagery (M EMR = 0.31) than in moderate- (M EMR = 0.20) and high-imagery (M EMR = 0.20) questions. The EMR was significantly higher for unconstrained (M EMR = 0.35) than for constrained questions (M EMR = 0.13). The effect of unconstrained questions was most prominent in the low- (M EMR = 0.42) and moderate-imagery (M EMR = 0.31) categories of questions and was absent in the questions high in their requirement for imagery processing. The authors suggested that within the high-imagery category it may be difficult to create low-constraint tasks. Comparisons of question, reflection and response periods revealed main effects of imagery (high versus moderate versus low), constraint (constrained versus unconstrained) and epoch (question versus reflection versus response), and significant interactions. The effect of imagery was significant for the reflection (i.e., two seconds after the end of the question) and response (i.e., one second before to one second after the beginning of the response) epochs but not for question presentation. The main effect of epoch was significant: the mean EMR increased from question presentation to the response period for both test and control questions. The mean EMR was significantly lower for constrained than for unconstrained questions, and for spatial control questions than for verbal control questions, but only during the reflection phase.
The difference between visual and spatial questions remained constant across periods. Constrained questions were rated less difficult than unconstrained questions in the low- and high-imagery categories, while the control questions did not differ in difficulty ratings. Mean imagery ratings differed for all three imagery categories and in both constraint categories, thus validating those categories of questions. The findings of Bergstrom and Hiscock (1988) provided strong evidence that differences in EMR may be better explained in terms of the extent of processing required by tasks than in terms of task requirements for visual thinking. The mean EMR of constrained questions was significantly lower than the mean EMR of unconstrained questions, and that difference remained present even after controlling for difficulty and imagery ratings. The processing approach offered by the constraint hypothesis could explain much of the variability present in the empirical evidence. The ocular quiescence typical of tasks with a strong imagery component does not reflect competition for resources but the level of constraint inherent to imagery tasks in general: visual tasks require processing of information contained within the visual scene created according to task directions. Consequently, any variability within the imagery category of tasks (e.g., static versus kinetic images) reflects the constraint level of the specific task (e.g., high constraint, low constraint, respectively). Having to consider limited amounts of information can also explain the ocular quiescence found in auditory imagery (Hiscock & Bergstrom, 1988; Weitzenhoffer & Brockmeier, 1970) and auditory vigilance tasks (e.g., Ehrlichman et al., 2007; May et al., 1990). Similarly, the need to access large amounts of information and conduct extensive searching through LTM in order to answer a question can account for the oculomotor facilitation typical of tasks requiring verbal processing.
The constraint hypothesis proposes that oculomotor activation associated with high and low processing constraint occurs as a part of an orienting response. Although this explanation


may be appropriate for the initial 2-second epoch of the reflection and response phases of the trial, it suggests mediation of the effect of the processing requirements of tasks by arousal or attention. With the additional factor, the constraint hypothesis gains parsimony, since it can account for both high and low rates of eye movements irrespective of the sensory modality. However, a mediating mechanism may be masking a possible direct effect of cognitive processing on ocular motility. Indeed, the constraint hypothesis offers evidence that some dimension of cognitive processing other than visual thinking can affect ocular motility, but it fails to explain the relationship between cognitive processing and ocular activity since the concept of constraint is confounded with visual imagery. High-imagery tasks were also highly constrained tasks: in questions with a high demand for processing of a visual image, the scope of sampling is limited by the image itself, once created. The confounding effect of visual imagery accounts for two findings that were inconsistent with the prediction that low EMR occurs with the increase in constraint and not with the increase in requirements for visual imagery. The constraint effect was not significant for high-imagery questions (M EMR constrained = 0.16; M EMR unconstrained = 0.24), and different levels of imagery had a significant effect on unconstrained questions (M EMR low = 0.49; M EMR moderate = 0.31; M EMR high = 0.24). It is important to note that both the questions that varied along the imagery and constraint dimensions (i.e., experimental questions) and the questions that varied along the verbal-spatial dimension (i.e., control questions) differed in their retrieval requirements. All constrained questions and visuo-spatial control questions probed WM, and all unconstrained and verbal control questions probed LTM.
Constrained high-imagery questions asked for various operations to be performed on images: counting physical properties, determining spatial orientation, performing rotation and problem solving. To be completed, such questions required manipulation of the material in WM. In contrast, unconstrained high-imagery questions required reporting of LTM items with specified physical properties: half required naming a common object, and half required naming a letter of the alphabet. These two subsets of unconstrained questions were actually considered to vary in the degree of constraint, since there are only 26 letters in the alphabet but an indefinite number of objects to report, and as such they were thought to be the main source of confounding and inconsistencies in the findings. However, the two subsets of tasks really differed in terms of their retrieval requirements. While reporting objects from memory is a high retrieval task, reporting letters from the alphabet is a low retrieval task (cf. Ehrlichman et al., 2007). Since all low constraint tasks required searching through LTM and were associated with high EMR, and all high constraint tasks required accessing information in WM and were, without exception, associated with a significantly lower EMR, the findings of Bergstrom and Hiscock (1988) are more consistent with the hypothesis that ocular motility reflects the retrieval requirements of tasks than with the proposition that saccadic frequency conveys the rate of information sampling.

The Long-Term Memory Hypothesis

While Antrobus (1973) and Bergstrom and Hiscock (1988) noted that low constraint, or cognitive change, may be related to reduced search through LTM, Ehrlichman et al. (2007) proposed that search through LTM might be the primary modulator of NVEMs. The LTM


hypothesis posits that the rate of eye movements reflects the degree to which processing of cognitive tasks requires searching through LTM. The hypothesis is founded on the assumption that LTM retrieval processes and short-term memory (STM) processes (e.g., maintenance, manipulation) converge on a common temporary store capable of communicating with LTM and STM, and of operating on information in accord with the processing requirements of tasks. A temporary store that satisfies those requirements is the episodic buffer (Baddeley, 2000) of WM (Baddeley & Hitch, 1974). Ocular motility is proposed to reflect the confluence of the LTM and STM operations: high EMR reflecting search through LTM, and low EMR reflecting maintenance of information within the episodic buffer (Ehrlichman et al., 2007). The LTM hypothesis suggests that maintaining information in the episodic buffer would be associated with low ocular motility dominated by gaze fixations, as seen in auditory vigilance tasks (e.g., Ehrlichman et al., 2007) and visuospatial tasks (Ehrlichman et al., 1974; Weiner & Ehrlichman, 1976). In contrast to maintenance of information in auditory vigilance tasks, searching for and bringing information into the buffer from LTM, as required by semantic memory and episodic memory tasks, was proposed to be associated with increased ocular motility. Intermediate levels of ocular activity were expected to accompany tasks requiring both maintenance and retrieval, such as rote memory and WM tasks. It was suggested that the EMR of such tasks may be more variable than that of tasks requiring only maintenance or only LTM search, because STM is vulnerable to interference and loss of information (Baddeley, 2000). Ehrlichman et al. (2007) tested the LTM hypothesis in three experiments using a set of tasks manipulating retrieval requirements (high versus low) and task difficulty (high versus low).
Manipulation of difficulty served as a test of the arousal hypothesis, one of the alternatives to the LTM hypothesis. The second alternative, the interference hypothesis, was tested by manipulating imagery requirements (auditory versus visual) and with eyes closed in Experiment 3. The tasks varied across experiments to accommodate the specific goals of each experiment. The goal of Experiment 1 was to examine the effect of LTM search on saccadic frequency and whether or not that effect may be modulated by difficulty and/or imagery. Experiment 2 aimed to examine the possibility that the LTM effect might be an artifact of verbalization, and the possibility that the effect is sensitive to WM and the accessibility of items in LTM. Experiment 3 examined whether the pattern of ocular motility found in the two previous experiments was preserved when the eyes are closed, and examined the oculomotor response to very easy tasks, WM tasks, and LTM tasks tapping into episodic and semantic memory. Data for all three experiments were collected by means of a video camera connected to a computer. The mean EMR of high-retrieval tasks was significantly higher than the mean EMR of the low-retrieval tasks in all three experiments: the effect sizes measured with eta-squared were very large, ranging from 0.74 to 0.96. In Experiment 1, the mean EMR of low-retrieval tasks ranged from 0.14 for the auditory vigilance task to 0.67 for the number-letter sequencing task, while the EMR of high-retrieval tasks was similarly high for both information questions and a task asking for production of synonyms (M = 0.93; M = 1.01, respectively). The cluster of low-retrieval tasks was much less homogeneous: the vigilance task elicited the lowest EMR (M = 0.14), auditory and visual versions of the alphabet task elicited almost identical EMRs (M auditory alphabet task = 0.44; M visual alphabet task = 0.46), and the number-letter sequencing task elicited the highest EMR (M = 0.67).
There was no evidence that difficulty or imagery had an effect on EMR in low and high retrieval tasks.
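The EMR values reported above are rates: eye movements per second within a task epoch. As a minimal sketch of how such a measure could be computed from manually coded recordings (the coding procedure, task names, epoch boundaries, and timestamps below are invented for illustration, not taken from the original studies):

```python
# Hypothetical illustration: computing eye movement rate (EMR, movements per
# second) for each task epoch from manually coded saccade onset times.
# Task names, epoch boundaries, and timestamps are invented for the example.

def emr(saccade_onsets, epoch_start, epoch_end):
    """Eye movements per second within [epoch_start, epoch_end)."""
    duration = epoch_end - epoch_start
    count = sum(epoch_start <= t < epoch_end for t in saccade_onsets)
    return count / duration

# Coded onsets (seconds from recording start) for two 30-second task epochs.
tasks = {
    "auditory_vigilance": ([12.1, 19.4, 27.0, 33.8], 10.0, 40.0),
    "synonym_generation": ([11.0, 12.5, 14.1, 16.0, 18.2, 21.7,
                            24.3, 27.9, 30.5, 33.0, 36.4, 39.1], 10.0, 40.0),
}

for name, (onsets, start, end) in tasks.items():
    print(f"{name}: EMR = {emr(onsets, start, end):.2f}")
```

On these invented data, the vigilance-like epoch yields a much lower rate than the fluency-like epoch, mirroring the direction (though not the exact values) of the contrasts reported in the text.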


In Experiment 2, the mean EMR ranged from 0.38 for the auditory vigilance with shadowing task to 0.89 for the information fluency task. The mean EMR of the rote sequencing task (0.53) was significantly lower than the mean of the high-retrieval tasks but was not significantly different from the mental alphabet (M = 0.40) and auditory vigilance (M = 0.38) tasks, even though the other tasks were rated as more difficult. The mean EMR of the WM and rote sequencing tasks was 0.44, supporting the idea that tasks requiring both retrieval and maintenance would show an intermediate rate of ocular activation. This set of findings reinforced the notion that LTM search may be the primary catalyst of the change in the rate of eye movements in non-visual cognition. In Experiment 3, separate analyses were conducted for the eyes-open and eyes-closed conditions. In the eyes-open condition, the highest number of eye movements was observed in the semantic and episodic memory tasks (M EMR semantic = 1.40; M EMR episodic = 1.41).1 The lowest EMR was observed in the counting task (M = 0.46). The mean EMR of the delayed repetition tasks was 0.89. The two levels of difficulty in this task produced comparable EMR (M 3-word = 0.92; M 5-word = 0.84) despite the five-word condition being rated as significantly more difficult. The EMR of a no-task waiting period fell in between the EMR of the high and low retrieval tasks (M = 0.95). In the eyes-closed condition, the significant task effect was carried by the difference between the counting task (M EMR = 0.27) and the semantic memory task (M EMR = 0.45). The only factor consistently influencing the rate of eye movements was LTM retrieval; neither visual imagery, nor difficulty, nor requirements for overt verbal responding were related to between-task differences in EMR. The Ehrlichman et al. (2007) study provided strong evidence that ocular activity in non-visual cognitive tasks may be best understood in terms of the degree to which tasks require retrieval from LTM.
Over three experiments, relatively low-retrieval tasks produced about half the EMR (overall mean = 0.52) of relatively high-retrieval tasks (overall mean = 1.08), with very large effect sizes (mean d = 2.21; mean η2 = .89). According to the memory hypothesis, this difference in saccadic frequency is a direct consequence of searching through LTM in high-retrieval tasks, a process absent in low-retrieval tasks. To account for the direct effect of a mental function like searching through LTM on saccadic activity, Ehrlichman et al. proposed that the oculomotor and memory systems communicate through a neuroanatomical link. Because important evidence of spontaneous saccadic activity in non-visual cognition, and of possible direct communication between the oculomotor and cognitive systems, comes from research conducted in darkness and with closed or covered eyes, we will discuss the most influential studies from this field of research before presenting the neuroanatomical model of NVEMs.
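As a quick arithmetic check on the "about half" claim, the two overall means reported above are the only inputs in this minimal sketch (the variable names are ours, for illustration only):

```python
# Overall mean EMRs (eye movements per second) reported across the three
# Ehrlichman et al. (2007) experiments, as cited in the text above.
low_mean = 0.52   # relatively low-retrieval tasks
high_mean = 1.08  # relatively high-retrieval tasks

ratio = low_mean / high_mean
print(f"low/high EMR ratio: {ratio:.2f}")  # -> 0.48, i.e. roughly half
```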

[1] Note that the mean EMRs in Experiment 3 were substantially higher overall than in the previous experiments. This difference most likely reflects sampling differences among studies. We have found that individual differences in overall EMR vary considerably, with some participants hardly moving their eyes at all and others moving their eyes almost constantly. It is notable that despite these large individual differences, within-subject task effects are highly robust regardless of "baseline" rates of eye movements.

Dragana Micic and Howard Ehrlichman

Eye Movements under Closed Eyelids and in Complete Darkness

Closed Eyes

Since the primary function of eye movements is to enable the processing of visual information, validating thought-related ocular activation as a perception-independent phenomenon required evidence that the eye movements found when non-visual cognitive tasks are performed with open eyes also occur in darkness and when the eyes are closed. If the phenomenon were indeed a function of endogenous cognition, the pattern of eye movements associated with non-visual cognitive tasks should be comparable under the two extreme levels of luminance and whether the eyes are open or closed. Consequently, a valid theoretical explanation of such ocular activation could be formulated solely in terms of internal cognitive processing, without regard to visual processes. There is agreement across studies that spontaneous ocular activity is significantly reduced when the eyes are closed (Ehrlichman et al., 2007; Takeda & Yoshimura, 1979; Weitzenhoffer & Brockmeier, 1970) but ceases entirely only in the state of coma (Kojima et al., 1981). Nevertheless, in contrast to the prediction of the visual interference model, eye movements continue to appear when cognitive tasks performed with eyes open are performed with closed eyes. The frequency of ocular activity under closed lids has been found to be sensitive to the attentional demands of tasks (Weitzenhoffer & Brockmeier, 1970), to processing demands such as problem solving (Andreassi, 1973), and to requirements for search through LTM (Ehrlichman et al., 2007, Experiment 3). The initial evidence that ocular activity in non-visual tasks performed with closed eyes may be related to the processing requirements of tasks was provided by Amadeo and Shagass (1963), who examined the effect of attention and hypnosis on saccadic activity during periods of cognitive activity and resting.
Although the rate of eye movements increased with attention, that finding could not be entirely explained by either attention or hypnosis, since the effect of hypnosis on EMR was found to have been modulated by ongoing cognitive activity (i.e., an arithmetic task). The only significant difference in their hypnosis experiment was found between tasks (imagery and alertness) and a no-task resting baseline, suggesting that ocular activity in non-visual tasks may reflect ongoing, task-related cognitive processes. Experiment 3 of the Ehrlichman et al. (2007) study confirmed the initial report of a task effect on the motility of closed eyes provided by Amadeo and Shagass (1963) and underscored that the difference in EMR across tasks with different retrieval requirements cannot be explained by factors external to task requirements, such as visual interference. Although overall EMR was reduced compared to the eyes-open condition, the pattern of EMR found when high- and low-retrieval tasks were performed with open eyes remained unchanged: EMR with eyes closed was high in tasks requiring search through LTM and low in tasks not requiring such a search. The persistence of the relationship between EMR and the retrieval demands of tasks under the general slowing of saccadic activity caused by closing the eyes attests to the robustness of the task effect on ocular motility and suggests that memory-related operations such as search through LTM may indeed be important for the activation of NVEMs.

Eye Movements in Non-Visual Cognition

Eye Movements in Complete Darkness and in Ganzfeld

An alternative to the eyes-closed approach to studying NVEMs involves research with eyes open in complete darkness or under a Ganzfeld. This line of research developed to satisfy some of the theoretical and methodological requirements of NVEMs research: first, like the eyes-closed approach, it was to test the independence of task-induced eye movements from external visual information, and second, it was to clear the data records of noise caused by the ocular tremor typically present when the eyes are kept closed voluntarily. Although the absence of visual stimulation makes eyes open in the darkness a practical alternative to the eyes-closed condition, there is evidence that ocular activity and the corresponding brain activity may differ significantly in the two conditions. Functional MRI records reveal that keeping the eyes open in complete darkness activates oculomotor and attentional systems, while closing the eyes activates visual, somatosensory, vestibular, and auditory systems. These two patterns of brain activation linked to opening and closing of the eyes suggest two different states of mental activity: an "exteroceptive" state characterized by attention and oculomotor activity and an "interoceptive" state of imagination and multisensory activity, respectively (Marx et al., 2003). Since there is evidence that imagery activates the same structures within the sensory modality active during perception (Ganis, Thompson, & Kosslyn, 2004; Kosslyn, Ganis, & Thompson, 2001), this multisensory cortical activation is currently understood as indicating construction of mental imagery. However, early eye movement research (Totten, 1935) detected saccadic activity of eyes open in the darkness during voluntary creation of visual imagery, indicating that visual thinking may depend not on opening or closing of the eyes but on the task at hand.
When the eyes are open in darkness while performing visuo-spatial and verbal-linguistic tasks, they move at a rate comparable to that observed when such tasks are performed in a fully lit environment (Ehrlichman & Barrett, 1983). In contrast to the predictions of the interference hypothesis, the difference in EMR between verbal and spatial tasks does not disappear in the absence of environmental visual stimuli. Significantly higher EMRs continue to occur in response to verbal than to imaginal questions, strongly arguing against the visual interference explanation of the relationship between ocular behavior and non-visual cognitive activity. We note that this task-dependent difference in EMR cannot be related to the basic pattern of brain activity associated with opening or closing of the eyes in the darkness. Instead of eyes open in darkness, Singer and Antrobus (1965) used an eyes-open-but-covered approach to eliminate contamination of their EOG data with ocular tremor. They exposed their participants to a featureless field of vision (a Ganzfeld) by covering their eyes with a device made out of a ping-pong ball cut in half. Exposure to a Ganzfeld eliminates vision while the mind can still be used in an extremely focused way (Singer & Antrobus, 1965). To determine the degree to which eye movements related to thought suppression are internal or dependent on an external visual response, participants were instructed to either create or suppress a fantasy of a person while experiencing either the Ganzfeld or the normal visual environment. Significantly higher ocular activity during suppression than during creation of fantasies was found to be independent of the visual condition, arguing against the imagery scanning hypothesis of thought-related eye movements, which, as previously discussed, attempted to relate creation of visual imagery to saccadic frequency.
Taken together, the closed-eyes research and the research employing either a Ganzfeld or eyes open in the dark provide essential evidence that the frequency of ocular activation during various cognitive tasks does not depend on visual processing of the environment or on the visual aspects of the tasks. In toto, studies exploring non-visual ocular activity have provided a compelling body of evidence that the eyes move in a reliable and predictable way depending on the processing requirements of tasks. Irrespective of the level of illumination and whether the eyes are open or closed, the eyes move at a high rate when tasks require searching through LTM and at a low rate when tasks do not require such a search. We propose that this dichotomy, explicit in tasks designed to test the effect of retrieval requirements on saccadic frequency, is implicit in verbal-linguistic and visuo-spatial tasks and is the underlying cause of the difference in EMR consistently found in studies using that paradigm (e.g., Antrobus et al., 1964; Ehrlichman, 1981; Hiscock & Bergstrom, 1981). While answering verbal-linguistic questions requires searching through LTM, answering visuo-spatial questions does not, as it is usually based on examination and manipulation of an image once it is formed. To explain the effect of the retrieval requirements of cognitive tasks on NVEMs, Ehrlichman et al. (2007) suggested that searching for information within LTM may utilize some of the same neural circuitry as searching for visual information, and that the frequency of saccadic activation in non-visual tasks is related to the memory processes of maintenance and retrieval. In contrast to the rough isomorphism between the systems of visual imagery and visual perception postulated by quasi-visual accounts of saccadic activation in non-visual cognition, Ehrlichman et al. proposed an isomorphism between processes involved in visual perception and memory (i.e., search and maintenance), and considered this functional similarity a probable basis for the evolutionary development of a neuroanatomical link between the memory system and the saccadic portion of the oculomotor system.

The Neuroanatomical Model

The anatomical model described by Ehrlichman et al. (2007) suggests that activation of frontal, parietal, and temporal memory-related cortices can affect the temporal distribution and frequency of saccade generation via direct and indirect neural connections with the superior colliculus. The model focuses on the indirect connection mediated by transmodal thalamic nuclei, which receive signals from the parahippocampal cortex and project directly to the motor portion of the superior colliculus and to prefrontal cortical areas involved in the control of saccade generation (Goldberg, 2000; Munoz & Everling, 2004). The superior colliculus plays a central role in this model: it is a mesencephalic convergence center whose intermediate layer harbors tonic fixation neurons and saccadic burst neurons (Munoz & Everling, 2004), and it relays information about where and when saccades should occur (Munoz, 2002) to the brain-stem gaze circuitry (Sparks, 2002). Functional specialization within the superior colliculus provides for the idea that activation of the fixation zone synchronizes with maintenance of information in WM and activation of the saccadic zone synchronizes with the search for information in LTM. Despite being based on ample evidence of neural communication among cortical areas implicated in memory, such as the medial temporal lobe (Schacter & Wagner, 1999), and areas of prefrontal cortex related to the control of saccadic eye movements (Cabeza, Dolcos, Graham, & Nyberg, 2002; Kapur et al., 1994; Moscovitch & Winocur, 2002), the original model of Ehrlichman et al. (2007) cannot account for the gaze dynamics seen in tasks with high and low retrieval requirements or for the proposed synchronization of gaze patterns and memory functions. While the thalamic connection could impinge on saccadic frequency via serial coupling with cortical areas and mesencephalic structures of motor control, a more direct connection of the superior colliculus with cortical areas involved in memory and oculomotor control would provide a better account of task-related patterns of oculomotor activity. Micic (in preparation) improved the explanatory power of the model by considering a connection capable of integrating the outputs of the oculomotor and memory systems into a single command for the gaze control circuitry in the superior colliculus. The new version of the model, roughly outlined by Micic et al. (submitted), proposes that integration of the output of frontal oculomotor and frontal and temporal memory cortices occurs in the basal ganglia, and that this conglomerate of nuclei acts as the central link between the two systems, synchronizing ocular motility and memory-related processes. Based on the premise that the memory functions of search and maintenance are to a great degree mutually exclusive, and on the fact that the eyes are either stationary or mobile at any point in time, the model proposes that task-determined outputs of the neural circuitry supporting these complementary memory functions combine within the basal ganglia into a single motor command to either initiate a saccadic eye movement or render the eyes immobile. Central to this model is the idea that intervals of frequent saccadic eye movements and of ocular quiescence reflect the degree to which a task activates the memory functions of LTM search and maintenance, respectively. In a complementary memory function scenario, one gaze tendency will dominate the pattern of ocular activity during activation of the memory function intrinsic to the task at hand. As indicated by empirical evidence, in tasks which require searching through long-term storage of information (e.g., fluency tasks), gaze patterns are dominated by saccadic eye movements, while in WM tasks (e.g., the n-back task), eye movements are infrequent and separated by long periods of ocular quiescence (e.g., Ehrlichman, Weiner, & Baker, 1974; Ehrlichman & Weiner, 1976). According to current knowledge of the anatomy and physiology of the brain, the basal ganglia is the most plausible candidate for a neural system capable of regulating gaze patterns by communicating the intrinsic processes of WM and LTM to a common saccadic effector system. The basal ganglia is a general motor control system that uses inhibition and disinhibition to modify end-point motor output. The effect of the basal ganglia on the saccadic portion of the superior colliculus can be established via three known parallel pathways of control: the direct pathway from the caudate nucleus to the substantia nigra pars reticulata, the indirect pathway connecting the caudate nucleus and substantia nigra pars reticulata via the globus pallidus pars externa and/or the subthalamic nucleus, and the hyperdirect pathway from the subthalamic nucleus to the substantia nigra pars reticulata (Hikosaka, 2007). Support for the idea that oculomotor activity resulting from signals sent from the basal ganglia to the superior colliculus may reflect memory functions, such as search through LTM and maintenance in the episodic buffer, comes from anatomical evidence that neural projections from cortical regions implicated in both WM and LTM converge on the striatum (nucleus caudatus and globus pallidus). Areas implicated in WM, such as the dorso-lateral prefrontal cortex and ventro-lateral prefrontal cortex (Finch, 1996; Postuma & Dagher, 2006), and areas involved in LTM retrieval, including the entorhinal, perirhinal, and parahippocampal cortices (Finch, 1996; Suzuki, 1996), project directly to the striatum.
Furthermore, there is evidence of interconnections between cortical regions involved in WM and LTM (Takahashi, Ohki, & Kim, 2007), including direct (e.g., Goldman-Rakic, Selemon, & Schwartz, 1984) and indirect (e.g., Fernandez & Tendolkar, 2001) connections between the dorso-lateral prefrontal cortex and the parahippocampal cortex and reciprocal connections between the two via the caudate nucleus (Poldrack & Packard, 2003) and thalamic nuclei (Suzuki, 1996). The parahippocampal cortex is also reciprocally connected with the posterior parietal cortex (Moscovitch & Winocur, 2002). These two posterior cortical areas, together with the dorso-lateral prefrontal cortex and the inferior frontal cortex, are currently considered the neural substrate of the episodic buffer (Naghavi & Nyberg, 2005). Anatomical support for the idea that the neural output of the LTM and WM circuitry can be 'translated' into NVGPs is further substantiated by physiological evidence that saccades may be related to mnemonic processes (e.g., Sobotka, Zuo, & Ringo, 2002). Physiological evidence also supports the idea that search and maintenance of information are complementary functions by demonstrating the competitive nature of the relationship between portions of the medial temporal lobe and the striatum (e.g., Moody, Bookheimer, Vanek, & Knowlton, 2004). In the rat brain, stimulation of entorhinal neurons (Finch, Gigg, Tan, & Kosoyan, 1995) has an inhibitory effect on the striatum, while striatal activation leads to hippocampal inhibition and vice versa (Gabrieli, Brewer, & Poldrack, 1998; Poldrack & Packard, 2003). Based on the pattern of functional coupling of cortical areas involved in memory and oculomotor control, and on the evidence of a competitive relationship between cortical regions involved in memory, the model proposes striatal inhibition and disinhibition as the most probable mechanism of synchronization of oculomotor activity and memory functions. While striatal inhibition has the capacity to express inactivation of the LTM network through ocular quiescence, striatal disinhibition could express activation of the LTM network through saccadic facilitation. According to the model, activation of the search function, and therefore of the medial temporal lobe, in high-retrieval tasks would lead to inhibition of the striatum, which would produce disinhibition of the saccadic neurons in the superior colliculus and, consequently, generation of saccades. In contrast, inactivity of the medial temporal lobe in WM tasks would lead to striatal inhibition of the saccadic neurons in the superior colliculus and the resultant ocular quiescence. In this scenario, eye movements co-occur with attempts to retrieve items from LTM, while gaze fixations are associated with processes that include maintenance and manipulation of information in WM. The model can also account for the extreme slowing of ocular motility in low-retrieval tasks with a strong attentional component, since in those tasks saccadic suppression may be augmented through possible summation of the inhibitory input from the striatum and from neural structures involved in orienting covert attention. The prospect of memory-dependent activation of the basal ganglia pathways offers a parsimonious, anatomically and physiologically supported account of NVEMs and of the bidirectional effect of retrieval requirements on ocular motility detected in research on eye movements in non-visual cognition. Although without doubt subject to rigorous future empirical inquiry, the putative neural model of NVGPs briefly presented here is an attempt to illustrate the intuitively appealing idea that cognition and motor function may be intimately related.
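The inhibition/disinhibition chain just described can be caricatured as a two-state logic sketch. This is our illustration only: the function name and the binary on/off simplification are ours, not part of the model, and real basal ganglia signaling is graded rather than all-or-none.

```python
def gaze_state(ltm_search_active: bool) -> str:
    """Toy rendering of the proposed gating chain: MTL -> striatum -> SC."""
    # In high-retrieval tasks, medial temporal lobe (MTL) activity is said
    # to inhibit the striatum.
    striatum_active = not ltm_search_active
    # An active striatum tonically inhibits the superior colliculus (SC)
    # saccade neurons; striatal inhibition therefore disinhibits them.
    sc_saccade_neurons_active = not striatum_active
    return "frequent saccades" if sc_saccade_neurons_active else "ocular quiescence"

print(gaze_state(ltm_search_active=True))   # high-retrieval task -> frequent saccades
print(gaze_state(ltm_search_active=False))  # WM/maintenance task -> ocular quiescence
```

The double negation makes the point of the model explicit: inhibiting an inhibitor (the striatum) releases, rather than suppresses, saccade generation.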
In this model, the existing neural and behavioral evidence of the relationship between the oculomotor and memory systems converges on the proposition that gaze patterns associated with non-visual cognition should be regarded as a reflection of multi-system integration, in which components of the memory system compete and the memory and motor systems collaborate with each other.

The Memory Perspective: Theoretical Implications and Practical Applications

Several different theoretical explanations reviewed in this paper attempted to account, first, for the presence of eye movements in cognitive activities that do not seem to require them and, second, for the variability in the frequency of their occurrence in various cognitive tasks. Empirical evidence collected through different approaches to the phenomenon converges to suggest that NVEMs are indeed related to internal cognitive processes relevant to and activated by cognitive tasks (e.g., Ehrlichman, 1981; Ehrlichman et al., 2007; Bergstrom & Hiscock, 1988; Hiscock & Bergstrom, 1981; Weiner & Ehrlichman, 1976). Currently, the phenomenon of spontaneous saccadic activation in non-visual cognition is best understood as an epiphenomenon of ongoing cognition related to the retrieval requirements of cognitive tasks. However, the conclusion that NVEMs are of no functional significance to the cognitive activity in which they occur needs additional evaluation, especially since there is evidence that some visual-stimulus-independent saccadic eye movements may be functional. Miyauchi et al. (2008) reported that activation of the structures responsible for PGO waves (pontine tegmentum, ventroposterior thalamus, and primary visual cortex) starts before the occurrence of rapid eye movements and that REM sleep is accompanied by activation of basal ganglia structures (putamen) and limbic areas (anterior cingulate, parahippocampal gyrus, and amygdala). This is an important finding because it suggests that eye movements may trigger activation of memory-related areas in sleep. Similarly, a growing body of evidence indicates the same direction of influence during wakefulness. Several studies (e.g., Christman & Propper, 2001; Christman, Garvey, Propper, & Phaneuf, 2003; Parker, Relph, & Dagnall, 2008) found that a brief session of voluntary saccades facilitates episodic recognition and retrieval.
The functionality of saccadic eye movements is also suggested by students of the "looking at nothing" phenomenon, in which eye movements accompany retrieval of previously presented visual and semantic information (e.g., Richardson, Altmann, Spivey, & Hoover, 2009). Nevertheless, despite compelling evidence that the spontaneous saccades of this phenomenon are indeed related to memory, their potential functional significance for retrieval is yet to be confirmed (e.g., Ferreira, Apel, & Henderson, 2008; Hoover & Richardson, 2008). According to the existing body of knowledge, the link between thinking and spontaneous saccadic eye movements seems to be a remnant of an evolutionary history in which the existing neural milieu was functionally adapted and modified through phylogenetic development. Indeed, sharing of existing functional coupling could explain concomitant activation of a phylogenetically older system that may not be relevant for the task at hand (e.g., the visual system) alongside activation of the task-recruited, phylogenetically younger system (e.g., the memory system). Accordingly, activation of the older system in such cases is opportunistic and functionally unrelated to the cognitive tasks that triggered it via anatomical linkage with the functionally important system. Hence, as indicated by the findings of Ehrlichman (1981), Ehrlichman et al. (2007), and Micic et al. (submitted), the shared-circuitry arrangement seems to lead to parallel activation of both systems, so that various types of ocular activation appear in union with various thought processes.

Evidence that the tendency toward various patterns of ocular activation in different types of thinking is very strong (Micic et al., submitted, Experiment 3) raises the question of the consequences of cognition-related changes in ocular dynamics for concurrent visual processing. It is possible that both the eye movement suppression typically occurring when people are engaged in visual thinking (e.g., daydreaming, visual imagery) and the facilitation specific to times when people engage in conversation can hamper visual processing. This possibility makes NVGPs relevant to driving and to a major cause of traffic accidents: failure of visual scene processing. Studies of driving show evidence of changes in saccadic behavior due to non-visual cognitive workload (e.g., Nunes & Recarte, 2002; Recarte & Nunes, 2003) and report that the pattern of eye movements while conversing involves shorter fixation times, an increase in the number of eye movements needed to inspect a visual target (McCarley & Vais, 2004), as well as reduced inspection of the speedometer and the rear-view mirrors (Recarte & Nunes, 2000). Although the changes in ocular behavior described in the driving literature may be detrimental to traffic safety, studies of NVEMs suggest that being actively engaged in verbal thinking may be hazardous simply because seemingly random saccades, which tend to occur in large numbers while speaking, may direct the gaze toward points in the visual field that are not relevant to driving. Although this repositioning of the gaze may resemble the saccadic activity typical of scene evaluation, it may not provide adequate foveation and efficient visual processing of the circumstantially captured stimuli, since it is not exogenously driven. There is strong evidence that vision is suppressed during endogenously driven saccades (e.g., Wallis & Bulthoff, 2000). However, there is also evidence that vision can be affected even when there is minimal saccadic activity, such as in mental imagery (e.g., Craver-Lemley & Reeves, 1992), during which the eyes are predominantly still (e.g., Ehrlichman & Barrett, 1983). Therefore, as a mediator of the effect of cognitive processing on visual processing, non-visual ocular behavior offers fertile ground for inquiry and for the development of safe driving practices.

On a more fundamental level, we consider the possibility that NVEMs may be of importance to the medical field, where they could be used in conjunction with other clinical measures to identify cognitive function that is preserved but cannot be communicated. The current methods of assessing the level and content of consciousness lead to erroneous evaluations, misdiagnosis, and mistreatment of 40% of patients with altered states of consciousness (e.g., Andrews, Murphy, Munday, & Littlewood, 1996; Childs & Mercer, 1996). Based on the strong behavioral evidence of the association between spontaneous saccadic activity and thinking, we propose that continuous electrooculographic recording of ocular activity under closed lids may improve the diagnosis and management of non-communicative patients by providing objective and easily obtainable evidence of either endogenous or external stimulus-triggered cognitive activity. This type of record could both compensate for and complement the partial evaluation of cognitive function provided by clinical measures that rely only on responses to external stimuli at the time of evaluation. Clearly, such a proposal is speculative at this time, and further inquiry into the relationship between individual differences, different types of cognitive activity, and the rate of eye movements is necessary in order to determine whether non-visual cognition-related ocular motility could be used as a valid and reliable marker of cognitive activity in both normal and altered states of consciousness.

We close this review of the history of research on NVEMs with the hope that future studies will examine both various practical applications and underexplored aspects of the phenomenon, such as the large individual differences, possible linkage to hemisphericity, and the functional significance of both eye movements and gaze fixation. The question of functionality remains important, since there is no simple explanation as to why, in the course of evolution, a non-functional behavior (NVEMs) would intermittently be given precedence over a functional behavior (visual eye movements) despite the possibly detrimental effects of such an arrangement on sensory processing and the apparent lack of benefit for cognitive processing. One possibility is that the cost of shared pathways, negligible at the point in history at which the coupling between oculomotor and memory-related areas developed, could be efficiently reduced by means of a controlling mechanism (e.g., attention). A yet unexplored alternative is that the link between the two systems reflects not phylogenetic development in which sharing of circuitry occurs due to the isomorphism of processes used by the two systems (memory search and visual search, respectively), but the actual functional significance of the processes of the older system (visual processing) for the processes of the younger system (e.g., creation, storage, and retrieval of memory traces), in which case future explorations of NVGPs might uncover additional ways in which eye movements could reveal the inner workings and organization of the mind.
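The kind of continuous EOG record proposed above could in principle be prototyped with very simple signal processing. The sketch below is purely illustrative: the synthetic trace, sampling rate, and velocity threshold are arbitrary assumptions of ours, not clinically validated values. It marks saccade-like events where the EOG velocity crosses a threshold and converts the count into an EMR.

```python
FS = 250.0          # sampling rate in Hz (assumed)
THRESHOLD = 1000.0  # velocity threshold in uV/s (arbitrary)

def detect_saccades(eog, fs=FS, threshold=THRESHOLD):
    """Return sample indices where velocity first exceeds the threshold
    (rising edges only) -- a crude marker of saccade onsets."""
    onsets, above = [], False
    for i in range(len(eog) - 1):
        v = (eog[i + 1] - eog[i]) * fs  # first-difference velocity, uV/s
        if abs(v) > threshold and not above:
            onsets.append(i)
            above = True
        elif abs(v) <= threshold:
            above = False
    return onsets

# Synthetic closed-lid trace: steady fixation with two abrupt 100-uV steps.
eog = [0.0] * 500 + [100.0] * 500 + [0.0] * 500
onsets = detect_saccades(eog)
emr = len(onsets) / (len(eog) / FS)  # saccade-like events per second
print(f"{len(onsets)} events, EMR = {emr:.2f}/s")  # -> 2 events, EMR = 0.33/s
```

A real implementation would of course need artifact rejection (blinks, ocular tremor, drift) before any such count could be trusted, but the underlying measure, events per unit time, is exactly the EMR used throughout this chapter.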

References

Amadeo, M. N. & Shagass, C. (1963). Eye movements, attention and hypnosis. Journal of Nervous and Mental Disease, 136, 130-145.
Andreassi, J. I. (1973). Alpha and problem solving: A demonstration. Perceptual and Motor Skills, 36, 905-906.
Andrews, K., Murphy, L., Munday, R. & Littlewood, C. (1996). Misdiagnosis of the vegetative state: retrospective study in a rehabilitation unit. British Medical Journal, 313, 13-16.
Antrobus, J. S. (1973). Eye movements and nonvisual cognitive tasks. In V. Zikmund (Ed.), The oculomotor system and brain functions. London: Butterworth.
Antrobus, J. S., Antrobus, J. S. & Singer, J. L. (1964). Eye movements accompanying daydreaming, visual imagery, and thought suppression. Journal of Abnormal and Social Psychology, 69, 244-252.
Argyle, M. (1967). The psychology of interpersonal behavior. Harmondsworth, UK: Penguin.
Argyle, M. & Cook, M. (1976). Gaze and mutual gaze. Cambridge, UK: Cambridge University Press.
Argyle, M. & Dean, J. (1965). Eye contact, distance and affiliation. Sociometry, 28, 289-304.
Argyle, M. & Ingham, R. (1972). Gaze, mutual gaze, and proximity. Semiotica, 6, 32-49.
Argyle, M., Lalljee, M. & Cook, M. (1968). The effects of visibility on interaction in a dyad. Human Relations, 21, 3-17.
Argyle, M. & Kendon, A. (1967). The experimental analysis of social performance. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology: Vol. 3 (55-98). New York: Academic Press.

Eye Movement: Theory, Interpretation, and Disorders : Theory, Interpretation, and Disorders, Nova Science Publishers, Incorporated, 2010.


Dragana Micic and Howard Ehrlichman

Aserinsky, E. (1967). Physiological activity associated with segments of the rapid eye movement period. Research Publications - Association for Research in Nervous and Mental Disease, 45, 338-350.
Aserinsky, E. & Kleitman, N. (1953). Regularly occurring periods of eye motility, and concomitant phenomena, during sleep. Science, 118, 273-274.
Aserinsky, E. & Kleitman, N. (1955). Two types of ocular motility occurring in sleep. Journal of Applied Physiology, 8, 1-10.
Baddeley, A. D. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4, 417-423.
Baddeley, A. D. & Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 8, 47-89). New York: Academic Press.
Bailenson, J. N., Blascovich, J., Beall, A. C. & Loomis, J. M. (2001). Equilibrium theory revisited: Mutual gaze and personal space in virtual environments. Presence, 10, 583-598.
Bakan, P. (1969). Hypnotizability, laterality of eye movements and functional brain asymmetry. Perceptual and Motor Skills, 28, 927-932.
Barrett, J. & Ehrlichman, H. (1982). Bilateral hemispheric alpha activity during visual imagery. Neuropsychologia, 20, 703-708.
Bavelas, J. B., Coates, L. & Johnson, T. (2002). Listener responses as a collaborative process: The role of gaze. Journal of Communication, 52, 566-580.
Beattie, G. W. (1978a). Floor apportionment and gaze in conversational dyads. British Journal of Social and Clinical Psychology, 17, 7-15.
Beattie, G. W. (1978b). Sequential temporal patterns of speech and gaze in dialogue. Semiotica, 23, 29-52.
Beattie, G. W. (1981). Interruption in conversational interaction, and its relation to the sex and status of the interactants. Linguistics: An Interdisciplinary Journal of the Language Sciences, 19, 13-35.
Beattie, G. W. & Barnard, P. J. (1979). The temporal structure of natural telephone conversations (directory enquiry calls). Linguistics, 17, 213-229.
Berger, R. J. & Oswald, I. (1962). Eye movements during active and passive dreams. Science, 137, 601. doi:10.1126/science.137.3530.601
Bergstrom, K. J. & Hiscock, M. (1988). Factors influencing ocular motility during the performance of cognitive tasks. Canadian Journal of Psychology, 42, 1-23.
Bitterman, M. E. (1960). Toward a comparative psychology of learning. American Psychologist, 15, 704-712.
Brandt, S. A. & Stark, L. W. (1997). Spontaneous eye movements during visual imagery reflect the content of the visual scene. Journal of Cognitive Neuroscience, 9, 27-38.
Brooks, L. R. (1968). Spatial and verbal components of the act of recall. Canadian Journal of Psychology, 22, 349-368.
Brunner, L. J. (1979). Smiles can be back channels. Journal of Personality and Social Psychology, 37, 728-734.
Butterworth, B. (1978). Maxims for studying conversations. Semiotica, 24, 317-340.
Cabeza, R., Dolcos, F., Graham, R. & Nyberg, L. (2002). Similarities and differences in the neural correlates of episodic memory retrieval and working memory. NeuroImage, 16, 317-330.


Eye Movements in Non-Visual Cognition


Cegala, D. J., Sokuvitz, S. & Alexander, A. F. (1979). An investigation of gaze and its relation to selected verbal material. Human Communication Research, 5, 99-108.
Childs, N. L. & Mercer, W. N. (1996). Misdiagnosis of the persistent vegetative state: Misdiagnosis certainly occurs. British Medical Journal, 313, 13-16.
Christman, S. D., Garvey, K. J., Propper, R. E. & Phaneuf, K. A. (2003). Bilateral eye movements enhance the retrieval of episodic memories. Neuropsychology, 17, 221-229.
Christman, S. D. & Propper, R. E. (2001). Superior episodic memory is associated with interhemispheric processing. Neuropsychology, 15, 607-616.
Collins, W. E. (1962). Effects of mental set upon vestibular nystagmus. Journal of Experimental Psychology, 63, 191-197.
Craver-Lemley, C. & Reeves, A. (1992). How visual imagery interferes with vision. Psychological Review, 99, 633-649.
Cumming, G. D. (1978). Eye movements and visual perception. In E. C. Carterette & M. P. Friedman (Eds.), Handbook of perception (221-255). New York: Academic Press.
Day, M. E. (1964). An eye movement phenomenon relating to attention, thought and anxiety. Perceptual and Motor Skills, 19, 443-446.
De Gennaro, L. & Violani, C. (1988). Reflective lateral eye movements: Individual styles, cognitive and lateralization effects. Neuropsychologia, 26, 727-736.
Dement, W. & Kleitman, N. (1957a). Cyclic variations in EEG during sleep and their relation to eye movements, body motility and dreaming. Electroencephalography and Clinical Neurophysiology, 9, 673-690.
Dement, W. & Kleitman, N. (1957b). The relation of eye movements during sleep to dream activity: An objective method for the study of dreaming. Journal of Experimental Psychology, 53, 339-346.
Doherty-Sneddon, G., Bruce, V., Bonner, L., Longbotham, S. & Doyle, C. (2002). Development of gaze aversion as disengagement from visual information. Developmental Psychology, 38, 438-455.
Doherty-Sneddon, G. & Phelps, F. G. (2005). Gaze aversion: A response to cognitive or social difficulty? Memory and Cognition, 33, 727-733.
Doherty-Sneddon, G., Riby, D. M., Calderwood, L. & Ainsworth, L. (2009). Stuck on you: Face-to-face arousal and gaze aversion in Williams syndrome. Cognitive Neuropsychiatry, 14, 510-523.
Druckman, D. & Bjork, R. A. (Eds.). (1991). In the mind's eye. Commission on Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Duke, J. D. (1968). Lateral eye movement behavior. The Journal of General Psychology, 78, 189-195.
Efran, J. S. (1968). Effects on visual behavior of approbation from persons differing in importance. Journal of Personality and Social Psychology, 10, 21-25.
Ehrlichman, H. (1981). From gaze aversion to eye-movement suppression: An investigation of the cognitive interference explanation of gaze patterns during conversation. British Journal of Social Psychology, 20, 233-241.
Ehrlichman, H., Antrobus, J. S. & Wiener, M. S. (1985). EEG asymmetry and sleep mentation during REM and NREM. Brain and Cognition, 4, 477-485.
Ehrlichman, H. & Barrett, J. (1983). "Random" saccadic eye movements during verbal-linguistic and visual-imaginal tasks. Acta Psychologica, 53, 9-26.



Ehrlichman, H., Micic, D., Sousa, A. & Zhu, J. (2007). Looking for answers: Eye movements in non-visual cognitive tasks. Brain and Cognition, 64, 7-20.
Ehrlichman, H. & Weinberger, A. (1978). Lateral eye movements and hemispheric asymmetry: A critical review. Psychological Bulletin, 85, 1080-1101.
Ehrlichman, H., Weiner, S. L. & Baker, A. H. (1974). Effects of verbal and spatial questions on initial gaze shifts. Neuropsychologia, 12, 265-277.
Exline, R. V. (1963). Explorations in the process of person perception: Visual interaction in relationship to competition, sex, and need for affiliation. Journal of Personality, 31, 1-20.
Exline, R. V. & Messick, D. (1967). The effects of dependency and social reinforcement upon visual behavior during an interview. British Journal of Social and Clinical Psychology, 6, 256-266.
Exline, R. V. & Winters, L. C. (1965). Effects of cognitive difficulty and cognitive style upon eye to eye contact in interviews. Paper presented at the annual meeting of the Eastern Psychological Association, Philadelphia, PA.
Exline, R. V., Gray, D. & Schuette, D. (1965). Visual behavior in a dyad as affected by interview content and sex of respondent. Journal of Personality and Social Psychology, 1, 201-209.
Exline, R. V., Jones, P. & Maciorowski, K. (1977). Race, affiliation-conflict theory and mutual visual attention during conversation.
Farah, M. J. (1988). Is visual imagery really visual? Overlooked evidence from neuropsychology. Psychological Review, 95, 307-317.
Farah, M. J. (1995). The neural bases of mental imagery. In M. S. Gazzaniga (Ed.), The Cognitive Neurosciences (963-975). Cambridge, MA/London: Bradford Books, MIT Press.
Farroni, T., Csibra, G., Simion, F. & Johnson, M. (2002). Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences, USA, 99, 9602-9605.
Fernandez, G. & Tendolkar, I. (2001). Integrated brain activity in medial temporal and prefrontal areas predicts subsequent memory performance: Human declarative memory formation at the system level. Brain Research Bulletin, 55, 1-9.
Ferreira, F., Apel, J. & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12, 405-410.
Finch, D. M. (1996). Neurophysiology of converging synaptic inputs from the rat prefrontal cortex, amygdala, midline thalamus, and hippocampal formation onto single neurons of the caudate/putamen and nucleus accumbens. Hippocampus, 6, 495-512.
Finch, D. M., Gigg, J., Tan, A. M. & Kosoyan, O. P. (1995). Neurophysiology and neuropharmacology of projections from entorhinal cortex to striatum in the rat. Brain Research, 670, 233-247.
Fosse, R., Stickgold, R. & Hobson, J. A. (2004). Brain-mind states: Reciprocal variation in thoughts and hallucinations. Psychological Science, 12, 30-36.
Fulton, J. T. (2000). Processes in Animal Vision. Vision Concepts. Retrieved December 20, 2008, from http://www.4colorvision.com
Gabrieli, J. D. E., Brewer, J. B. & Poldrack, R. A. (1998). Images of medial temporal lobe functions in human learning and memory. Neurobiology of Learning and Memory, 70, 275-283.
Galin, D. & Ornstein, R. (1974). Individual differences in cognitive style: I. Reflective eye movements. Neuropsychologia, 12, 367-376.



Ganis, G., Thompson, W. L. & Kosslyn, S. M. (2004). Brain areas underlying mental imagery and visual perception: An fMRI study. Cognitive Brain Research, 20, 226-241.
Gazzaniga, M. S. (1995). Principles of human brain organization derived from split-brain studies. Neuron, 14, 217-228.
Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences, 20, 1-55.
Glenberg, A. M., Schroeder, J. L. & Robertson, D. A. (1998). Averting the gaze disengages the environment and facilitates remembering. Memory & Cognition, 26, 651-658.
Goldberg, M. E. (2000). The control of gaze. In E. R. Kandel, J. H. Schwartz & T. M. Jessell (Eds.), Principles of Neural Science (782-800). New York: McGraw-Hill.
Goldman-Eisler, F. (1967). Sequential temporal patterns and cognitive processes in speech. Acta Neurologica et Psychiatrica Belgica, 67, 841-851.
Goldman-Rakic, P. S., Selemon, L. D. & Schwartz, M. L. (1984). Dual pathways connecting the dorsolateral prefrontal cortex with the hippocampal formation and parahippocampal cortex in the rhesus monkey. Neuroscience, 12, 719-743.
Goodale, M. A. & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20-25.
Gray, P. H. (1958). Theory and evidence of imprinting in human infants. Journal of Psychology: Interdisciplinary and Applied, 46, 155-166.
Griffin, Z. M. & Oppenheimer, D. K. (2006). Speakers gaze at objects while preparing intentionally inaccurate labels for them. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 943-948.
Gur, R. E., Gur, R. C. & Harris, L. J. (1975). Cerebral activation, as measured by subjects' lateral eye movements, is influenced by experimenter location. Neuropsychologia, 13, 35-44.
Hebb, D. O. (1968). Concerning imagery. Psychological Review, 75, 466-477.
Hess, E. H. (1965). Attitude and pupil size. Scientific American, 212, 46-54.
Hikosaka, O. (2007). Basal ganglia mechanisms of reward-oriented eye movements. Annals of the New York Academy of Sciences, 1104, 229-249.
Hiscock, M. & Bergstrom, K. J. (1981). Ocular motility as an indicator of verbal and visuospatial processing. Memory and Cognition, 9, 332-338.
Holland, M. K. & Tarlow, G. (1972). Blinking and mental load. Psychological Reports, 31, 119-127.
Hong, C. C. H., Gillin, J. C., Dow, B. M., Wu, J. & Buchsbaum, M. S. (1995). Localized and lateralized cerebral glucose metabolism associated with eye movements during REM sleep and wakefulness: A positron emission tomography (PET) study. Sleep, 18, 570-580.
Hoover, M. A. & Richardson, D. C. (2008). When facts go down the rabbit hole: Contrasting features and objecthood as indexes to memory. Cognition, 108, 533-542.
Hufner, K., Stephan, T., Glasauer, S., Kalla, R., Riedel, E., Deutschlander, A., et al. (2008). Differences in saccade-evoked brain activation patterns with eyes open or closed in complete darkness. Experimental Brain Research, 186, 419-430.
Ioannides, A. A., Corsi-Cabrera, M., Fenwick, P. B. C., del Rio Portilla, Y., Laskaris, N., Khurshudyan, A., et al. (2004). MEG tomography of human cortex and brainstem activity in waking and REM sleep saccades. Cerebral Cortex, 14, 56-72.
Jacobs, L., Feldman, M. & Bender, M. B. (1972). Are the eye movements of dreaming sleep related to visual images of dreams? Psychophysiology, 9, 393-401.



Janssen, W. H. & Nodine, C. F. (1974). Eye movements and visual imagery in free recall. Acta Psychologica, 38, 267-276.
Kapur, S., Craik, F. I. M., Tulving, E., Wilson, A. A., Houle, S. & Brown, G. M. (1994). Neuroanatomical correlates of encoding in episodic memory: Levels of processing effect. Proceedings of the National Academy of Sciences USA, 91, 2008-2011.
Kastner, S., Pinsk, M. A., De Weerd, P., Desimone, R. & Ungerleider, L. G. (1999). Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron, 22, 751-761.
Kemp, R., McManus, C. & Pigott, T. (1990). Sensitivity to the displacement of facial features in negative and inverted images. Perception, 19, 531-543.
Kendon, A. (1967). Some functions of gaze direction in social interaction. Acta Psychologica, 26, 22-63.
Kinsbourne, M. (1972). Eye and head turning indicates cerebral lateralization. Science, 176, 539-541.
Klinger, E., Gregoire, K. C. & Barta, S. G. (1973). Physiological correlates of mental activity: Eye movements, alpha, and heart rate during imaging, suppression, concentration, search, and choice. Psychophysiology, 10, 471-477.
Kocel, K., Galin, D., Ornstein, R. & Merrin, E. L. (1972). Lateral eye movement and cognitive mode. Psychonomic Science, 27, 223-224.
Kojima, T., Shimazono, Y., Ichise, K., Atsumi, Y., Ando, H. & Ando, K. (1981). Eye movement as an indicator of brain function. Folia Psychiatrica et Neurologica Japonica, 35, 425-436.
Kosslyn, S. M., Alpert, N. M., Thompson, W. L., Maljkovic, V., Weise, S. B., Chabris, C. F., et al. (1993). Visual mental imagery activates topographically organized visual cortex: PET investigations. Journal of Cognitive Neuroscience, 5, 263-287.
Kosslyn, S. M., Ganis, G. & Thompson, W. L. (2001). Neural foundations of imagery. Nature Reviews Neuroscience, 2, 635-642.
Kowler, E. & Steinman, R. M. (1977). The role of small saccades in counting. Vision Research, 17, 141-146.
Krauss, R. M., Fussell, S. R. & Chen, Y. (1995). Coordination of perspective in dialogue: Intrapersonal and interpersonal processes. In I. Markova, C. Graumann & K. Foppa (Eds.), Mutualities in Dialogue (124-145). Cambridge, UK: Cambridge University Press.
LaBerge, S. (1990). Lucid dreaming: Psychophysiological studies of consciousness during REM sleep. In R. R. Bootzin, J. F. Kihlstrom & D. L. Schacter (Eds.), Sleep and Cognition (109-126). Washington, DC: American Psychological Association.
LaBerge, S., Nagel, L., Dement, W. & Zarcone, V. (1981). Lucid dreaming verified by volitional communication during REM sleep. Perceptual and Motor Skills, 52, 727-732.
Lacey, B. C. & Lacey, J. I. (1964). Cardiac deceleration and simple visual reaction in a fixed foreperiod experiment. Paper presented at the meeting of the Society for Psychophysiological Research, Washington, DC.
Lacroix, J. M. & Comper, P. (1979). Lateralization in the electrodermal system as a function of cognitive/hemispheric manipulation. Psychophysiology, 16, 116-130.
Laeng, B. & Teodorescu, D. S. (2002). Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cognitive Science: A Multidisciplinary Journal, 26, 207-231.



Lawrence, B. M., Myerson, J., Oonk, H. M. & Abrams, R. A. (2001). The effects of eye and limb movements on working memory. Memory, 9, 433-444.
Lion, K. S. & Brockhurst, R. J. (1951). Ocular movements under stress. American Medical Association Archives of Ophthalmology, 46, 315-318.
Lorens, S. A. & Darrow, C. W. (1962). Eye movements, EEG, GSR and EKG during mental multiplication. Electroencephalography and Clinical Neurophysiology, 14, 739-746.
MacDonald, B. H. & Hiscock, M. (1992). Direction of lateral eye movements as an index of cognitive mode and emotion: A reappraisal. Neuropsychologia, 30, 753-755.
Mahl, G. F. (1956). Disturbances and silences in the patient's speech in psychotherapy. Journal of Abnormal Psychology, 53, 1-15.
Marks, D. F. (1973). Visual imagery differences and eye movements in the recall of pictures. Perception and Psychophysics, 14, 407-412.
Marx, E., Stephan, T., Nolte, A., Deutschlander, A., Seelos, K. C., Dieterich, M. & Brandt, T. (2003). Eye closure in darkness animates sensory systems. NeuroImage, 19, 924-934.
May, J. G., Kennedy, R. S., Williams, M. C., Dunlap, W. P. & Brannan, J. R. (1990). Eye movement indices of mental workload. Acta Psychologica, 75, 75-89.
McCarley, J. S. & Vais, M. J. (2004). Conversation disrupts change detection in complex traffic scenes. Human Factors, 46, 424-436.
Meskin, B. B. & Singer, J. L. (1974). Daydreaming, reflective thought, and laterality of eye movements. Journal of Personality and Social Psychology, 30, 64-71.
Micic, D. (in preparation). Synchronization of non-visual gaze patterns and processes of long-term memory and working memory: How eye movements communicate endogenous cognitive activity.
Micic, D., Ehrlichman, H. & Chen, R. (submitted). Long-term memory search triggers saccadic eye movements: Why do we move our eyes while trying to remember?
Miyauchi, S., Misaki, M., Kan, S., Fukunaga, T. & Koike, T. (2008). Human brain activity time-locked to rapid eye movements during REM sleep. Experimental Brain Research, 192, 657-667. doi:10.1007/s00221-008-1579-2
Montagnini, A. & Chelazzi, L. (2005). The urgency to look: Prompt saccades to the benefit of perception. Vision Research, 45, 3391-3401.
Moody, T. D., Bookheimer, S. Y., Vanek, Z. & Knowlton, B. J. (2004). An implicit learning task activates medial temporal lobe in patients with Parkinson's disease. Behavioral Neuroscience, 118, 438-442.
Moore, C. S. (1903). Control of the memory image. Psychological Review Monograph, 4, 277-306.
Moscovitch, M. & Winocur, G. (2002). The frontal cortex and working with memory. In D. T. Stuss & R. T. Knight (Eds.), Principles of Frontal Lobe Function (188-209). New York: Oxford University Press.
Munoz, D. P. (2002). Commentary: Saccadic eye movements: Overview and neural circuitry. Progress in Brain Research, 140, 89-96.
Munoz, D. P. & Everling, S. (2004). Look away: The anti-saccade task and the voluntary control of eye movement. Nature Reviews Neuroscience, 5, 218-228.
Naghavi, H. R. & Nyberg, L. (2005). Common fronto-parietal activity in attention, memory, and consciousness: Shared demands on integration? Consciousness and Cognition, 14, 390-425.



Nunes, L. & Recarte, M. A. (2002). Cognitive demands of hands-free phone conversation while driving. Transportation Research, Part F: Traffic Psychology and Behavior, 5, 133-144.
Parker, A., Relph, S. & Dagnall, N. (2008). Effects of bilateral eye movements on the retrieval of item, associative, and contextual information. Neuropsychology, 22, 136-145.
Pascual-Leone, A., Nguyet, D., Cohen, L. G., Brasil-Neto, J. P., Cammarota, A. & Hallett, M. (1995). Modulation of muscle responses evoked by transcranial magnetic stimulation during the acquisition of new fine motor skills. Journal of Neurophysiology, 74, 1037-1045.
Pierrot-Deseilligny, C., Ploner, C. J., Muri, R. M., Gaymard, B. & Rivaud-Pechoux, S. (2002). Effects of cortical lesions on saccadic eye movements in humans. Annals of the New York Academy of Sciences, 956, 216-229.
Poldrack, R. A. & Packard, M. G. (2003). Competition among multiple memory systems: Converging evidence from animal and human brain studies. Neuropsychologia, 41, 245-251.
Posner, M. I. (1973). Coordination of internal codes. In W. G. Chase (Ed.), Visual information processing. New York: Academic Press.
Postle, B. R. (2005). Delay-period activity in prefrontal cortex: One function is sensory gating. Journal of Cognitive Neuroscience, 17, 1679-1690.
Postle, B. R. & Brush, L. N. (2004). The neural bases of the effects of item-nonspecific proactive interference in working memory. Cognitive, Affective, & Behavioral Neuroscience, 4, 379-392.
Postle, B. R., Brush, L. N. & Nick, A. M. (2004). Prefrontal cortex and the mediation of proactive interference in working memory. Cognitive, Affective, & Behavioral Neuroscience, 4, 600-608.
Postuma, R. & Dagher, A. (2006). Basal ganglia functional connectivity based on a meta-analysis of 126 positron emission tomography and functional magnetic resonance imaging studies. Cerebral Cortex, 16, 1508-1521.
Raine, A. (1991). Are lateral eye movements a valid index of functional hemispheric asymmetries? British Journal of Psychology, 82, 129-135.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372-422.
Recarte, M. A. & Nunes, L. M. (2000). Effects of verbal and spatial-imagery tasks on eye fixations while driving. Journal of Experimental Psychology: Applied, 6, 31-43.
Recarte, M. A. & Nunes, L. M. (2003). Mental workload while driving: Effects on visual search, discrimination and decision making. Journal of Experimental Psychology: Applied, 9, 119-137.
Richardson, D. C., Altmann, G. T. M., Spivey, M. J. & Hoover, M. A. (2009). Much ado about eye movements to nothing: A response to Ferreira et al.: Taking a new look at looking at nothing. Trends in Cognitive Sciences, 13, 235-236.
Roffwarg, H. P., Dement, W. C., Muzio, J. N. & Fisher, C. (1962). Dream imagery: Relationship to rapid eye movements of sleep. Archives of General Psychiatry, 7, 235-258.
Ross, J. & Ma-Wyatt, A. (2003). Saccades actively maintain perceptual continuity. Nature Neuroscience, 7, 65-69.
Rutter, D. R., Stephenson, G. M., Ayling, K. & White, P. A. (1978). The timing of looks in dyadic conversation. British Journal of Social and Clinical Psychology, 17, 17-21.



Sakai, K., Rowe, J. B. & Passingham, R. E. (2002). Active maintenance in prefrontal area 46 creates distractor-resistant memory. Nature Neuroscience, 5, 479-484.
Saring, W. & von Cramon, D. (1980). Is there an interaction between cognitive activity and lateral eye movements? Neuropsychologia, 18, 591-596.
Schacter, D. L. & Wagner, A. D. (1999). Medial temporal lobe activations in fMRI and PET studies of episodic encoding and retrieval. Hippocampus, 9, 7-24.
Schwartz, G. E., Davidson, R. J. & Maer, F. (1975). Right hemisphere lateralization for emotion in the human brain: Interactions with cognition. Science, 190, 286-288.
Singer, J. L. (1975). Navigating the stream of consciousness: Research in daydreaming and related inner experience. American Psychologist, 30, 727-737.
Singer, J. L. & Antrobus, J. S. (1965). Eye movements during fantasies. Archives of General Psychiatry, 12, 71-76.
Singer, J. L., Greenberg, S. & Antrobus, J. S. (1971). Looking with the mind's eye: Experimental studies of ocular motility during daydreaming and mental arithmetic. Transactions of the New York Academy of Sciences, 33, 694-709.
Sobotka, S. W., Zuo, W. & Ringo, J. L. (2002). Is the functional connectivity within temporal lobe influenced by saccadic eye movements? Journal of Neurophysiology, 88, 1675-1684.
Solms, M. (2000). Dreaming and REM are controlled by different brain mechanisms. Behavioral and Brain Sciences, 23, 843-850.
Sparks, D. L. (2002). The brainstem control of saccadic eye movements. Nature Reviews Neuroscience, 3, 952-964.
Spitz, R. A. (1946). The smiling response: A contribution to the ontogenesis of social relations. Genetic Psychology Monographs, 34, 57-125.
Suzuki, W. A. (1996). Neuroanatomy of the monkey entorhinal, perirhinal and parahippocampal cortices: Organization of cortical inputs and interconnections with amygdala and striatum. Seminars in the Neurosciences, 8, 3-12.
Takahashi, E., Ohki, K. & Kim, D.-S. (2007). Diffusion tensor studies dissociated two fronto-temporal pathways in the human memory system. NeuroImage, 34, 827-838.
Takeda, M. & Yoshimura, H. (1979). Lateral eye movement while eyes are closed. Perceptual and Motor Skills, 48, 1227-1231.
Totten, E. (1935). Eye movement during visual imagery. Comparative Psychology Monographs, 11(3).
Wallis, G. & Bulthoff, H. (2000). What's scene and not seen: Influence of movement and task upon what we see. Visual Cognition, 7, 175-190.
Weiner, S. L. & Ehrlichman, H. (1976). Ocular motility and cognitive process. Cognition, 4, 31-43.
Weiten, W. & Etaugh, C. (1974). Lateral eye movement as a function of cognitive mode, question sequence, and sex of subject. Perceptual and Motor Skills, 38, 439-444.
Weitzenhoffer, A. M. & Brockmeier, J. D. (1970). Attention and eye movements. The Journal of Nervous and Mental Disease, 151, 130-142.
Williams, E. (1978). Visual interaction and speech patterns: An extension of previous results. British Journal of Social and Clinical Psychology, 17, 101-102.
Yang, S. N. & McConkie, G. W. (2001). Eye movements during reading: A theory of saccadic initiation times. Vision Research, 41, 3567-3585.
Yarbus, A. L. (1967). Eye movements and vision. New York: Plenum Press.


Young, L. R. & Sheena, D. (1975). Methods and designs: Survey of eye movement recording methods. Behavior Research Methods and Instrumentation, 7, 397-429.
Zuber, B. L., Semmlow, J. L. & Stark, L. (1968). Frequency characteristics of the saccadic eye movement. Biophysical Journal, 8, 1288-1298.


In: Eye Movement: Theory, Interpretation, and Disorders ISBN: 978-1-61728-110-5 Editor: Dominic P. Anderson, pp. 53-66 © 2011 Nova Science Publishers, Inc.

Chapter 2

Eye Movements in Congenital Nystagmus and Oculomotor Systems Alterations

Pasquariello Giulio^a, Cesarelli Mario^a, Romano Maria^a, Bifulco Paolo^a, La Gatta Antonio^b and Fratini Antonio^a

^a Dept. of Biomedical, Electronic and Telecommunication Engineering, University "Federico II" of Naples, Napoli, Italy
^b Math4Tech Center, University of Ferrara, Ferrara, Italy


Abstract

Congenital Nystagmus (CN) is one of the diseases that can affect binocular vision, reducing the visual quality of a subject. It is an ocular-motor disorder characterised by involuntary, conjugated ocular oscillations and, although identified more than forty years ago, its pathogenesis is still under investigation. This kind of nystagmus is termed congenital (or infantile) since it can be present at birth or arise in the first months of life. The majority of patients affected by CN show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous nystagmus oscillations, which are mainly horizontal. However, the image of a target can still be stable during short periods in which eye velocity slows down while the target image is placed onto the fovea (called foveation intervals). CN etiology has been related to deficiencies in the saccadic, optokinetic, smooth pursuit, and fixation systems, as well as in the neural integrator for conjugate horizontal gaze. Although numerous studies have described CN pathophysiology and its relation to the visual system, CN etiology currently remains unclear. In recent years, a number of control-system models have been developed to reproduce CN; the results do not fully agree on the origin of these involuntary oscillations, but it seems that they are related to an error in 'calibration' of the eye movement system during fixation. This study aims to present some of the different models of the oculomotor system and to discuss their ability to describe CN features extracted from eye movement recordings. Use of these models can improve knowledge of CN pathogenesis and could thus support treatment planning or therapy monitoring.



Introduction

Congenital nystagmus (CN) is an ocular motor disorder that develops at birth or in the first months of life and becomes a lifelong condition. This specific kind of nystagmus consists of involuntary, conjugated, horizontal (rarely vertical or rotatory) rhythmic movements of the eyes [1]. CN oscillations can persist even when the eyes are closed, although they tend to diminish in the absence of visual tasks. Nystagmus can be idiopathic or associated with central nervous system alterations and/or ocular system affections such as achromatopsia, aniridia and congenital cataract. Both nystagmus and the associated ocular alterations can be genetically transmitted; CN is also present in most cases of albinism [2]. The pathogenesis of congenital nystagmus is still unknown; proposed defects involve the saccadic, optokinetic, smooth pursuit, and fixation systems, as well as the neural integrator for conjugate horizontal gaze. The relationship between eye movement and vision is indeed quite complex, even in healthy subjects: the eyes move continuously to bring the image of the desired target onto a small area of the retina called the "fovea centralis". Two general classes of eye movements are employed in executing this task: conjugate eye movements and vergence eye movements. The first class can be additionally divided into saccadic and smooth pursuit movements:

Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.



- the function of saccadic eye movements is to direct the gaze, as quickly as possible, onto an object in the visual field, so that it may be analysed in detail;
- the function of the smooth pursuit system is to maintain a moving object in central view, as when tracking a car along its course.

Vergence eye movements, instead, have the function of compensating for visual-depth changes of a given target and are realised by counter-rotating the eyes. These combined actions are necessary since the resolution of detail (visus, or visual acuity) decreases sharply as the target image on the retina moves away from the fovea; moreover, visual acuity is also degraded if the image slips over the fovea at velocities greater than a few degrees per second. Optimal visual performance is therefore attainable only when the target image is held steady on this region; this process is referred to as foveation. In subjects affected by CN, the continuous oscillations reduce the total amount of time in which the eyes are stable on the desired target. In clinical practice, the "foveation time" is often measured by recording eye movements and then using specific software to compute the time intervals (foveation periods) in which eye position and velocity remain under given thresholds [3,4]. A comparison between the foveation time of a subject affected by CN and that of a healthy subject can be used as a rough index of the patient's visual quality; visual acuity was found to depend mainly on the duration of the foveation periods [5-7], but the repeatability of eye position from cycle to cycle and the retinal image velocities also contribute to its estimation [1,8]. In general, CN patients present a considerable decrease in visual acuity (image fixation on the retina is hindered by the continuous nystagmus oscillations), and severe postural alterations, such as an Anomalous Head Position (AHP), may be adopted by patients to obtain better fixation of the target image on the retina. Indeed, so-called 'null zones' are often present in CN, i.e. particular gaze angles in which a
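The threshold-based computation of foveation periods described above can be sketched as follows. The position and velocity thresholds, as well as the synthetic waveform, are illustrative assumptions, not the values used by the cited software.

```python
import numpy as np

def foveation_time(position, fs, pos_thresh=0.5, vel_thresh=4.0):
    """Total foveation time (s): samples in which eye position (deg, relative
    to the target) and eye velocity (deg/s) both stay below the given
    thresholds. Threshold values here are illustrative only."""
    velocity = np.gradient(position) * fs   # deg/s by finite differences
    mask = (np.abs(position) < pos_thresh) & (np.abs(velocity) < vel_thresh)
    return mask.sum() / fs

# Synthetic 2 Hz oscillation whose slow extreme rests on the target (0 deg)
fs = 500.0
t = np.arange(0, 2, 1 / fs)
position = 1.0 - np.cos(2 * np.pi * 2 * t)   # 0..2 deg, slowest near 0 deg
total = foveation_time(position, fs)         # only the near-target, slow samples count
```

Only the fraction of each cycle in which the eye is simultaneously near the target and moving slowly contributes to `total`, which is the quantity compared between CN and healthy recordings.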


Eye Movements in Congenital Nystagmus and Oculomotor Systems Alterations


smaller nystagmus amplitude, a longer foveation time and a smaller variability in eye position can be recorded; these zones correspond to the positions of maximum visual acuity. CN has been extensively studied phenomenologically, and has attracted systems modellers for 30 years or more. This has resulted in detailed study of the oculomotor system by physiologists in cooperation with engineers and mathematicians, in order to develop accurate models of this system both in healthy subjects and in patients affected by CN. Moreover, several hypotheses have been made to identify which component is responsible for the continuous involuntary oscillation shown by CN subjects. To test the validity of a given model, researchers usually predict what its behaviour will be in various experimental situations, or try to verify its ability to accurately describe the modelled phenomenon. In CN examination, this corresponds to the model's capacity to accurately describe the features shown by actual eye movement recordings over a large number of subjects.


Eye Movements in Congenital Nystagmus

In clinical practice, eye movement recordings and the estimation of concise parameters, such as amplitude, frequency, direction, foveation periods, etc., are widely employed and are considered a valid support for accurate diagnosis, patient follow-up and therapy evaluation. Current therapies for CN, still debated, aim to increase the patient's visual acuity by means of correction of refraction defects, drug administration and ocular muscle surgery [9], and their outcome is often measured using parameters derived from the analysis of eye movement recordings. Four main surgical strategies have been advocated in the management of congenital nystagmus: Kestenbaum surgery for compensatory head posture with a null zone (i.e. translating the null zone to the straight-ahead position); artificial divergence surgery; maximum recession of the horizontal rectus muscles; and rectus muscle anterior tenotomy [9,10]. According to the literature, idiopathic nystagmus can be classified into different categories depending on the characteristics of the oscillations [1,3]. Typically, in CN eye movement recordings it is possible to identify, for each nystagmus cycle, a slow phase, which takes the target image away from the fovea, and a fast (or slow) return phase. According to the nystagmus waveform characterisation by Dell'Osso [6], if the return phase is slow, the nystagmus cycle is pendular or pseudo-pendular; if the return phase is fast, the waveform is defined as jerk (unidirectional or bidirectional) or pseudo-cycloid. The major types of congenital nystagmus waveforms are shown in figure 1. In general, the CN waveform has an increasing-velocity exponential slow phase (figure 1a); however, the second most common waveform according to Abadi [1] is the pseudo-cycloid, shown in figure 1c.
In this curve, it is still possible to identify a fast and a slow phase and the direction of beating, but the fast phase is generally hypometric and the subsequent slow phase resembles a cycloid. In addition, a slow eye movement oscillation can sometimes be found superimposed on the nystagmus; our research group called this occurrence Base Line Oscillation (BLO) [11,12].
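As a toy illustration of the jerk waveform just described, an increasing-velocity exponential slow phase followed by an idealised (instantaneous) corrective fast phase can be generated as below; the amplitude, period and time constant are hypothetical values, not measurements.

```python
import numpy as np

def jerk_waveform(n_cycles=5, period=0.35, amplitude=3.0, tau=0.12, fs=500.0):
    """Idealised jerk CN: each cycle is an accelerating exponential slow
    phase drifting off target, reset by an instantaneous fast phase.
    All parameter values are illustrative."""
    n = round(period * fs)
    t = np.arange(n) / fs
    # Increasing-velocity exponential, scaled to span `amplitude` per cycle
    slow = amplitude * (np.exp(t / tau) - 1.0) / (np.exp(period / tau) - 1.0)
    return np.tile(slow, n_cycles)

eye = jerk_waveform()   # sawtooth-like trace: slow drift, abrupt return
```

Within each cycle the eye velocity (the first difference of the trace) itself grows over time, which is the defining feature of the most common CN slow phase.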


Figure 1. Representation of major types of congenital nystagmus waveforms: (a) jerk right with increasing-velocity slow phases; (b) pendular nystagmus; (c) pseudo-cycloid. Continuous periods of time are depicted in each tracing. Rightward eye movements are up (R), and leftward eye movements are down (L).

Figure 2. A schematic depiction of a jerk nystagmus waveform (bold line) with the fast phase pointing to the right; various nystagmus features are depicted, such as the fast and slow phase components and the nystagmus period and amplitude; the grey box on each cycle represents the foveation window. The baseline oscillation is shown as a dashed line, and its amplitude is also indicated.

A sample illustration of the most common waveform is presented in figure 2. The figure shows in detail a unidirectional jerk nystagmus waveform (beating to the right), along with several CN parameters, such as the slow and fast phases, nystagmus amplitude and period, foveation period duration (foveation windows), the standard deviation of eye position from cycle to cycle (SDp) and the baseline oscillation (BLO). As stated in the introduction, the visual acuity of patients affected by CN depends mainly on the duration of the foveation periods, but the repeatability of eye position from cycle to cycle and fairly low retinal image velocities also contribute to increasing visual acuity.


Many authors have analysed the role of the standard deviation of eye position (SDp) during foveations with respect to visual acuity [3,4,11], and our research group, encouraged also by a remarkable increase in some CN patients' visual acuity obtained with botulinum toxin treatment, tried to characterise such foveation variability in detail. A slow sinusoidal-like oscillation of the baseline (baseline oscillation, or BLO) was found superimposed on the nystagmus waveforms [12,13], and its relation to the SDp was estimated [14].

Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.

Figure 3. An example of acquired signals (panels a and b) showing the presence of the slow eye movement superimposed on the nystagmus oscillations.



In order to estimate the slow sinusoidal oscillations, a common least mean squares (LMS) fitting technique can be used. For each signal block, the highest peak of the power spectrum of the eye movement signal in the range 0.1–1.5 Hz was considered an estimator of the BLO frequency. The high-frequency limit results from the lowest frequency commonly associated with nystagmus (according to Bedell and Loshin [15], and Abadi and Dickinson [5]), while the low-frequency limit depends on the signal length corresponding to each gaze position (in our tests approximately 10 s). In a case study by Pasquariello et al. [14], carried out on 96 recordings, almost 70% of the recordings had a BLO amplitude greater than 1° (approximately the angular size of the fovea); in the remaining 30%, the amplitude of the BLO was smaller and did not significantly affect visual acuity. In that study a high correlation coefficient (R² = 0.78) was also found in the linear regression analysis of BLO and nystagmus amplitude, suggesting a strong interdependence between the two. The regression line slope was about 0.5, implying that the BLO amplitude is on average one half of the corresponding nystagmus amplitude. Specifically, since the BLO amplitude was found to be directly related to nystagmus amplitude, its presence is particularly evident in signal tracts away from the null zone (i.e. outside the position in which nystagmus amplitude is smallest). However, the presence of this superimposed oscillation substantially reduces the visus, causing an increase in the standard deviation of eye position (SDp) during foveation, which in turn may hamper visual acuity; hence it is worth analysing whether any of the currently available CN models is able to describe such an oscillation.
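The spectral-peak step of this procedure can be sketched as follows; a plain periodogram stands in for the LMS fit, and the signal and sampling rate are hypothetical, while the 0.1–1.5 Hz band follows the text.

```python
import numpy as np

def estimate_blo_frequency(eye, fs, f_lo=0.1, f_hi=1.5):
    """Return the frequency (Hz) of the highest power-spectrum peak in the
    f_lo..f_hi band, used here as the BLO frequency estimator."""
    spectrum = np.abs(np.fft.rfft(eye - eye.mean())) ** 2
    freqs = np.fft.rfftfreq(len(eye), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic 10 s block (one gaze position, as in the text):
# a 0.4 Hz baseline drift under a 3.5 Hz nystagmus-like component
fs = 250.0
t = np.arange(0, 10, 1 / fs)
eye = 1.2 * np.sin(2 * np.pi * 0.4 * t) + 2.0 * np.sin(2 * np.pi * 3.5 * t)
blo_f = estimate_blo_frequency(eye, fs)   # close to 0.4 Hz
```

The 10 s block length gives a 0.1 Hz frequency resolution, which is exactly the lower band limit mentioned in the text; shorter blocks would not resolve the slowest admissible BLO.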

Figure 4. The relationship between Baseline Oscillation and Nystagmus amplitude.


Oculomotor System Models

This section presents a brief review of the history of oculomotor system models. The oculomotor subsystems and their connections are described primarily in terms of the physiological structures involved in eye movements (cerebellum; lateral geniculate body; semicircular canals; vestibular nuclei; optic nerves; oculomotor, trochlear, and abducens nuclei and nerves; and so on), or using the concepts of control systems theory. In the second case, oculomotor control is usually described by its four major subsystems: the saccadic, smooth pursuit, vergence, and vestibular systems; however, even in this perspective, some physiological components (such as the mechanics of the eyeball, the orbit and the extraorbital muscles, the time constants that characterise each subsystem, and so on) have to be considered in order to build models able to accurately simulate the behaviour of the specific subsystem. One of the first authors to describe eye movements as the output of a biological control system, using the terminology of control systems engineering, was Robinson [16,17]. In his works, the author described the input-output relationships of the oculomotor system, extracting some parameters (e.g. the latency shown by the saccadic and smooth pursuit subsystems), and defined the transfer function of the muscle plant. Starting from his modelling of the globe and extraocular muscles, and analysing the neural pulse used to control them, he found a large difference between the control signals (i.e. the efferent nervous activity in the nerve branches to the extraocular muscles) used to produce saccadic and smooth pursuit movements, respectively [16].
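Robinson's distinction between burst-like and tonic innervation is often illustrated with a pulse-step command driving a first-order plant approximation. The sketch below uses hypothetical values (plant time constant, pulse height and duration) purely for illustration, not the parameters identified in the cited works.

```python
import numpy as np

def plant_response(command, fs, tau=0.2):
    """First-order plant approximation tau * E' + E = command, integrated
    by forward Euler; the 200 ms time constant is an assumed value."""
    dt, e = 1.0 / fs, 0.0
    eye = np.empty_like(command)
    for i, u in enumerate(command):
        e += dt * (u - e) / tau
        eye[i] = e
    return eye

fs = 1000.0
n = int(0.5 * fs)
step = np.full(n, 10.0)                 # tonic (step) level alone: slow drift
pulse_step = step.copy()
pulse_step[:40] = 60.0                  # 40 ms burst (pulse) rushes the eye
fast = plant_response(pulse_step, fs)   # near 10 deg within ~50 ms
slow = plant_response(step, fs)         # still far from 10 deg at 50 ms
```

The comparison shows why a step of innervation alone cannot produce a saccade-like movement: the sluggish plant needs the transient burst to reach the new position quickly, after which the tonic level holds it there.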
Moreover, normal fixation eye movements can be described as driven by a combination of a burst of neural firing, which drives the eye muscle plant during the fast phase of eye movements (saccades), and a tonic level of firing, which holds the eye in place during fixation [17]. Further studies, mainly focused on the dynamics of eye movements, showed that saccades are highly accurate in normal subjects, suggesting that the burst neuron signal is under feedback control, as stated by Jurgens et al. [18]. Two alternative feedback models have been proposed to explain the behaviour of fast movements: one based on the estimated eye position, by Zee et al. [19], and one on the estimated eye displacement, by Van Gisbergen et al. [20]. However, the impact of these studies on the modelling of the oculomotor system in Congenital Nystagmus was still limited. In particular, a major problem was the lack of an inclusive explanation for the wide variety of waveforms shown by patients affected by CN (ranging from jerk to pendular types) [6]. In the subsequent years, different groups focused their efforts on the analysis of each of the ocular motor subsystems, both in healthy subjects and in patients affected by CN, including the optokinetic subsystem [21,22], the saccadic subsystem [23,24], and the smooth pursuit (SP) and vestibular subsystems [21,25,26]. Each of these subsystems was suggested as the origin of CN, or at least as being severely deficient in CN subjects, but no conclusive evidence settled the matter. A new development occurred with the work of Optican and Zee [27], who proposed a mechanism to explain the generation of all the CN waveforms. The basis of that model was a gaze-holding network, or neural integrator, with both position and velocity feedback loops. The signals carried in these loops could arise from


either afference or efference signals. In normal subjects, the position feedback would be positive and the velocity feedback would be negative; both would help to increase the time constant of an imperfect neural integrator in the brainstem. They proposed that in patients with CN the sign of the velocity pathway is reversed, making the neural integrator unstable. This instability could manifest as many different CN waveforms, depending on the direction and velocity of post-saccadic ocular drift and the actions of nonlinearities within the position and velocity feedback loops. This model clearly explained the origin of the CN oscillations and was also able to account for the prevalence of exponentially increasing slow phases among the different CN waveforms. Since, at the time, a physiological correlate for the neural integrator had not yet been located, most attention turned back to the analysis of the saccadic system, given the central role played by the fast phases of the CN waveforms in bringing the target image back onto the fovea. A novel interpretation of saccadic system abnormalities and their relationship with congenital nystagmus was presented by Broomhead et al. [28]. The authors pointed out that no neurophysiological evidence had been found to support the existence of a neural correlate of the estimated eye position. On the other hand, there was strong evidence, from neurophysiological studies of the superior colliculus [29], for a signal related to changes in target position [30-32]. Hence, the displacement feedback model is to be preferred over the position feedback model. Within this framework, they examined the role of the burst neurons in the generation of saccades and suggested the presence of separate populations of neurons responsible for leftward and rightward saccades. The left and right populations of burst neurons are assumed to be mutually inhibitory, to ensure that co-contraction does not occur.
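The destabilising effect of the sign reversal proposed by Optican and Zee can be caricatured with a toy second-order system; this is only a sketch of the sign argument, not the published model, and all gains are hypothetical. Here a positive kv plays the role of the normal (negative) velocity feedback and damps the system, while flipping its sign yields a growing oscillation.

```python
import numpy as np

def gaze_holding(kv, k=40.0, c=1.0, x0=0.5, fs=1000.0, dur=2.0):
    """Toy gaze-holding dynamics x'' + (c + kv) x' + k x = 0 (forward Euler).
    kv > 0 mimics normal, damping velocity feedback; kv < 0 the reversal."""
    dt, x, v = 1.0 / fs, x0, 0.0
    out = np.empty(int(dur * fs))
    for i in range(out.size):
        out[i] = x
        x, v = x + dt * v, v + dt * (-(c + kv) * v - k * x)
    return out

normal = gaze_holding(kv=+2.0)       # decays back to the fixation position
reversed_fb = gaze_holding(kv=-2.0)  # oscillation grows, nystagmus-like
```

A single sign change thus turns a stable gaze-holding behaviour into a self-sustained oscillation, without any structural change to the system, which is the essence of the Optican–Zee argument.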
However, further studies on the anatomy of the network that generates saccades found that its essential components are burst neurons and omnipause neurons [33,34]. Burst neurons are subdivided into short-lead and long-lead burst neurons. Short-lead burst neurons are silent during fixation and during eye movements other than saccades; just prior to and during a saccade, they fire rapidly in a burst of activity. Long-lead burst neurons are also silent during steady fixation, but their firing rates increase well before the start of the saccade. Omnipause cells fire continuously except just before and during saccades, when they cease firing. Hence, a physiologically plausible version of the model has to include the role of the pause cells in the inhibition of burst cell firing at the end of a saccade [35]. The model developed by Laptev et al. demonstrates that, with the incorporation of the pause cells, the burst cells develop 'on' and 'off' responses more similar to those found experimentally by Van Gisbergen et al. [20]. Moreover, in physiological terms, this implies that the saccadic oculomotor system does not present an inherent instability, as suggested by Broomhead et al. [28], but is instead inherently stable. However, the bifurcation analysis of the model reveals that there is still a range of underlying instabilities in the model, which can lead to pathological eye movements, such as CN oscillations, without any structural damage [35].

Congenital Nystagmus and Oscillopsia

Another feature widely examined in the development of oculomotor system models applied to CN is the presence of oscillopsia. Since the eyes of CN subjects show continuous involuntary oscillations, do objects appear to oscillate to them?


The fairly good visual function present in many CN subjects is consistent with the fact that oscillopsia only rarely accompanies CN, and that in the minority of those who have experienced it, oscillopsia occurred only under specific circumstances [1,5,36,37]. This has critical implications for modelling because, for some authors (e.g. Jacobs and Dell'Osso [38]), it constrains the origin of the oscillations to lie within the efference copy loop, where they are properly accounted for when calculating the reconstructed target velocity [39]. The brain cannot correctly interpret retinal image motion without knowledge of the expected eye motion (derived from the efference copy of motor commands). On these assumptions, Jacobs and Dell'Osso [38] built a model of the oculomotor system (OMS) which consists of three of the major ocular motor subsystems (smooth pursuit, fixation, and saccadic), the common neural integrator, the ocular motor neurons and the plant, plus their complex interconnections. It makes use of efference copy to recreate target position and velocity, retinal error position and velocity, and eye position and velocity, in order to drive the subsystems and make logical decisions regulating how each function reacts to changes. The model is based on the observation that CN oscillations show themselves during fixation attempts or pursuit, regardless of whether the target is real or imaginary, and can therefore exist in total darkness, in the absence of any physical target [40]. CN can be exacerbated by stress, anxiety, and other psychological inputs; conversely, it may damp (or even disappear) when the subject is inattentive. These facts suggested that a variable gain modulates CN. In this model, the authors propose that, for the pendular waveforms of CN, this gain resides in an internal portion of the smooth pursuit subsystem. Thus, the underlying cause of CN could be an inborn, or developmental, failure of this portion of the smooth pursuit system to calibrate its internal gain.
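The efference-copy argument can be made concrete with a toy computation (all signals hypothetical): if the oscillation arises inside the loop, the copied eye-velocity signal contains it, and adding it back to the retinal slip recovers the true target velocity, so no oscillopsia is predicted.

```python
import numpy as np

fs = 250.0
t = np.arange(0, 4, 1 / fs)
target_vel = np.where(t < 2, 0.0, 5.0)             # target starts moving at t = 2 s
cn_oscillation = 30.0 * np.sin(2 * np.pi * 3 * t)  # involuntary eye velocity (deg/s)
eye_vel = cn_oscillation                           # fixation attempt carries the CN
retinal_slip = target_vel - eye_vel                # image velocity on the retina
efference_copy = eye_vel                           # motor command copy, oscillation included
reconstructed = retinal_slip + efference_copy      # oscillation cancels out
```

Were the oscillation instead injected downstream of the efference copy (i.e. not present in `efference_copy`), the cancellation would fail and the reconstructed target velocity would oscillate, predicting oscillopsia.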
The ocular motor simulations were performed in the MATLAB Simulink environment, and the analysis was performed in MATLAB using specific software developed at the Daroff-Dell'Osso Ocular Motility Laboratory (OMLAB). In 2008, Wang modified the behavioural OMS model on the basis of recent anatomical findings by Ugolini et al. [41], adding "fast" and "slow" motor neuron pathways and a more physiological plant, and used it to simulate and make predictions on the effect of a new surgical procedure (tenotomy) [42]. This model was used successfully to characterise some dynamic properties of infantile nystagmus syndrome that affect visual function, such as the saccadic latency, the time to target acquisition after a target jump and the normalised stimulus time within the cycle [42]. Finally, Bedell et al. analysed the amount of motion smear perceived by patients affected by CN, in order to better characterise the clear and stable visual world experienced by them [43]. The authors proposed that the reduction of perceived motion smear, already documented during CN, could result from an influence of extra-retinal eye-movement signals on the temporal response speed of the visual system [44], which they found to be increased in patients affected by CN. A later work by the same authors studied the asymmetry of motion smear perception in CN, verifying behaviour comparable with that shown by healthy subjects [45]. The duration of perceived motion smear for stimuli that moved in the same direction as the CN slow phase was approximately equal to the duration of the moving stimulus. This outcome agrees qualitatively with the results for normal observers, who report no reduction of perceived motion smear, compared to fixation, when a stimulus moves in the same direction as a smooth pursuit eye movement.


Complementary Approaches

The models of the oculomotor system have usually been described in terms of block diagrams. Apart from the advantage of simplicity of construction, given the ready availability of powerful simulation packages, such models deliver easy-to-understand explanations of some aspects of saccadic disorders in terms of structural damage to the system. However, as stated by Laptev et al. [35], such models include so many offset and slope parameters that it is not feasible to carry out an exhaustive enumeration of all their possible behaviours. This makes it difficult to assess how much the behaviour of a given model depends on the choice of a specific set of parameters. Starting from similar considerations, in 2005 Akman et al. suggested a complementary approach: assuming that CN can be described as an anomaly of the saccadic system, they applied the theory of nonlinear dynamical systems to the analysis of that oculomotor subsystem [46]. On the basis of a previous model developed by Broomhead et al. [28] and of a work by Clement et al. [47], they carried out a bifurcation analysis of the model, which highlighted that CN can be modelled as a pathological "braking" saccadic signal, i.e. as the influence of the "off" response of the burst neurons on the overall response of the eyes. The most likely expressions of this behaviour are jerk or pendular waveforms. In addition, the transition from one waveform to another, often experienced by CN subjects, can be explained by a "gluing bifurcation" in the given model [46]. Moreover, in 2006, the authors enriched their study with the results obtained from the application of time series analysis to eye movement recordings (already performed by Shelhamer [48], Abadi et al. [49], and Clement et al. [47,50]).
By proposing a generalised model of the unforced oculomotor system and relating it to recorded jerk nystagmus time series using nonlinear dynamics techniques, the authors found further support for the hypothesis that the initial loss of stability in jerk CN is unlikely to originate in the neural integrator. The loss of stability appears instead to be induced by a bifurcation in one of the five oculomotor subsystems, referred to collectively, in the generalised oculomotor system model considered in their works, as the oculomotor command system [51]. A novel approach was proposed by Harris and Berry during the same years. In 2005, they presented a theoretical study showing that the eye oscillations in CN could develop as an adaptive response to maximise visual contrast despite poor foveal function in the infant visuomotor system, at a time of peak neural plasticity. They argued that in a visual system with abnormally poor high-spatial-frequency sensitivity, image contrast is maintained not only by keeping the image on the fovea (or its remnant) but also by some degree of image motion. Using the calculus of variations, they showed that the optimal trade-off between these conflicting goals is to generate oscillatory eye movements with increasing-velocity waveforms, as seen in real CN eye movement recordings [52]. A later study by the same authors extended this work, suggesting a developmental model of congenital nystagmus. The authors proposed that jerk nystagmus is an optimal eye-movement strategy, supporting this hypothesis with the (arguable) statement that it is the strategy most employed by adults; by contrast, pendular CN is a non-optimal strategy employed by children, due to their immature saccadic systems [53]. Finally, Barreiro et al. [54] described a bilateral model of the oculomotor pre-motor network, conforming with neuroanatomical constraints, able to reproduce CN


oscillations. In their study, the authors provided a dynamical systems description of the connection network between the brainstem and the cerebellum, building a model which preserves bilateral symmetry but is not symmetric in its network implementation. Physiologically, this corresponds to the observation that brainstem neurons project to cerebellar Purkinje cells on both sides, whereas Purkinje cells project back to brainstem neurons on the same side only. The model seems to produce the full range of waveform types observed in CN with only minor changes to its connection strengths.


Conclusion

The origin of the continuous involuntary oscillations shown by CN subjects is not yet clear; many models have been developed during the past thirty years, ascribing those defects to one (or more) of the oculomotor subsystems. All of them rely on both a physiological and a control-systems basis, and are able to explain, in more or less detail, many aspects of the CN waveforms observed in clinical practice. However, even though a certain amount of SDp is well accounted for by many researchers, the presence of a slow rhythmical oscillation and the origin of this additional instability in the eye movements have not yet been explained. Akman et al. [46], using dynamical systems analysis to quantify the dynamics of the nystagmus in the region of foveation, found that the state-space fixed point, or steady state, is not unique. Physiologically, this means that the control system does not appear to maintain a unique gaze position at the end of each fast phase. Similarly, the theory of the changes in oculomotor neuron firing rate developed by Barreiro et al. [54] seems able to explain such slow variability. The work of these authors seems the most promising in describing the presence of a slow periodic component in CN eye movement recordings, documented in our previous study, even though they have not discussed this aspect in detail. By continuing to investigate the complicated relationships between each subsystem of the oculomotor system and the actual behaviour of the eye movements, the pathophysiology of congenital nystagmus should become clearer. Only with this increased knowledge will new medical and surgical treatments become available to these patients.

References [1] [2] [3] [4]

Abadi, RV; Bjerre, A. Motor and sensory characteristics of infantile nystagmus. Br J Ophthalmol, 2002, 86, 1152-60. Oetting, WS; Summers, CG; King, RA. Albinism and the associated ocular defects. Metab Pediatr Syst Ophthalmol, 1994, 17(1-4), 5-9. Dell'Osso, LF; Van Der Steen, J; Steinman, RM; Collewijn, H. Foveation dynamics in Congenital Nystagmus. I: Fixation, Doc Ophthalmol, 1992, 79, 1-23. Cesarelli, M; Bifulco, P; Loffredo, L; Bracale, M. Relationship between visual acuity and eye position variability during foveation in congenital nystagmus, Doc Ophthalmol, 2000, 101, 59-72.

Eye Movement: Theory, Interpretation, and Disorders : Theory, Interpretation, and Disorders, Nova Science Publishers, Incorporated, 2010.

64 [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15]

Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.

[16] [17] [18] [19] [20] [21] [22] [23] [24]

Pasquariello Giulio, Cesarelli Mario, Romano Maria et al. Abadi, RV; Dickinson, CM. Waveform characteristics in Congenital Nystagmus, Doc Ophthalmol, 1986, 64, 153-67. Dell'Osso, LF; Darof, RB. Congenital Nystagmus waveform and foveation strategy, Doc Ophthalmol, 1975, 39, 155-82. Dickinson, CM; Abadi, RV. The influence of the nystagmoid oscillation on contrast sensitivity in normal observers, Vision Res., 1985, 25, 1089-96. Kommerell, G. Congenital nystagmus: control of slow tracking movements by target offset from the fovea, Graefes Arch Clin Exp Ophthalmol, 1986, 224(3), 295-8. Lee, J. Surgical Management of Nystagmus. Journal Royal Soc Medicine, 2002, 95, 238-41. Rucker, JC. Current treatment of nystagmus. Curr Treat Option N, 2005, 7(1), 69-77. Bifulco, P; Cesarelli, M; Loffredo, L; Sansone, M; Bracale, M. Eye movement baseline oscillation and variability of eye position during foveation in Congenital Nystagmus, Doc Ophthalmol, 2003, 107, 131-136. Cesarelli, M; Bifulco, P; Loffredo, L; Magli, A; Sansone, M; Bracale, M. Eye movement baseline oscillation in Congenital Nystagmus. Proceedings of the WC2003, 2003. Pasquariello, G; Bifulco, P; Cesarelli, M; Romano, M; Fratini, A. Analysis of foveation sequences in Congenital Nystagmus, IFMBE Proceedings, 2008, 20(4), 303-6. Pasquariello, G; Cesarelli, M; Bifulco, P; Fratini, A; La Gatta, A; Romano, M. Characterisation of baseline oscillation in congenital nystagmus eye movement recordings. Biomed Signal Proces, 2009, 4, 102-7. Bedell, HE; Loshin, DS. Interrelations between measures of visual acuity and parameters of eye movement in Congenital Nystagmus. Invest Ophthalmol Vis Sci., 1991, 32, 416-21. Robinson, DA. The Oculomotor Control System: A Review. Proceed of IEEE, 1968, 56(6), 1032-49. Robinson, DA. The mechanics of human saccadic eye movements. J Physiol, 1964, 174, 245-64. Juergens, R; Becker, W; Kornhuber, HH. 
Natural and drug-induced variation of velocity and duration of human saccadic eye movements: evidence for a control of the neural pulse generator by local feedback. Biol Cybern, 1981, 39, 87-96. Zee, DS; Optican, LM; Cook, JD; Robinson, DA; Engel, WK. Slow saccades in spinocerebellar degeneration. Arch Neurol, 1976, 33, 343-51. Van Gisbergen, JAM; Robinson, DA, Gielen, S. A quantitative analysis of generation of saccadic eye movements by burst neurons. J Neurophysiol, 1981, 45, 417-42. Kommerell, G; Mehdorn, E. Is an optokinetic defect the cause of congenital and latent nystagmus? In G; Lennerstrand, DS; Zee, EL. Keller, (Eds.). Functional basis of ocular motility disorders. Oxford: Pergamon Press, 1982, 159-67. Yee, RD; Baloh, RW; Honrubia, V. Study of congenital nystagmus: Optokinetic nystagmus. Brit J Ophthalmol, 1980, 64(12), 926-32. Abadi, RV; Worfolk, R. Retinal slip velocities in congenital nystagmus. Vision Res., 1989, 29(2), 195-205. Dell'Osso, LF. Saccadic pathology and plasticity. In Traccis, S; Zambarbieri, DI. Movimenti Saccadici. Bologna: Pátron Editor; 1992, 105-123.

Eye Movement: Theory, Interpretation, and Disorders : Theory, Interpretation, and Disorders, Nova Science Publishers, Incorporated, 2010.

Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.






In: Eye Movement: Theory, Interpretation, and Disorders ISBN: 978-1-61728-110-5 Editor: Dominic P. Anderson, pp. 67-80 © 2011 Nova Science Publishers, Inc.

Chapter 3

Fixational Eye Movements and Ocular Aberrometry

Justo Arines Piferrer*

Departamento de Física Aplicada, Universidade de Santiago de Compostela, Escola Universitaria de Óptica e Optometría (Campus Vida), Santiago de Compostela, Spain

Abstract


Eye movements are a key factor in human vision. During steady state fixation, where visual attention is voluntarily centred on a fixation stimulus, the eye shows involuntary small movements that exhibit an erratic trajectory. Different studies suggest that these fixational eye movements are crucial for visual perception and the maintenance of visual attention. However, they play a far less constructive role in ocular aberrometry when present during the measurement of ocular aberrations. The objective instruments used to determine the optical quality of the eye include a fixation stimulus to keep, with the subject's collaboration, the eye pupil centred on the instrument axis. Although useful for obtaining a coarse centring, the fixational movements induce an erratic effective lateral displacement of the eye pupil during the ocular inspection, preventing the fine alignment required for a correct estimation of the eye's optical aberrations. We will show in this chapter how these involuntary fixational movements influence the estimated ocular aberrations, inducing an erroneous estimation of the statistical properties of the individual ocular refractive errors and, therefore, limiting their correction. The chapter is organized as follows. In section 1 we describe the fixational eye movements, their mathematical models, and the factors that affect fixation. At the end of that section we include a brief description of the main systems used nowadays for measuring eye movements. In section 2 we introduce the concept of ocular aberrometry, starting with its description and continuing with the mathematical representation of ocular aberrations and the description of Hartmann-Shack wavefront sensors.
Section 3 will be devoted to analyzing the influence of the fixational eye movements on the estimation of ocular aberrations and the correction of refractive errors via refractive surgery or customized contact lenses. This analysis will be based on the examination of mathematical models and numerical simulations. In section 4 we will discuss

* E-mail address: [email protected]. Phone: 981564488-13509. Fax: +34-981590485. (Corresponding author)



the future prospects concerning the treatment of fixational movements in the framework of ocular aberrometry. Finally, in section 5, we present the conclusions.


Introduction

Eye movements are a key factor in human vision. During steady state fixation, where visual attention is voluntarily centered on a fixational stimulus, the eye shows involuntary small movements that exhibit an erratic trajectory (Barlow, 1952) (Martinez-Conde, Macknik, & Hubel, 2004). Different studies suggest that these fixational eye movements are crucial for visual perception and the maintenance of visual attention (Martinez-Conde, Macknik, & Hubel, 2004). However, the complete understanding of their influence on the visual process, their origin, and their neurophysiology is still under study. The relevance of these works is not limited to the fields of neuroscience or psychology; it also spreads over cybernetics (development of artificial eyes), image processing (data compression), signposting, and optometry, among others. In this chapter we will focus on a particular subject of optometry, ocular aberrometry.

Most readers will at some point have undergone an evaluation of their visual skills. In general this evaluation is limited to the control of visual acuity, ocular movements, and analysis of the field of view. In particular, the analysis of visual acuity is reduced in most cases to answering the question: could you read the line below? However, sometimes, and most frequently in the last decade, examiners also use in their practice new objective instruments that help them with that part of the visual exam. Aberrometers for measuring the ocular aberrations, topographers for corneal topography, biomicroscopes for measuring the eye length before cataract surgery, or campimeters for evaluating the field of view are some of these new tools that are entering common optometric practice. All these instruments have at least one thing in common: the subject must look as steadily as possible at a fixational target in order to obtain a coarse centering of the eye pupil with the instrument axis and, of course, an accurate measurement.
Therefore we arrive at an interesting question: the eye moves in order to see the target, but on the other hand we need a steady eye for a correct measurement. The reader might be wondering at this point what kind of consequences derive from the existence of ocular movements during, for example, topographic or aberrometric measurements. Let us work through an example to visualize the problem. Suppose that you are in front of a mirror looking at your nose. If your forehead is parallel to the mirror, your nose looks different than if you turn your head to the right. But it is the same nose! What I am trying to show with this example is that the small turn of your head causes a significant change in the "observed" nose. This is what happens with topographic or aberrometric measurements. Section 2 will show that ocular aberrations are described with respect to a reference frame, and that if the eye turns a bit with respect to it, their representation changes significantly. The practical consequence of this fact is that the programmed contact lens fit or refractive surgery would not be as successful as it could be. Another important consequence is that the statistical analyses concerning individual and population measurements would be biased, causing an erroneous interpretation of the properties of the aberrometric data.




1.1. Fixational Eye Movements

Fixational eye movements are commonly classified into three main components (Abadi & Gowen, 2004) (Møller, 2006) (Martinez-Conde, Macknik, & Hubel, 2004) (Martinez-Conde, Macknik, Troncoso, & Hubel, 2009): drifts, a slow component with amplitude of 0.02°-0.15°; fast microsaccades, with 25 ms duration, amplitude of 0.22°-1.11° and frequency of 0.1-0.5 Hz; and tremors, with very low amplitude (0.001°-0.008°) but very high frequency (50-100 Hz). In order to quantify the magnitude of the ocular movement in terms of cones, and so take a different perspective, one can compute the lateral displacement of the cone mosaic by multiplying the angular rotation in radians by the radius of the eyeball (12.5 mm). By doing this we get that drifts induce approximately a movement of 10-65 µm, microsaccades of 90-480 µm, and tremors of 0.5-5 µm. This means that tremors move the image approximately 2 cones at maximum, drifts 5-30 cones, and microsaccades 45-200 cones. So, in terms of contribution to visual perception, tremors are supposed to have little influence, drifts being the most important component for maintaining fixation, and microsaccades fast movements for keeping attention (Martinez-Conde, Macknik, & Hubel, 2004) (Engbert, 2003). However, these considerations are still under revision. What is clearly recognized is that during steady state fixation the eye exhibits an erratic trajectory with the statistical characteristics of a random walk (Engbert & Kliegl, 2004) (Mergenthaler & Engbert, 2007). In the simplest form, the individual steps of a random walk can be generated by adding a random value to the previous position (see equation 1):

x_k = x_{k-1} + Δx_k    (1)

Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.

where xk and xk 1 are the coordinates of the kth and kth-1 positions, and xk is a random Gaussian variable of zero mean and a specified standard deviation (that depends on the process which is being modeled ). In case of drift simulation this value can be set around 1065 µm. The microsaccades can be incorporated in the model through a kind of refixation of the target when decentrations with respect to the first position of the sequence were bigger than approximatelly 90-480 µm. This simple model provides a good aproximation to the eye trajectories and ocular positions that would be measured by current eye-trackers. Bear in mind that no information concerning frequency of occurrence or velocity of the movement was taken into account. This model just simulates the position. More sophisticated models based on neurophysiological parameter can be implemented. One example is the model proposed by Mergenthaler-Engbert (Mergenthaler & Engbert, 2007) which implements a deleyed random walk model for the fixational eye movements.

wi 1  1    wi  i   tanh   wi   xi 1  xi  wi 1  i

(2)

This model incorporates a persistent term, (1 − γ) w_i, and an antipersistent term, −λ tanh(ε w_{i−τ}), where ξ_i is a Gaussian noise term and τ a neurophysiological delay. Persistent movements are those which are in




the same direction as the previous one, while antipersistent movements are in the opposite direction (so they are those involved in refixation). See reference (Mergenthaler & Engbert, 2007) for more details on the physiological origins of this model.
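A minimal numerical sketch of this delayed random walk, eq. (2), might look as follows. The parameter values and function name below are illustrative choices of ours, not the values fitted by Mergenthaler and Engbert, and the trajectory is in arbitrary units:

```python
import math
import random

def delayed_random_walk(n=1000, gamma=0.25, lam=0.15, eps=1.0,
                        tau=10, noise_sigma=1.0, seed=2):
    """Delayed random walk of eq. (2): a persistent velocity term
    (1 - gamma) * w_i plus Gaussian noise xi_i, counteracted by the
    delayed antipersistent feedback -lam * tanh(eps * w_{i - tau})."""
    random.seed(seed)
    w = [0.0] * (tau + 1)            # velocity history, zero before onset
    x = [0.0]                        # eye position
    for i in range(tau, tau + n):
        w_next = ((1.0 - gamma) * w[i]
                  + random.gauss(0.0, noise_sigma)
                  - lam * math.tanh(eps * w[i - tau]))
        w.append(w_next)
        x.append(x[-1] + w_next)
    return x

positions = delayed_random_walk()
```

The delayed feedback is what produces the short-range persistence and long-range antipersistence reported for fixational eye movement data.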


Figure 1. 50 trajectories of translational movements of the centre of the eye entrance pupil simulated with eq. (1). Three trajectories are emphasized in gray.
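Trajectories like those in figure 1 can be generated with a short Python sketch of the eq. (1) model. The drift step size and the refixation threshold below are illustrative values drawn from the drift and microsaccade ranges quoted in section 1.1, and the function name is our own:

```python
import random

def simulate_fixation(n_steps=500, drift_sigma_um=30.0,
                      refixation_threshold_um=300.0, seed=1):
    """Random-walk model of fixational eye movements, eq. (1):
    x_k = x_{k-1} + dx_k, with a crude microsaccadic refixation
    whenever the decentration from the start exceeds a threshold."""
    random.seed(seed)
    x, y = 0.0, 0.0
    trajectory = [(x, y)]
    for _ in range(n_steps):
        x += random.gauss(0.0, drift_sigma_um)   # drift step in x (µm)
        y += random.gauss(0.0, drift_sigma_um)   # drift step in y (µm)
        if (x**2 + y**2) ** 0.5 > refixation_threshold_um:
            x, y = 0.0, 0.0   # microsaccade back toward the target
        trajectory.append((x, y))
    return trajectory

traj = simulate_fixation()
```

As noted above, this simulates positions only; no statement about step timing or velocity is implied.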

An interesting aspect of fixational eye movements is that their properties are affected by endogenous and/or exogenous sources that affect attention (Gowen, Abadi, Poliakoff, Hansen, & Miall, 2007) (Van der Stigchel, Meeter, & Theeuwes, 2006) (Di Russo, Pitzalis, & Spinelli, 2003) (Murakami, Kitaoka, & Ashida, 2006). This means that, up to a certain level, fixational movements can be manipulated by visual, auditory, or any other stimuli that affect attention. So, the amplitude and frequency of fixational movements are expected to be influenced by the target design (Murakami, Kitaoka, & Ashida, 2006).

1.2. Eye-Tracking Systems

Eye movements are currently measured by eye-tracking systems. These devices can be used with a chin or forehead rest (head supported) or without any head support. Eye-tracking techniques can be classified as non-imaging or image-based techniques. The non-imaging ones include electro-oculography and scleral coil methods (Hess, Muri, & Meienberg, 1986) (Robinson, 1963). Image-based methods are the most widespread. All of them detect the movement of a fixed feature which moves with the eye (Hong & Krishnaswamy, 2006). Depending on the feature which is tracked we can talk about: limbus or pupil tracking; corneal or crystalline lens reflections (Purkinje images); and retinal tracking. Other high contrast features can be added to the eye through contact lenses. These video-based techniques also exploit the benefits of near-infrared illumination sources in order to increase the contrast of the tracked feature without disrupting the vision of the subject. Current video-based eye trackers, which are those normally used for viewing studies or refractive surgery, have achieved




sampling rates of the order of 1-2 kHz and accuracies of 0.005°. This means that they can measure lateral pupil displacements of the order of 1 µm.
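The quoted angular accuracy converts to a lateral displacement exactly as in section 1.1, using the 12.5 mm eyeball radius. A back-of-the-envelope check (not a manufacturer specification):

```python
import math

accuracy_deg = 0.005              # angular accuracy quoted above
eye_radius_mm = 12.5              # eyeball radius used in section 1.1
lateral_um = math.radians(accuracy_deg) * eye_radius_mm * 1000.0
print(round(lateral_um, 2))       # -> 1.09 µm, i.e. of the order of 1 µm
```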

2. Ocular Aberrometry

The term ocular aberrometry refers to the discipline devoted to the study of the ocular optical aberrations. We define an optical aberration as the distortion of the image formed by an optical system compared with the ideal one. This degradation is caused by the non-uniformity of the optical path length followed by the rays entering the different parts of the optical system. These differences are normally quantified as deviations of the wavefront (the surface normal to the rays) from a reference wavefront. Nowadays, the most widespread way of representing the optical aberrations is the orthogonal Zernike polynomial basis (Thibos, Applegate, Schwiegerling, & Webb, 2000). The idea is that any ocular wavefront aberration can be described as a linear combination of infinitely many Zernike polynomials:

W(x, y) = Σ_{i=0}^{∞} a_i Z_i(x/R, y/R)    (3)

where W(x,y) is the wavefront aberration, a_i is the ith modal coefficient, Z_i is the ith Zernike polynomial, x-y are the pupil coordinates and R is the pupil radius. Ocular aberrations are classified into low order (i ∈ [0, 5]) and high order (i ∈ [6, ∞)) aberrations. Myopia-hyperopia (Z_4) and astigmatism (Z_3 and Z_5) are thus low order aberrations, which are, in terms of visual impact, the most important ones. High order aberrations, although present in our eyes, are normally of small magnitude. Thus, the correction of the low order ones allows most people to achieve visual acuities above 1.2. However, there are subjects in whom the high order aberrations are significant and whose vision cannot be satisfactorily corrected with spectacles or contact lenses. In these cases the subject can manifest, for example, monocular diplopia or difficulties in the subjective refraction.

2.1. Wavefront Sensing

Wavefront sensors are the devices used to measure the wavefront aberrations. In ocular aberrometry the most widespread ones are the Hartmann-Shack (HS) sensor (Platt & Shack, 2001) and the Laser Ray Tracing (LRT) technique (Navarro & Losada, 1997) (Moreno-Barriuso & Navarro, 2000). We are going to focus on the former, but its description is in general also valid for the LRT. The Hartmann-Shack is a centroid-based gradient wavefront sensor. It consists of a wavefront sampling element (typically a microlens array) and an irradiance detection device (currently a digital camera) which allows for measuring the lateral displacement of the irradiance distribution associated with each sampling element from the one obtained with a reference wavefront. Each lateral displacement is related to the irradiance-weighted local




mean gradient of the wavefront that impinges on each sampling element (Primot, Rousset, & Fontanella, 1990):

X_C = (λd / 2π) · [ ∬_{xy} I(x, y) (∂W(x, y)/∂x) dx dy ] / [ ∬_{xy} I(x, y) dx dy ]    (4)

where X_C is the centroid displacement along the X coordinate, λ is the wavelength, I(x,y) is the irradiance at the microlens and d is the distance between the microlens array and the detection plane. Therefore, thanks to the HS scheme we get a data set of wavefront gradients along the X and Y directions. The question is how to reconstruct the impinging wavefront from these data. The standard procedure is to make a least squares estimation of the Zernike modal coefficients of eq. (3). By doing that, the estimated coefficients can be obtained by a simple multiplication of the vector of wavefront gradients m by the reconstruction matrix B:

aˆ  Bm B   AT A  AT -1

(5)

where the elements of A are:

A_si = [ ∬_{A_s} I(x, y) (∂Z_i(x/R, y/R)/∂x) dx dy ] / [ ∬_{A_s} I(x, y) dx dy ],    for s ≤ N_s

A_si = [ ∬_{A_s} I(x, y) (∂Z_i(x/R, y/R)/∂y) dx dy ] / [ ∬_{A_s} I(x, y) dx dy ],    for N_s < s ≤ 2N_s    (6)

where the subscript s corresponds to the sth microlens and N_s is the total number of microlenses.
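The reconstruction pipeline of eqs. (5)-(6) can be exercised numerically. The sketch below is ours, not the chapter's: it uses a toy four-mode basis (two tilts, one astigmatism-like and one defocus-like term) sampled at hypothetical microlens centres, assumes uniform irradiance over each sub-aperture, and recovers the modal coefficients from noise-free slope data:

```python
import numpy as np

def gradients(x, y):
    """Matrix A of eq. (6) for a toy basis, uniform I(x, y).
    Columns: tilt-x, tilt-y, astigmatism-like (2xy), defocus-like."""
    gx = np.stack([np.ones_like(x), np.zeros_like(x), 2 * y, 4 * x], axis=1)
    gy = np.stack([np.zeros_like(y), np.ones_like(y), 2 * x, 4 * y], axis=1)
    return np.vstack([gx, gy])          # x-slopes stacked over y-slopes

# Hypothetical microlens centres inside the unit pupil
g = np.linspace(-0.8, 0.8, 7)
X, Y = np.meshgrid(g, g)
inside = X**2 + Y**2 <= 1.0
x, y = X[inside], Y[inside]

A = gradients(x, y)
a_true = np.array([0.5, -0.2, 0.1, 0.3])    # "true" modal coefficients
m = A @ a_true                               # noise-free slope measurements

B = np.linalg.pinv(A)                        # B = (A^T A)^{-1} A^T, eq. (5)
a_hat = B @ m
print(np.allclose(a_hat, a_true))            # True for noise-free data
```

With noisy slopes the same B yields the least squares estimate; the practical point of eqs. (5)-(6) is that B is fixed by the sampling geometry and can be precomputed.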

Figure 2. Scheme of a Hartmann-Shack wavefront sensor. Plane XY corresponds with the microlens array, and plane UV is the detection plane.



The estimated coefficients therefore depend on the quality of the measurements and on the reconstruction matrix. Even in the case of error-free measurements the estimated coefficients might not equal the wavefront coefficients, due to the modal cross-coupling and aliasing present in the reconstruction matrix. See references (Herrmann, 1981) (Soloviev & Vdovin, 2005) for more details on this subject.


Figure 3. Wavefront sensor reference frame (WSRF), and Eye pupil reference frame (ERF).

There is an additional question that has to do with the definition of the x and y coordinates. As can be seen from eqs. (5)-(6), the values of the estimated modal coefficients depend on the coordinate values. So the question is: where should we place the origin of coordinates? Traditionally the reference frame has been placed at the center of the microlens array, and named the wavefront sensor's reference frame (WSRF). The estimated coefficients thus represent well the wavefront aberration as seen by the sensor. But if what we are trying to do is measure the aberration and understand its origin, we have to estimate the wavefront with respect to a reference frame defined on the aberrated element. This means that in ocular aberrometry the coordinates should be defined with respect to the center of the eye pupil, in the eye pupil reference frame (EPRF). The small fixational eye movements that occur during steady state fixation of a fixational target induce an effective lateral displacement of the eye pupil over its plane. They might also cause small rotations around the visual axis. Under these assumptions we can relate the WSRF and EPRF by a linear transformation of coordinates:

L(r′) = M(θ) (r − d)    (7)

where L represents the linear operator, r′ is the vector of coordinates in the WSRF, M(θ) is the rotation matrix, r is the vector of coordinates in the EPRF and d is the translation vector. It can be demonstrated (Arines, Prado, Bará, & Acosta, 2008) that when the relation between the WSRF and the EPRF is linear, the estimated coefficients are related by a transformation matrix S through the equation â = S â′. The elements of S can be obtained by the




projection of each Zernike polynomial expressed in the WSRF onto the ones represented in the EPRF (Arines, Prado, Bará, & Acosta, 2008):

S_kj = (1/g_R) ∬_{G_R} Z_j(r′) Z_k(r) d²r    (8)

where g_R is the eye pupil area and G_R the pupil region. This definition of the elements of S is useful for obtaining the modal coefficients in the EPRF from those of the WSRF, and it thus allows the estimated coefficients to be corrected for the error induced by the ocular movements.
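As an illustration of eq. (8), the element of S that couples a decentred defocus into horizontal tilt can be computed by direct numerical integration. The sketch below is ours and assumes a pure translation d = (dx, 0), no rotation, a unit pupil radius, and orthonormal Zernike modes; for these modes the analytical value is −2√3·dx, which the grid integration should approximate:

```python
import numpy as np

# Orthonormal Zernike modes on the unit pupil (subset, for illustration)
def z_tilt_x(x, y):
    return 2.0 * x

def z_defocus(x, y):
    return np.sqrt(3.0) * (2.0 * (x**2 + y**2) - 1.0)

def s_element(zk, zj, dx, n=801):
    """Numerical version of eq. (8) for a pure pupil translation dx:
    S_kj = (1/g_R) * integral over the pupil of Z_j(r - d) * Z_k(r)."""
    g = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(g, g)
    inside = X**2 + Y**2 <= 1.0           # pupil mask
    cell = (g[1] - g[0]) ** 2             # area element
    g_R = inside.sum() * cell             # pupil area (close to pi)
    integrand = zj(X - dx, Y) * zk(X, Y) * inside
    return integrand.sum() * cell / g_R

s = s_element(z_tilt_x, z_defocus, 0.1)
print(s)    # close to the analytical -2*sqrt(3)*0.1 ≈ -0.3464
```

This makes the chapter's point concrete: a pure lateral displacement of the pupil converts part of the defocus into an apparent tilt, so coefficients estimated in the WSRF differ from those in the EPRF.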


3. Influence of Fixational Eye Movements on Ocular Aberrometry

Since the development of objective ocular aberrometers, different research groups have analyzed the origins of the dynamical behavior of the ocular aberrations. Over the years scientists have identified several sources: the tear film, the crystalline lens, the cardiopulmonary rhythm, retinal pulsation... All these factors affect the optics of the eye. The cardiopulmonary rhythm induces a contraction of the ciliary muscles and thus a change in the crystalline lens curvature. Retinal pulsation affects the eye's axial length, and thus the amount of defocus. So, scientists looked for sources that affect the optical behavior of the eye. Fixational eye movements were not included in this group. The orbit of the eye presents a degree of rigidity that prevents corneal deformation by these small ocular movements, so their influence was initially neglected. Indeed, some works analyzed their influence and concluded that they have nearly no influence on the dynamics of the ocular aberrations (Hofer, Artal, Singer, Aragón, & Williams, 2001). And in fact, since eye movements do not modify the optics of the eye, they cannot contribute to the dynamics of the ocular aberrations. However, they do contribute to the measured ocular aberrations (notice the distinction between measured and actual aberrations). This contribution is a consequence of estimating the modal coefficients with respect to the WSRF instead of the EPRF. A recent work by Arines et al. studied the influence of the fixational eye movements on the estimated modal coefficients. The authors simulated static ocular aberrations moving over the WSRF following real and simulated fixational eye movements, in order to evaluate the magnitude of the influence of the ocular movements on the mean and variance of the estimated modal coefficients. Figure 4 shows the absolute values of the estimated coefficients for one of the ocular trajectories represented in figure 1.
The black line represents the original static aberration; the dark-grey and light-grey lines are the estimated coefficients in the WSRF and EPRF at each of the positions along the pupil trajectory. The black (WSRF) and white (EPRF) dots represent the mean values of the estimated modal coefficients. A magnified section of the graph is superimposed in order to show the bias in the estimation of the mean value in the WSRF and ERF.





Figure 4. Absolute values of the estimated coefficients for one of the ocular trajectories. i is the Zernike modal order. The graph shows the values obtained for each of the 50 positions along the pupil trajectory as well as coefficient mean value. The dark grey lines show the results for the WSRF, the light grey lines for the ERF. Black dots represent WSRF mean values, white dots ERF mean values. The black line shows the original static coefficients. See (Arines, Pailos, Prado, & Bará, 2009) for more details.

Figure 5. Standard deviation of the coefficients estimated in the WSRF (black) and ERF (gray). See (Arines, Pailos, Prado, & Bará, 2009) for more details.

We can see in fig. 4 that eye movements cause the estimated Zernike coefficients to differ significantly from their static values, the differences in absolute value being significantly higher for the coefficients estimated with respect to the WSRF. Besides, one can observe that the coefficients of 9th order are the same in both reference frames. This fact is a consequence of the upper triangular form of matrix S, see (Arines, Prado, Bará, & Acosta, 2008). Additional consequences of the form of S are that the magnitude of the estimation error, and the difference between estimating the wavefront in the WSRF or the EPRF, increase as the magnitude of the high order aberrations does. Concerning the variability of the estimated coefficients induced by ocular movements, figure 5 shows the standard deviations of the estimated coefficients with respect to the





wavefront sensor (in black) and the eye (in gray) reference frames, for the same ocular trajectory used in figure 4. We can observe that the standard deviation decreases with increasing Zernike mode index for the coefficients estimated in the WSRF, while for those computed with respect to the EPRF the standard deviation shows little change. The work of Arines et al. also analyzed the relation between the mean trajectory decentration r_j and the root-mean-square error of the estimated wavefront. They found a direct relation between both magnitudes when using the WSRF, the relation being different in the case of the EPRF. Figure 6 presents their results. So we have seen that the movement of the eye induces uncertainty and variability in the estimated coefficients. This uncertainty and variability depends on the magnitude of the high order aberrations and on the trajectory followed by the eye pupil. Additionally, we showed that the bias and variability of the estimated coefficients can be reduced by estimating them with respect to the eye pupil reference frame (EPRF). This can be done by defining the coordinates used to build the least squares reconstruction matrix with respect to the EPRF, or by correcting the coefficients estimated in the WSRF through multiplication by the transformation matrix S.

Figure 6. Residual root-mean-square error, rms(W − Ŵ), versus mean trajectory decentration r_j, for the coefficients obtained in the WSRF (fig. 6a) and the ERF (fig. 6b). The solid black line refers to the case of eye A and the dotted black line to eye B. See (Arines, Pailos, Prado, & Bará, 2009) for more details.




3.1. Refractive Surgery and Customized Contact Lenses

The bias induced by fixational eye movements on aberrometric data has a direct consequence on the design of the ablation pattern in refractive surgery and of customized contact lenses. In both cases the correction achieved is reduced by the aforementioned bias. Of course, this reduction depends on the fixational characteristics of the subject and on his or her optical aberrations. The presence of fixational eye movements during refractive surgery has motivated several studies. Since the 1990s scientists have been aware of the importance of correctly centering the cornea on the axis of the ablating laser. The first studies used postoperative corneal topographic data to evaluate the decentration of the ablated profile (Klyce & Smolek, 1993) (Amano, Tanaka, & Shimizu, 1994). These data were also correlated with visual acuity performance (Azar & Yeh, 1997), showing a direct correlation between acuity reduction and ablation decentration. The decentration found through topographic or aberrometric data is compatible with the magnitude and characteristics of fixational eye movements; different works report mean ablation decentrations of the order of 0.3±0.2 mm. The identification of ablation decentration as a significant cause of failure in refractive surgery motivated the development and inclusion of eye-trackers in the ablation process (Gobbi et al., 1995). The first eye-trackers introduced in refractive surgery systems had an accuracy of 0.1 mm and a latency of the order of 100 ms. Their inclusion represented a significant improvement, although not as large as expected. Theoretical studies conclude that, in order to achieve a high degree of correction, the alignment accuracy should be below 0.7 mm for pupils of 7.0 mm diameter and below 0.2 mm for pupils of 3 mm diameter (Bueeler, Mrochen, & Seiler, 2003), and the correction latency well below 100 ms (Bueeler & Mrochen, 2005).
The relevance of fixational eye movements is so high that the latest generation of refractive surgery units includes eye-tracking systems, and their accuracy and speed are among the main characteristics that experts take into account.
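These latency requirements can be made concrete with a back-of-the-envelope estimate: during the tracker's dead time the eye keeps moving, so the uncorrected offset is roughly the eye's translational velocity times the latency. The velocity figure below is purely illustrative; the tolerances themselves remain those of Bueeler et al.

```python
# Residual misalignment left by a tracker of given latency, assuming the
# pupil translates at a constant speed during the dead time. The 1 mm/s
# speed is an illustrative order of magnitude, not a measured value.
def residual_decentration_mm(eye_speed_mm_s: float, latency_ms: float) -> float:
    return eye_speed_mm_s * latency_ms / 1000.0

for latency_ms in (100.0, 10.0):   # early trackers vs a faster modern loop
    offset = residual_decentration_mm(1.0, latency_ms)
    print(f"latency {latency_ms:5.1f} ms -> ~{offset:.2f} mm uncorrected offset")
```

With these illustrative numbers, a 100 ms loop leaves about 0.1 mm of uncorrected offset, the same order as the accuracy of the first trackers, which is consistent with the observation above that their benefit was smaller than expected.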

4. Future Prospects

We have shown in the previous sections the influence of fixational eye movements on the estimated modal coefficients. However, although the usefulness of eye-trackers in refractive surgery is beyond doubt, their inclusion in ocular wavefront sensors has yet to arrive. Recent studies suggest the relevance of tracking the eye pupil during aberrometric measurements in order to reduce the bias and variability induced by eye movements (Arines, Pailos, Prado, & Bará, 2009). We think that in the coming years ocular wavefront sensors will incorporate eye-trackers in order to avoid this error and thus increase the accuracy of aberrometric measurements. This improvement would benefit not only the refractive correction of individual subjects, but also our understanding of the real statistical properties of the dynamics of ocular aberrations, the development of more accurate statistical models for proper simulation of ocular aberrations, and the development of more reliable modal estimation algorithms.


There is another question that remains to be answered: what is the optimal fixation target? Over the years, scientists have used different targets to maintain fixation during aberrometric measurements; Maltese crosses, circular spots, and small letters are some of them. It is well known that fixational movements depend on attention, visual fading, illusory movement, and other factors. These issues were not considered in the design of the targets used in current research and commercial wavefront sensors. The future development of optimal targets would contribute to reducing fixational eye movements and therefore to improving aberrometric measurements and refractive surgery.

5. Conclusion


We have shown in this chapter the interconnection between fixational eye movements and ocular aberrometry. The ocular movements occurring during wavefront measurements induce significant amounts of bias and variability that affect both statistical studies of the dynamics of ocular aberrations and the measurement of individual ocular aberrations, limiting in the latter case the performance of their correction through refractive surgery or customized contact lenses. The error induced by ocular movements argues for the introduction of eye-tracking systems in ocular wavefront sensors in order to measure and correct its effects. Additionally, we think that ocular aberrometry would benefit from the development of more sophisticated fixation targets, designed to minimize fixational movements.

I want to acknowledge financial support from the Isidro Parga Pondal Programme 2009 (Xunta de Galicia, Spain). This work has been supported by the Spanish MICINN, grants FIS2008-03884 and FIS2008-00697.

References

Abadi, R. & Gowen, E. (2004). Characteristics of saccadic intrusions. Vision Res., 44, 2675-2690.
Amano, S., Tanaka, S. & Shimizu, K. (1994). Topographical evaluation of centration of excimer laser myopic photorefractive keratectomy. J Cataract Refract Surg., 20(6), 616-619.
Arines, J., Pailos, E., Prado, P. & Bará, S. (2009). The contribution of the fixational eye movements to the variability of the measured ocular aberrations. Ophthalmic and Physiological Optics, 29, 281-287.
Arines, J., Prado, P., Bará, S. & Acosta, E. (2008). Equivalence of least-squares estimation of eye aberrations in linearly transformed reference frames. Optics Communications, 281, 2716-2721.
Azar, D. T. & Yeh, P. C. (1997). Corneal topographic evaluation of decentration in photorefractive keratectomy: treatment displacement vs intraoperative drift. Am J Ophthalmol., 124(3), 312-320.
Barlow, H. (1952). Eye movements during fixation. J. Physiol., 290-306.


Bueeler, M. & Mrochen, M. (2005). Simulation of Eye-tracker Latency, Spot Size, and Ablation Pulse Depth on the Correction of Higher Order Wavefront Aberrations With Scanning Spot Laser Systems. Journal of Refractive Surgery, 21, 28-36.
Bueeler, M., Mrochen, M. & Seiler, T. (2003). Maximum permissible lateral decentration in aberration-sensing and wavefront-guided corneal ablation. Journal of Cataract & Refractive Surgery, 29(2), 257-263.
Di Russo, F., Pitzalis, S. & Spinelli, D. (2003). Fixation stability and saccadic latency in elite shooters. Vision Research, 43, 1837-1845.
Engbert, R. & Kliegl, R. (2004). Microsaccades Keep the Eyes' Balance During Fixation. Psychological Science, 15(6), 431-436.
Gobbi, P., Carones, F., Brancato, R., Carena, M., Fortini, A., Scagliotti, F., et al. (1995). Automatic eye tracker for excimer laser photorefractive keratectomy. J Refract Surg., 11(3), S337-S342.
Gowen, E., Abadi, R., Poliakoff, E., Hansen, P. & Miall, R. (2007). Modulation of saccadic intrusions by exogenous and endogenous attention. Brain Research, 1141, 154-167.
Herrmann, J. (1981). Cross coupling and aliasing in modal wave-front estimation. J. Opt. Soc. Am., 71(8), 989-992.
Hess, C. W., Muri, R. & Meienberg, O. (1986). Recording of horizontal saccadic eye movements: methodological. Neuro-Ophthalmology, 6, 264-272.
Hofer, H., Artal, P., Singer, B., Aragón, J. L. & Williams, D. R. (2001). Dynamics of the eye's wave aberration. J. Opt. Soc. Am. A, 18, 497-506.
Hong, H. & Krishnaswamy, P. (2006). Video-based eyetracking methods and algorithms in head-mounted displays. Optics Express, 14(10), 4328-4350.
Klyce, S. D. & Smolek, M. K. (1993). Corneal topography of excimer laser photorefractive keratectomy. J Cataract Refract Surg., 19, 122-130.
Martinez-Conde, S., Macknik, S. L., Troncoso, X. G. & Hubel, D. H. (2009). Microsaccades: a neurophysiological analysis. Trends in Neurosciences, 32(9), 463-475.
Martinez-Conde, S., Macknik, S. & Hubel, D. (2004). The role of fixational eye movements in visual perception. Nat Rev Neurosci, 5(3), 229-240.
Mergenthaler, K. & Engbert, R. (2007). Modeling the control of fixational eye movements with neurophysiological delays. Phys Rev Lett., 98(13), 138104.1-138104.4.
Møller, F. L. (2006). The contribution of microsaccades and drifts in the maintenance of binocular steady fixation. Graefe's Arch. Clin. Exp. Ophthalmol., 244, 465-471.
Moreno-Barriuso, E. & Navarro, R. (2000). Laser Ray Tracing versus Hartmann–Shack sensor for measuring optical aberrations in the human eye. J. Opt. Soc. Am. A, 17, 974-985.
Murakami, I., Kitaoka, A. & Ashida, H. (2006). A positive correlation between fixation instability and the strength of illusory motion in a static display. Vision Research, 46, 2421-2431.
Navarro, R. & Losada, M. (1997). Aberrations and relative efficiency of light pencils in the living human eye. Opt. Vis. Sci., 74(7), 540-547.
Primot, J., Rousset, G. & Fontanella, J. (1990). Deconvolution from wavefront sensing: a new technique for compensating turbulence-degraded images. J. Opt. Soc. Am. A, 7(9), 1598-1608.


Platt, B. & Shack, R. (2001). History and Principles of Shack-Hartmann Wavefront Sensing. Journal of Refractive Surgery, 17, S573-S577.
Engbert, R. & Kliegl, R. (2003). Microsaccades uncover the orientation of covert attention. Vision Research, 43, 1035-1045.
Robinson, D. A. (1963). A method of measuring eye movements using a scleral search coil in a magnetic field. IEEE Trans. Biomed. Electron., 10, 137-145.
Soloviev, O. & Vdovin, G. (2005). Hartmann-Shack test with random masks for modal wavefront reconstruction. Optics Express, 13(23), 9570-9584.
Thibos, L. N., Applegate, R. A., Schwiegerling, J. T. & Webb, R. (2000). Standards for reporting the optical aberrations of eyes. In V. Lakshminarayanan (Ed.), Trends in Optics and Photonics: Vision Science and Its Applications, OSA Technical Digest Series, 35, pp. 232-244. Washington, D.C.: Optical Society of America.
Van der Stigchel, S., Meeter, M. & Theeuwes, J. (2006). Eye movement trajectories and what they tell us. Neuroscience and Biobehavioral Reviews, 30, 666-679.


In: Eye Movement: Theory, Interpretation, and Disorders ISBN: 978-1-61728-110-5 Editor: Dominic P. Anderson, pp. 81-89 © 2011 Nova Science Publishers, Inc.

Chapter 4

Eye-Movement Patterns in Hemispatial Neglect

Sergio Chieffi 1,*, Alessandro Iavarone 2,3, Andrea Viggiano 4, Giovanni Messina 1, Marcellino Monda 1 and Sergio Carlomagno 5

1 Department of Experimental Medicine, Section of Physiology, Second University of Naples
2 Neurological and Stroke Unit, CTO Hospital, Naples, Italy
3 Department of Relational Sciences, University of Naples “Federico II”
4 Department of Study of Institutions and Territorial Systems, University of Naples “Parthenope”, Italy
5 Department of Psychology, University of Trieste, Italy

Summary

Hemispatial neglect is usually defined as a failure to attend to the contralesional side of space. One approach to the study of neglect has been to evaluate the eye movement patterns of neglect patients, on the assumption that eye movements are a valid indicator of the direction of their spatial attention. Eye movements have been analyzed while patients performed different kinds of tasks, such as line bisection, visual search, text reading, and scene and face viewing. Overall, these studies showed that in neglect patients visual fixations and attention are oriented preferentially towards the ipsilesional side, with a marked lack of active exploration of the contralesional side.

Patients with neglect typically fail to report or respond to stimuli located contralateral to the lesion. In the acute phase, neglect patients may present a more or less complete deviation of the eyes and head towards the side of the lesion. If the examiner speaks to the patient from the left side, the patient responds towards the opposite side [17]. Even in the absence of hemiplegia or

* E-mail address: [email protected]. Corresponding author: Prof. Sergio Chieffi, Seconda Università degli Studi di Napoli, Dipartimento di Medicina Sperimentale, Via Costantinopoli 16, 80138 Napoli, Italy.


severe hemiparesis, the patient does not tend to use the left limbs (Motor Neglect) [15]. In some cases, the patient does not move the left arm at the examiner's request, yet may move it in semiautomatic activities such as using a handkerchief [7]. Even when the patient is able to carry out basic activities autonomously, he does not seem to explore or notice what lies on the left side of his body or of the environment. The presence of neglect can be demonstrated with a number of tests. One of the simplest is the Albert test [1], which consists of marking with a pencil lines drawn on a sheet; omissions or abnormal latencies can be observed on the compromised side of space. If a subject with neglect is asked by the examiner to mark the midpoint of a horizontal line, he generally marks a point to the right of the true centre. In drawing elementary figures, such as a daisy or a clock face, he omits the petals and the hours on the left side. Neglect is more frequent and severe following a lesion of the hemisphere opposite to the one in which language is represented, in particular at the level of the inferior parietal lobule [7]. However, cases are well known in which the lesion is apparently confined to other regions, for example the frontal lobe or subcortical structures such as the thalamus and basal ganglia [7]. The main interpretations of neglect can be grouped into three categories [7]: (1) interpretations based on elementary levels of central nervous activity, according to which transmission of sensory information to the damaged hemisphere is reduced or disturbed [5,16]; (2) representational interpretations, which derive from the observation that neglect manifests not only as a defect of perception and exploration but also as a unilateral deficit of mental representation [2,10,11,22,44]; (3) attentional interpretations.
Kinsbourne [38,39] postulated the existence of two antagonistic attentional vectors: one, depending on the activity of the left hemisphere, directs attention toward the right side of egocentric space; the other, depending on the activity of the right hemisphere, directs it toward the opposite side. In physiological conditions, the vector depending on the left cerebral hemisphere prevails over its antagonist, so that the right side of space is privileged. In case of a right hemisphere lesion, attention is strongly biased in favour of the right side. The model of Heilman et al. [23] postulates that the attentional system of the right hemisphere covers the whole space, while that of the left hemisphere covers only the right half of egocentric space. Therefore, a lesion of the right hemisphere would deprive the patient of the ability to turn attention toward the left half of space. When we interact with the external world, a great quantity and variety of stimuli is offered to our vision. However, only a small part of this enormous amount of information reaches our consciousness and influences our behaviour. In this complex process of filtering and selection, both top-down and bottom-up attentional mechanisms are involved. Top-down processes consist of concept-driven encoding and processing of stimuli, i.e. based on context, beliefs, desires, knowledge, etc. Conversely, bottom-up processes encompass data- or stimulus-driven mechanisms based on structural features of the stimuli, such as colour, brightness, orientation, etc.


The pattern of eye movements is often taken as a robust and transparent index of the distribution of attention, since eye movement behaviour is considered to parallel that of underlying spatial attention [18,20,25,31,40,41,53,54]. Studying the distribution of eye movements through detailed oculographic analysis may therefore provide insight into the nature of the attentional deficit underlying the visuospatial behaviour of neglect. A number of studies have characterized the pattern of eye movements in patients with neglect while they simultaneously performed a specific task, e.g. line bisection [3,27,28,29,37], visual search [13], text reading [34,45], and scene and face viewing [35,48]. Overall, these studies have shown an abnormal pattern characterized by fewer contralesional than ipsilesional saccades and prolonged search times for ipsilesional targets. Further, neglect patients are slower to initiate leftward saccades and generally adopt a rightward (rather than leftward) position for starting their visual exploration [14,18,27,28,30,52]. Hornak [26] and Karnath's group [32,33,34] studied the space exploration of neglect patients in complete darkness and observed that their ocular exploration was confined almost entirely to the right of the midsagittal plane; moreover, the patients explored the right side more than the control subjects did. These findings provided support for Kinsbourne's view [38,39], in that the mean percentage of fixations made by the neglect patients fell off gradually from right to left. The relationship between the pattern of ocular exploration and the distribution of attention in visual search tasks was further studied by Behrmann et al. [4] and Karnath et al. [36]. Behrmann et al.
[4] found that, relative to normal subjects and to patients with hemianopia without neglect, patients with left neglect started their search to the right of the midline and made significantly more, and longer, fixations on the ipsilesional right side. Although the attentional deficit showed a left-right gradient, the peak of fixations was not at the extreme right, as a strict gradient account would predict, but at a medium eccentricity. Behrmann et al. [4] suggested that their observations were consistent with the view that the midsagittal plane of the viewer (the neglect patient) is redirected rightwards. Similar results were obtained by Karnath et al. [36], who observed a deviation of patients' gaze towards the ipsilesional side, but not towards the extreme ipsilesional right. The authors [36] suggested that in neglect patients the whole frame of exploratory behaviour is rotated around an earth-vertical body axis to a new equilibrium on the right. Another interesting observation made by Karnath et al. [36] was a marked discrepancy between the neglect patients' orientation of gaze when they spontaneously explored the surroundings and their ability to direct the gaze when explicitly instructed: neglect patients were able to direct their gaze as far to the left and right as the controls when requested to do so by the experimenter. This finding ruled out the possibility that the ipsilesional deviation of spontaneous exploratory movements depends on basic oculomotor deficits. This aspect was further investigated by Niemeier and Karnath [47], who compared identical saccades performed either (i) when saccade targets were deliberately selected ("voluntary" saccades) or (ii) when saccade targets suddenly appeared ("stimulus-driven" saccades). In the first task (voluntary saccades), subjects freely searched for the letter A in a random array of letters.
The subjects' scan paths in performing this task were recorded and used to generate the stimulus-driven task. In the second task (stimulus-driven saccades), participants viewed the same display but had to follow a red square, which took exactly the same path as the fixations made by subjects in the


previous task. The results showed that (i) in the voluntary saccades condition, neglect patients tended to search on the right side of the array and ignore the left, but there was no major discrepancy between leftward and rightward saccade amplitude and frequency; (ii) in the stimulus-driven saccades condition, saccades were of reduced amplitude, particularly leftward ones. The authors [47] concluded that the contralesional saccadic deficit in neglect may involve primarily reflexive rather than volitional eye movements. Neglect patients often refixate stimuli on their ipsilesional side [4]. Mannan et al. [42] examined whether neglect patients were unaware of refixating locations they had previously inspected. The study involved neglect patients whose lesions were mapped with high-resolution MRI. The authors [42] asked patients to click a response button only when they judged they were fixating a target for the very first time. In this way, “re-clicking” on previously found targets would indicate that patients erroneously responded to them as new discoveries. The authors [42] found that neglect patients with damage involving the right intraparietal sulcus or right inferior frontal lobe “re-clicked” on previously found targets on the right. For the intraparietal sulcus patients, the probability of erroneous re-clicks on an old target increased with the time elapsed since its first discovery. This behaviour is consistent with a deficit in remembering target locations, or in remapping and updating representations of target locations across saccades. For the frontal patients the probability of erroneous re-clicks was independent of search time, suggesting that frontal patients may show a 'perseverative' type of behaviour. Another important line of research is represented by studies that have analysed eye movements in neglect patients while they performed a line bisection task. The line bisection test is seemingly simple and widely used in clinical and research settings.
However, the significance of the errors made by patients with neglect in the bisection task, and their meaning in the context of the neglect syndrome, is not yet fully understood. For example, an intriguing finding is the relative lack of correspondence between the degree of deficit observed in the bisection task and that observed in other tasks, such as target cancellation [6,21,50] and drawing [9,49]. Ishiai et al. [27,28] compared the pattern of eye movements when patients with neglect freely explored a horizontal line [27] and when they explored the line to locate and mark its centre with a pencil (bisection task) [28]. In their first study, Ishiai et al. [27] analysed eye-fixation patterns in homonymous hemianopic patients with or without unilateral spatial neglect, and in control subjects, while they freely and carefully viewed simple patterns, namely a horizontal line and a rectangle. The results showed that the pattern of eye movements differed among the three groups: (i) control subjects looked mainly at the centre of each pattern and at both sides almost equally; (ii) hemianopic patients without unilateral spatial neglect adopted the strategy of looking longer at the hemianopic side of the pattern, in order to compensate for their visual field deficit; (iii) hemianopic patients with unilateral spatial neglect showed no tendency to look longer at the hemianopic side of the pattern; in other words, they lacked the compensatory eye-fixation pattern for their left homonymous hemianopia. In a subsequent study, Ishiai et al. [28] investigated whether neglect subjects bisected the line segment perceived in the seeing right visual field or neglected the left side even of this segment. To answer this question, Ishiai et al. [28] studied the relationship between the subjective midpoint and the actually perceived extent by observing the eye-fixation pattern and the marking procedure.
The results showed that: (i) normal controls showed a tendency to look to the right in the line bisection test and scarcely searched to the right or left


endpoint of the line; (ii) both left and right hemianopic patients without unilateral spatial neglect shifted their gaze to search for the endpoint of the line on the hemianopic side and bisected the line correctly; (iii) in the case of left hemianopia with unilateral spatial neglect, the amount of searching to the right, that is, towards the side of the normal field, was almost the same as in normal controls. However, left hemianopic patients with unilateral spatial neglect made no searches to the left. Once they fixated a certain point on the right part of the line, they persisted at this point and marked the subjective midpoint there. Taking the left homonymous hemianopia into account, the subjective midpoint appeared to be marked not at the centre of the line segment perceived in the seeing right visual field, but at its leftmost point. Ishiai et al. [28] interpreted their results by applying the concept of completion to unilateral spatial neglect in the line bisection test. The authors [28] hypothesized that left hemianopic patients with unilateral spatial neglect see a completed image of the line extending equally to either side of the point where they are going to mark the subjective midpoint. More precisely, they [28] hypothesized that: (i) the visual input about the right part of the line is projected to the left hemisphere from the right visual field; (ii) the damaged right hemisphere uses this visual input, probably via the corpus callosum, to construct a totalized image of the line extending equally to either side of the fixation point; (iii) the right hemisphere thus completes the line, using the visual input relating to the right part of the line perceived by the left hemisphere. Barton et al. [3] found more heterogeneous patterns of ocular searching than the study of Ishiai et al. [28]. The authors [3] required patients to examine the entire line and then to touch the midpoint.
The results showed a broad distribution of fixation peaks in the ipsilateral hemispace. Further, it was not uncommon for patients to search leftward beyond the point where the subjective midpoint was later placed. The differences between the results obtained by Barton et al. [3] and those obtained by Ishiai et al. [28] might partly reflect the different instructions used in the two studies: Barton et al. [3] explicitly asked the patients to examine the entire line, whereas Ishiai et al. [28] asked them to bisect it. Kim et al. [37] studied search patterns using a line bisection task in a patient with left premotor-intentional neglect and in control subjects. Each trial began with the presentation on a screen of a number followed by a line. The number was presented at different locations on the screen. Subjects were instructed to look at this number (initial position). Subsequently, when the line was presented, they were to look at the whole line, at both of its ends, and then at its center. Independently of initial position, most normal subjects in most trials initially oriented to the left end of the line and then scanned rightward. The initial orienting to the left side might depend both on right-hemisphere attentional and intentional dominance [24,51] and on reading habit: the subjects were readers of European languages and read from left to right, so the left-right scan pattern might be a learned response. Following the rightward scan, normal subjects looked leftward back to the center of the line. Conversely, in the patient with neglect, independently of initial position, eye movements were directed only to the right end of the line. The patient did not perform any leftward eye movement back to the center of the line. Kim et al. [37] hypothesized that the defective line exploration in the neglect patient might be attributed to a directional ocular motor intentional deficit.
This deficit might be related to: (i) an inability to activate the gaze systems that mediate leftward movement, (ii) an inability to reduce or inhibit the activation of the systems that mediate rightward gaze, or (iii) a combination of activator and disengagement failures.
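Bisection performance in these studies is commonly quantified as the signed horizontal error of the marked midpoint relative to the true centre, often examined as a function of line length. The sketch below fits such a length dependence on synthetic data; the 8% proportional rightward error and the noise level are illustrative assumptions, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic bisection trials: rightward error modelled as a fixed fraction
# of line length plus trial-to-trial noise (both figures invented).
lengths_mm = np.repeat([50.0, 100.0, 150.0, 200.0], 20)   # 20 trials per length
errors_mm = 0.08 * lengths_mm + rng.normal(0.0, 2.0, lengths_mm.size)

# Least-squares regression of error on length; a positive slope is the
# signature of an error that grows with stimulus length.
slope, intercept = np.polyfit(lengths_mm, errors_mm, 1)
print(f"rightward error grows by ~{slope * 100:.1f} mm per 100 mm of line")
```

A proportional error would show up as a slope close to the assumed fraction and an intercept near zero, whereas a constant rightward offset would show up in the intercept alone.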


The magnitude of the rightward error in neglect patients increases as a function of stimulus length [8,12,46]. Marshall and Halligan [43] hypothesized that this pattern of errors might reflect the direction of attentional approach to the subjective midpoint: neglect patients should approach the subjective midpoint from the right side because of an initial rightward shift of attention [19]. The way in which the bisection test is usually administered might itself induce a rightward shift of attention: generally, the examiner presents on the table top, one after another, sheets of paper on which the lines to be bisected are printed. This mode of presentation could influence the starting point of the eye movements and hence the patient's performance. To elucidate how neglect patients approach the subjective midpoint for lines of various lengths, Ishiai et al. [29] analysed eye-fixation movements from the time before line presentation. Lines appeared across the centre of a monitor. The results showed that the fixation immediately before line presentation was located, on average, near the centre of the lines. In most trials, patients approached the subjective midpoint directly from the left side, starting from the fixation held before line presentation. Further, Ishiai et al. [29] observed that the subjective midpoint frequently deviated leftward on the “attended” segment between the leftmost point of fixation and the right endpoint, whereas it was displaced rightward on the total extent. These findings contradicted the hypothesis of Marshall and Halligan [43]: Ishiai et al. [29] found no predictive relationship between the direction of the final scan and the direction of the bisection error, nor did they observe a prevalence of right-to-left overt scans among neglect patients making bisection responses. In conclusion, the analysis of eye movement patterns has proved to be an important instrument for investigating and elucidating the mechanisms underlying neglect.
As noted above, this approach is based on the idea that eye movement behaviour parallels that of underlying spatial attention. However, in evaluating and interpreting the data collected, two critical aspects must be considered: (i) neglect is a heterogeneous syndrome with a variety of behavioural features induced by different mechanisms; and (ii) the eye-fixation patterns of patients with unilateral neglect may vary widely with tasks, stimuli, and instructions.

References

[1] Albert, ML. A simple test of visual neglect. Neurology, 1973, 23, 658-64.
[2] Barbut, D; Gazzaniga, MS. Disturbances in conceptual space involving language and speech. Brain, 1987, 110, 1487-96.
[3] Barton, JJ; Behrmann, M; Black, S. Ocular search during line bisection. The effects of hemi-neglect and hemianopia. Brain, 1998, 121, 1117-31.
[4] Behrmann, M; Watt, S; Black, SE; Barton, JJ. Impaired visual search in patients with unilateral neglect: an oculographic analysis. Neuropsychologia, 1997, 35, 1445-58.
[5] Bender, MB. Extinction and other patterns of sensory inattention. In: Weinstein EA, Friedland RP, editors. Hemi-Inattention and Hemisphere Specialization. New York: Raven Press, 1977, 107-110.
[6] Binder, J; Marshall, R; Lazar, R; Benjamin, J; Mohr, JP. Distinct syndromes of hemineglect. Arch Neurol, 1992, 49, 1187-94.
[7] Bisiach, E. Negligenza spaziale unilaterale e altri disordini unilaterali di rappresentazione. In: Denes G, Pizzamiglio L, editors. Manuale di neuropsicologia. Normalità e patologia dei processi cognitivi. Bologna: Zanichelli, 1990, 639-61.






[8] Bisiach, E; Bulgarelli, C; Sterzi, R; Vallar, G. Line bisection and cognitive plasticity of unilateral neglect of space. Brain Cogn, 1983, 2, 32-8.
[9] Bisiach, E; Capitani, E; Colombo, A; Spinnler, H. Halving a horizontal segment: a study on hemisphere-damaged patients with cerebral focal lesions. Schweiz Arch Neurol Neurochir Psychiatr, 1976, 118, 199-206.
[10] Bisiach, E; Capitani, E; Luzzatti, C; Perani, D. Brain and conscious representation of outside reality. Neuropsychologia, 1981, 19, 543-51.
[11] Bisiach, E; Luzzatti, C. Unilateral neglect of representational space. Cortex, 1978, 14, 129-33.
[12] Butter, CM; Mark, VW; Heilman, KM. An experimental analysis of factors underlying neglect in line bisection. J Neurol Neurosurg Psychiatry, 1988, 51, 1581-3.
[13] Chédru, F. Space representation in unilateral spatial neglect. J Neurol Neurosurg Psychiatry, 1976, 39, 1057-61.
[14] Chédru, F; Leblanc, M; Lhermitte, F. Visual searching in normal and brain-damaged subjects (contribution to the study of unilateral inattention). Cortex, 1973, 9, 94-111.
[15] Critchley, M. The Parietal Lobes. London: Edward Arnold, 1953.
[16] Denny-Brown, D; Meyer, JS; Horenstein, S. The significance of perceptual rivalry resulting from parietal lesion. Brain, 1952, 75, 433-71.
[17] De Renzi, E. Disorders of space exploration and cognition. New York: Wiley, 1982.
[18] Gainotti, G. The role of spontaneous eye movements in orienting attention and in unilateral neglect. In: Robertson IH, Marshall JC, editors. Unilateral Neglect: Clinical and Experimental Studies. Hove (UK): Lawrence Erlbaum Associates, 1993, 107-122.
[19] Gainotti, G; D'Erme, P; Bartolomeo, P. Early orientation of attention toward the half space ipsilateral to the lesion in patients with unilateral brain damage. J Neurol Neurosurg Psychiatry, 1991, 54, 1082-9.
[20] Girotti, F; Casazza, M; Musicco, M; Avanzini, G. Oculomotor disorders in cortical lesions in man: the role of unilateral neglect. Neuropsychologia, 1983, 21, 543-53.
[21] Halligan, PW; Marshall, JC. Left visuo-spatial neglect: a meaningless entity? Cortex, 1992, 28, 525-35.
[22] Halsband, U; Gruhn, S; Ettlinger, G. Unilateral spatial neglect and defective performance in one half of space. Int J Neurosci, 1985, 28, 173-95.
[23] Heilman, KM; Bowers, D; Valenstein, E; Watson, RT. Hemispace and hemispatial neglect. In: Jeannerod M, editor. Neurophysiological and Neuropsychological Aspects of Spatial Neglect. Amsterdam: Elsevier, 1987, 115-150.
[24] Heilman, KM; Van Den Abell, T. Right hemisphere dominance for attention: the mechanism underlying hemispheric asymmetries of inattention (neglect). Neurology, 1980, 30, 327-30.
[25] Hoffman, JE. Visual attention and eye movements. In: Pashler H, editor. Attention. London: University College London Press, 1998, 119-154.
[26] Hornak, J. Perceptual completion in patients with drawing neglect: eye-movement and tachistoscopic investigations. Neuropsychologia, 1995, 33, 305-25.
[27] Ishiai, S; Furukawa, T; Tsukagoshi, H. Eye-fixation patterns in homonymous hemianopia and unilateral spatial neglect. Neuropsychologia, 1987, 25, 675-9.
[28] Ishiai, S; Furukawa, T; Tsukagoshi, H. Visuospatial processes of line bisection and the mechanisms underlying unilateral spatial neglect. Brain, 1989, 112, 1485-502.





[29] Ishiai, S; Koyama, Y; Seki, K; Hayashi, K; Izumi, Y. Approaches to subjective midpoint of horizontal lines in unilateral spatial neglect. Cortex, 2006, 42, 685-91.
[30] Ishiai, S; Sugishita, M; Mitani, K; Ishizawa, M. Leftward search in left unilateral spatial neglect. J Neurol Neurosurg Psychiatry, 1992, 55, 40-4.
[31] Jahnke, MT; Denzler, P; Liebelt, B; Reichert, H; Mauritz, KH. Eye movements and fixation characteristics in perception of stationary scenes: normal subjects as compared with patients with visual neglect or hemianopia. Eur J Neurol, 1995, 2, 275-95.
[32] Karnath, HO. Spatial orientation and the representation of space with parietal lobe lesions. Philos Trans R Soc Lond B Biol Sci, 1997, 352, 1411-9.
[33] Karnath, HO; Fetter, M. Ocular space exploration in the dark and its relation to subjective and objective body orientation in neglect patients with parietal lesions. Neuropsychologia, 1995, 33, 371-7.
[34] Karnath, HO; Fetter, M; Dichgans, J. Ocular exploration of space as a function of neck proprioceptive and vestibular input: observations in normal subjects and patients with spatial neglect after parietal lesions. Exp Brain Res, 1996, 109, 333-42.
[35] Karnath, HO; Huber, W. Abnormal eye movement behaviour during text reading in neglect syndrome: a case study. Neuropsychologia, 1992, 30, 593-8.
[36] Karnath, HO; Niemeier, M; Dichgans, J. Space exploration in neglect. Brain, 1998, 121, 2357-67.
[37] Kim, M; Anderson, JM; Heilman, KM. Search patterns using the line bisection test for neglect. Neurology, 1997, 49, 936-40.
[38] Kinsbourne, M. Mechanism of unilateral neglect. In: Jeannerod M, editor. Neurophysiological and Neuropsychological Aspects of Spatial Neglect. Amsterdam: Elsevier, 1987, 69-86.
[39] Kinsbourne, M. Orientational bias model of unilateral neglect: evidence from attentional gradients in hemispace. In: Robertson IH, Marshall JC, editors. Unilateral Neglect: Clinical and Experimental Studies. Hove (UK): Lawrence Erlbaum Associates, 1993, 63-86.
[40] Kowler, E; Anderson, E; Dosher, B; Blaser, E. The role of attention in the programming of saccades. Vision Res, 1995, 35, 1897-916.
[41] Kustov, AA; Robinson, DL. Shared neural control of attentional shifts and eye movements. Nature, 1996, 384, 74-7.
[42] Mannan, SK; Mort, DJ; Hodgson, TL; Driver, J; Kennard, C; Husain, M. Revisiting previously searched locations in visual neglect: role of right parietal and frontal lesions in misjudging old locations as new. J Cogn Neurosci, 2005, 17, 340-54.
[43] Marshall, JC; Halligan, PW. When right goes left: an investigation of line bisection in a case of visual neglect. Cortex, 1989, 25, 503-15.
[44] Meador, KJ; Loring, DW; Bowers, D; Heilman, KM. Remote memory and neglect syndrome. Neurology, 1987, 37, 522-6.
[45] Meienberg, O; Harrer, M; Wehren, C. Oculographic diagnosis of hemineglect in patients with homonymous hemianopia. J Neurol, 1986, 233, 97-101.
[46] Nichelli, P; Rinaldi, M; Cubelli, R. Selective spatial attention and length representation in normal subjects and in patients with unilateral spatial neglect. Brain Cogn, 1989, 9, 57-70.





[47] Niemeier, M; Karnath, HO. Stimulus-driven and voluntary saccades are coded in different coordinate systems. Curr Biol, 2003, 13, 585-9.
[48] Rizzo, M; Hurtig, R. Looking but not seeing: attention, perception, and eye movements in simultanagnosia. Neurology, 1987, 37, 1642-8.
[49] Schenkenberg, T; Bradford, DC; Ajax, ET. Line bisection and unilateral visual neglect in patients with neurologic impairment. Neurology, 1980, 30, 509-17.
[50] Schubert, F; Spatt, J. Double dissociations between neglect tests: possible relation to lesion site. Eur Neurol, 2001, 45, 160-4.
[51] Verfaellie, M; Bowers, D; Heilman, KM. Hemispheric asymmetries in mediating intention, but not selective attention. Neuropsychologia, 1988, 26, 521-31.
[52] Walker, R; Findlay, JM; Young, AW; Welch, J. Disentangling neglect and hemianopia. Neuropsychologia, 1991, 29, 1019-27.
[53] Williams, DE; Reingold, EM; Moscovitch, M; Behrmann, M. Patterns of eye movements during parallel and serial visual search tasks. Can J Exp Psychol, 1997, 51, 151-64.
[54] Zelinsky, GJ; Sheinberg, DL. Eye movements during parallel-serial visual search. J Exp Psychol Hum Percept Perform, 1997, 23, 244-62.



In: Eye Movement: Theory, Interpretation, and Disorders
Editor: Dominic P. Anderson, pp. 91-102
ISBN: 978-1-61728-110-5
© 2011 Nova Science Publishers, Inc.

Chapter 5

Eye-Gaze Input System Based on Image Analysis under Natural Light

Kiyohiko Abe¹, Shoichi Ohi² and Minoru Ohyama²
¹ Kanto Gakuin University, Japan
² Tokyo Denki University, Japan


Abstract

Eye-gaze input systems have been reported as a novel human-machine interface. The operation of such a system requires only the user's eye movement. Using these systems, many communication aids have been developed for people with severe physical disabilities, such as patients with amyotrophic lateral sclerosis (ALS). Eye-gaze input systems commonly employ a non-contact eye-gaze detection method, for which either infrared or natural light can be used as the light source. The detection method that uses infrared light achieves high accuracy but requires a high-cost device. The detection method that uses natural light requires only ordinary devices, such as a home video camera and a personal computer, and is therefore cost-effective. However, systems that operate under natural light often have low accuracy and are consequently capable of classifying only a few indicators for eye-gaze input. We have developed an eye-gaze input system for people with severe physical disabilities, such as ALS patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal gaze positions through simple image analysis and does not require special image-processing units or sensors. It also compensates for measurement errors caused by head movements; in other words, it can detect eye-gaze with a high degree of accuracy. In this chapter, we present our eye-gaze input system and its new method for eye-gaze detection.

1. Introduction

Recently, eye-gaze input systems have been developed as a novel human-machine interface [1,2,3,4,5,6,7,8,9]. Their operation requires only the user's eye movement. Using




such systems, many communication aids have been developed for people with severe physical disabilities, such as patients with amyotrophic lateral sclerosis (ALS). Eye-gaze input systems commonly employ non-contact eye-gaze detection for which infrared or natural light (an incandescent or fluorescent lamp) can be used as a light source. Detection based on infrared light can detect eye-gaze with a high degree of accuracy [1,2,3] but requires an expensive device. Detection based on natural light uses ordinary devices and is therefore cost-effective [4,5]. We have developed an eye-gaze input system for people with severe physical disabilities [8,9]. This system uses a personal computer and a home video camera to detect eye-gaze under natural light. The camera, for example a DV camera, can easily be connected to the PC by the IEEE1394 interface. The images taken by the camera can be analyzed in real time using the DirectShow library from Microsoft. The system detects both vertical and horizontal eye-gaze positions through a simple image analysis and does not require special image-processing units or sensors. The image analysis is based on the limbus tracking method, where eye-gaze is detected using the difference in reflectance between the iris and the sclera. This method is robust against noise, for example, light fluctuation, in an image of the eye. It also compensates for measurement errors caused by head movements; in other words, it can detect eye-gaze with a high degree of accuracy. In addition, this system is cost-effective and versatile; it can be easily customized. It can also be used under natural light and is therefore suitable for personal use.


2. Current Situation for Eye-Gaze Input

In a general eye-gaze input system, the icons displayed on the PC monitor are selected by the user gazing at them, as shown in Figure 1. These icons are called "indicators" and are assigned to characters or functions of the application program. The eye-gaze input must detect the user's gaze in order to classify the selected indicator. Many eye-gaze detection methods have been studied. Several systems use the EOG method for eye-gaze detection [7]. This method detects eye-gaze from the difference in electrical potential between the cornea and the retina. It is a contact method that uses electrodes pasted around the eye; it is cost-effective, but some users find long-term use of the electrodes uncomfortable. Therefore, many systems detect eye-gaze using non-contact methods [1,2,3,4,5,6,8,9]. Specifically, the user's gaze is detected by analyzing eye images (images of the eye and its surrounding skin) captured by a video camera. To classify the indicators, most conventional systems use special devices such as infrared light [1,2,3] or multiple cameras [6]. To be suitable for personal use, a system should be inexpensive and user-friendly; therefore, a simple system using a single camera under natural light is desirable [4,5]. However, natural-light systems often have low accuracy and are capable of classifying only a few indicators [4], which makes it difficult for users to perform tasks with many functions, such as text input. To solve these problems, a simple eye-gaze input system that can classify many indicators is needed.




3. Eye-Gaze Detection by Image Analysis

Eye-gaze is defined as a unit vector in a three-dimensional coordinate space, with its origin at the center of the eyeball. Generally, the user's gaze is detected on a two-dimensional plane and has horizontal and vertical components. Iris tracking is the most popular method for eye-gaze detection under natural light using image analysis [4,5,6]. However, it is difficult to distinguish the iris from the sclera (the white part of the eye) by image analysis, because the edge between the iris and the sclera changes smoothly. In addition, if a large part of the iris is hidden by the upper and lower eyelids, measurement errors increase, because the obscuring of the iris by the eyelids causes estimation errors in the iris extraction. To resolve these issues, we propose a new image-analysis method for detecting eye-gaze in both the horizontal and the vertical direction. Horizontal gaze is detected by a limbus tracking method extended using image analysis, and vertical gaze is detected from the change in the light-intensity distributions associated with eye movement in the eye image. We now present an overview of our eye-gaze detection method.

3.1. Horizontal Gaze Detection


An overview of the proposed horizontal gaze detection method is shown in Figure 2. The method detects horizontal gaze using image analysis based on the limbus tracking method [8,9]. The difference in reflectance between the iris and the sclera is used: the gaze is estimated by the difference in the integral value of the light intensity in area A and area B, as shown in Figure 2.

Figure 1. Overview of eye-gaze input.

Figure 2. Detection of horizontal gaze (areas A and B on either side of the center of the eye).




We define this differential value to be the eye-gaze value; it gives a measure of horizontal gaze. The relation between the eye-gaze value and the angle of sight is nearly proportional, so the system can be calibrated using this relation. This method is simple and robust against noise in the eye image, such as fluctuations in the light; therefore, the system can detect horizontal gaze rapidly, with a high sampling rate.

3.2. Vertical Gaze Detection

Vertical gaze detection requires a different process, because most of the sclera is hidden by the changing shape of the eyelid as the eye moves. If the measurement area were defined appropriately, vertical gaze could also be detected using the limbus tracking method, but we use another approach: we pay attention to the vertical movement of the iris. The light-intensity distribution in the eye image changes with iris movement, and vertical gaze can be detected from this change [9]. The system stores vertically aligned images of the eye gazing at the indicators. The light-intensity distributions (the results of a one-dimensional projection) are calculated from these eye images as reference data, and the user's vertical gaze is then detected by pattern matching against the reference data. An overview of the method is shown in Figure 3, which illustrates the detection of three gaze directions: top, center, and bottom. The wave patterns at the right of the eye illustration show the light-intensity distribution. This method can classify five to seven vertical gaze directions by increasing the reference data.
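The two detection steps above can be sketched in code as follows. This is a minimal illustration of the idea, not the authors' implementation: the array layout, the area boundaries, and the reference profiles are assumptions made for the example.

```python
import numpy as np

def horizontal_gaze_value(eye_image, center_col, half_width):
    """Eye-gaze value: difference of the integrated light intensity in
    area A (left of the eye's center) and area B (right of it).  As the
    dark iris moves toward one side, that side's integral drops, so the
    difference tracks horizontal gaze nearly linearly."""
    area_a = eye_image[:, center_col - half_width:center_col]
    area_b = eye_image[:, center_col:center_col + half_width]
    return float(area_a.sum()) - float(area_b.sum())

def vertical_gaze_direction(eye_image, references):
    """Classify vertical gaze by matching the vertical light-intensity
    distribution (a one-dimensional projection of the eye image) against
    stored reference profiles, choosing the smallest residual."""
    profile = eye_image.sum(axis=1)
    residuals = {name: float(np.sum((profile - ref) ** 2))
                 for name, ref in references.items()}
    return min(residuals, key=residuals.get)
```

On a synthetic "bright sclera, dark iris" image, shifting the iris patch rightward increases the horizontal eye-gaze value, and shifting it vertically changes which reference profile gives the minimum residual, which is all the method needs.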


3.3. Compensation for Head Movement

In our eye-gaze detection method, the video camera records an image of the user's eye from a distance (approximately 70 cm between user and camera), and this image is then enlarged. The user's head movement therefore induces a large error in the measurements, and the system must compensate for it. We compensate for head movement by tracking the location of the inner corner of the eye. This tracking method is based on image analysis and is executed in real time. The open-eye area (eye shape) can be estimated from the eye image; if the inner-corner point of the eye shape is extracted by binarization, the location of the inner corner of the eye is determined.

Figure 3. Detection of vertical gaze (top, center, and bottom).




In our method, the open-eye area is extracted using skin-color information. The skin-color threshold is determined from a graph of the color-difference signal ratio of each pixel (Cr/Cb), calculated from the YCbCr image transformed from the RGB image. This graph has two peaks, indicating the skin area and the open-eye area [10]. The minimum Cr/Cb value between the two peaks is designated as the threshold for open-eye area extraction. A sample graph of the color-difference signal ratio is shown in Figure 4.

Figure 4. Graph of color-difference signal ratio (number of pixels versus Cr/Cb; the two peaks correspond to the eye and skin areas, and the minimum between them is taken as the threshold).

Our method can extract the open-eye area almost completely. However, the results sometimes leave deficits around the corner of the eye, because for some subjects the Cr/Cb value around the corner of the eye is similar to that of the skin. To solve this problem, we have developed a method for open-eye extraction without deficits that combines two extraction results. One is based on an image binarized using color information. The other is based on an image binarized using light-intensity information, which includes in the extraction result the area around the corner of the eye. We then extract the location of the inner corner of the eye using the open-eye area image described above and an eye image in which the edge of the inner corner of the eye is enhanced. The enhanced image is produced by a special differential filter. Using these two images, the method extracts the location of the inner corner of the eye almost completely; hence, user head movement is compensated for in the eye-gaze detection. The operator of the differential filter is shown in Figure 5. Figure 6 shows an extraction result for the open-eye area, an eye image enhanced at the inner corner of the eye, and the extracted inner corner of the eye.

Figure 5. Operator for inner-corner-of-eye extraction (a 5×5 differential filter with coefficients of 0, ±1, and ±3).
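The two-peak threshold selection can be sketched as follows. The histogram binning and the peak-finding heuristic are our assumptions, not the authors' exact procedure; the point is only that the threshold is the Cr/Cb value at the valley between the eye peak and the skin peak.

```python
import numpy as np

def open_eye_threshold(cr, cb, bins=64):
    """Pick the Cr/Cb value at the histogram minimum between the two
    peaks (eye area vs. skin area) as the binarization threshold."""
    ratio = cr.astype(float) / np.maximum(cb.astype(float), 1e-6)
    hist, edges = np.histogram(ratio, bins=bins)
    peak1 = int(np.argmax(hist))           # highest peak
    masked = hist.copy()                   # suppress bins adjacent to it
    masked[max(0, peak1 - 2):min(bins, peak1 + 3)] = 0
    peak2 = int(np.argmax(masked))         # second peak
    a, b = sorted((peak1, peak2))
    valley = a + int(np.argmin(hist[a:b + 1]))
    return 0.5 * (edges[valley] + edges[valley + 1])
```

With a synthetic bimodal Cr/Cb distribution (eye pixels near 0.90, skin pixels near 1.15) the returned threshold falls in the gap between the two modes, which is where the binarization boundary belongs.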




(a) Image of open eye area

(b) Image of enhanced inner-corner of eye

(c) Detected inner-corner of eye

Figure 6. Images of inner-corner-of-eye extraction.



Figure 7. Appearance of proposed system.

4. Proposed Eye-Gaze Input System and Its Evaluation

We developed a new eye-gaze input system using the methods discussed in Section 3. This system has 27 indicators. It comprises a personal computer, a home video camera, and an IEEE1394 interface for image capture from the camera. The computer runs image-analysis software on Windows XP for eye-gaze detection; no dedicated image-processing device is required. The system is illustrated in Figure 7. The indicators are displayed on the PC monitor, as shown in Figures 8(a) and (b). The indicators in Figure 8(a) are used for eye-gaze measurement; those in Figure 8(b) are used for calibration. The size of one indicator is 1.5° (angle of sight). The horizontal distances between indicators are 3° (Indicators 1) or 12° (Indicators 2), and the vertical distance between indicators is 10°, when the distance between the user and the PC monitor is approximately 70 cm. Before using the system, users must calibrate it by gazing at each indicator in Indicators 2. After calibration, Indicators 1 are displayed on the PC monitor. If the calibration is successful, the user's eye-gaze can be measured and the success rate of eye-gaze selection estimated.


Figure 8. Indicators for the experimental system: (a) Indicators 1; (b) Indicators 2.

The evaluation experiments were conducted with 10 subjects. The subjects gazed at each indicator of Indicators 1, shown in Figure 8(a). The sample results of the horizontal and vertical gaze measurement are shown in Figures 9 and 10, respectively.


Figure 9. Characteristics of horizontal gaze (plots for the top, center, and bottom rows).

Figure 10. Characteristics of vertical gaze.



In Figure 9, the X axis indicates the indicator number and the Y axis indicates the eye-gaze value. The raw experimental data include noise such as blinking, so the success rates cannot be estimated from them accurately; we therefore filter the data before plotting Figure 9. First, the median of 30 gaze measurements for a specific indicator is found. Second, the average of the 7 measurements around the median is calculated. The resulting average is defined to be the horizontal representative value for gazing at that indicator. From Figure 9, it is evident that the horizontal data show linearity, and thus these plots can be approximated by a primary expression (a first-order polynomial). Using this relation, our eye-gaze input system can be calibrated easily. The horizontal measurement data from the three directions (top, center, and bottom) show a similar tendency; therefore, if the system is calibrated by this relation, the horizontal gaze of users can be estimated.

In Figure 10, the X axis indicates the sampling point (interval: 120 ms) and the Y axis indicates the residuals between the measurement data and the reference data. There are 810 samples; after noise removal, 790 samples are plotted. From Figure 10, it is evident that the direction (top, center, or bottom) with the minimum residual indicates the gaze direction of the user. For example, the user gazed at the "top" indicators in approximately the first 270 samples, and for these data the residuals for the "top" indicators have the minimum values.

Our proposed system classifies the vertical gaze direction first. It then detects the horizontal gaze using the primary expression estimated by calibration. This process estimates the indicator gazed at by the user, classifying it as one of the 27 indicators (3 rows and 9 columns) with high accuracy. The success rate averaged over the 10 subjects is approximately 86%.
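The data filtering and calibration described above can be sketched as follows. This is an illustrative reading, not the authors' code; in particular, we interpret "the average of 7 measurements around the median" as the mean of the seven samples closest to the median, and the calibration angles and values below are invented for the example.

```python
import numpy as np

def representative_value(samples, k=7):
    """Noise-robust value for one indicator: take the median of the
    samples, then average the k samples closest to it."""
    samples = np.asarray(samples, dtype=float)
    med = np.median(samples)
    nearest = samples[np.argsort(np.abs(samples - med))[:k]]
    return float(nearest.mean())

def fit_calibration(indicator_angles, representative_values):
    """Fit the 'primary expression' (first-order polynomial) relating
    horizontal angle of sight to eye-gaze value."""
    a, b = np.polyfit(indicator_angles, representative_values, 1)
    return a, b

def angle_from_value(value, a, b):
    """Invert the fitted relation to recover the gaze angle."""
    return (value - b) / a
```

The median-then-mean step discards blink outliers before the linear fit, which is why the plotted representative values in Figure 9 look clean even though the raw stream is noisy.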


5. Applications

5.1. Platform for Eye-Gaze Input

We developed a new platform for eye-gaze input based on the eye-gaze detection method described above. This platform can execute many application programs, such as communication assistance, support for collecting information, or environmental control, and the user can choose the application. The hardware configuration of the platform is the same as that of the experimental system described in Section 4: a personal computer, a home video camera, and an IEEE1394 interface for image capture from the camera. The computer runs image-analysis software on Windows XP for eye-gaze detection. The processing time for detection is approximately 200 ms (5 frames/s) on a PC (Pentium 4, clock frequency 2.2 GHz). This sampling rate is suitable for practical use. It is slightly lower than that of the experimental system described in Section 4, because the platform has to synchronize the application programs with the gaze detection. The indicators are displayed on the PC monitor, and users select each indicator by gaze. We have confirmed that the system requires an accuracy of approximately 90% for general use; to achieve this level of accuracy, our platform uses about 10 indicators. The center of the display shows the workspace, used to display a window of the application program.




Eye-gaze input systems for people with severe physical disabilities have to be customized depending on the users' circumstances. We therefore developed the software as a client-server system. The server handles the eye-gaze detection; the client handles the application programs and the interface for displaying the indicators. With this design, developers do not need to understand the details of the eye-gaze input system completely in order to add their own applications, and users can switch among different applications. This design increases user convenience. An overview of the platform is shown in Figure 11.

Figure 11. Overview of proposed platform.


5.2. Text Input by Eye-Gaze

We designed indicators for text input by eye-gaze, taking into account the gaze-selection success rate of our proposed system [9]. There are 12 indicators (2 rows, 6 columns). However, around 60 indicators are required to input Japanese text (our mother tongue), and twelve indicators are also insufficient for English text input, because English contains upper- and lower-case letters and symbols. To solve this problem, we designed a new interface in which users can select any character (English or Japanese) by first choosing an indicator group. An overview of the interface is shown in Figure 12. This interface requires two selections: one for character group selection, for example "group A to E," and another for character input. Letters and symbols ("etc." in Figure 12) require two selections, whereas commonly used characters (for example, "space") require just one. To input the character "C," the user first selects the indicator for "group A to E" and then the indicator for "C." Japanese can be input in the same way.

Figure 12. Interface for text input by eye-gaze.
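The two-level selection scheme can be sketched as follows. The group layout, group names, and the set of single-selection keys below are illustrative assumptions, not the actual indicator set of the system.

```python
# Hypothetical 12-indicator layout: letter groups plus common keys
# that need only a single selection (names are illustrative only).
GROUPS = {
    "A-E": "ABCDE",
    "F-J": "FGHIJ",
    "K-O": "KLMNO",
    "P-T": "PQRST",
    "U-Y": "UVWXY",
    "Z/etc.": "Z.,!?",
}
SINGLE = {"space": " ", "enter": "\n"}

def input_text(selections):
    """Turn a stream of indicator selections into text: a group selection
    followed by a character selection yields one character, while common
    keys such as 'space' need only one selection."""
    text, group = [], None
    for sel in selections:
        if group is not None:
            text.append(sel)      # second step: the character itself
            group = None
        elif sel in GROUPS:
            group = sel           # first step: choose the group
        elif sel in SINGLE:
            text.append(SINGLE[sel])
    return "".join(text)
```

For example, entering "C" costs the two selections "A-E" then "C", whereas a space costs one, mirroring the trade-off the interface makes between indicator count and keystrokes.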



The input decision requires detecting not only the location of the user's gaze but also the user's selection command. Selection can be performed either with a blink or with eye fixations (measuring how long the eye fixates on a target such as an indicator). Because blinking is an involuntary physiological phenomenon and can therefore trigger unintended selections, we use eye fixations. If the user gazes at an indicator for more than a defined period of time, that indicator is selected. We detect eye fixations using information on the user's eye-movement history [9]. The eye-movement history of a user is the gaze-state history. The gaze state has two settings: initial and continuous. If the user gazes at an indicator for more than a defined period of time, the gaze state changes to the initial state. When the initial state is complete, the state changes to the continuous state. An overview of this method is shown in Figure 13. This method measures the gaze-state history over the two gaze states (initial and continuous). The indicator gazed at for the majority of the time in each state is extracted as a candidate input. If these two candidates are equal, the input is decided. In Figure 13, the user is gazing at the indicator for "3." The gaze data contain noise, such as blinks and involuntary eye movements; nevertheless, the user's input decision is extracted correctly. Users can input text by eye-gaze at a rate of 16.2 characters/min [9]. This rate is comparable to that of a conventional infrared eye-gaze input system, without requiring word prediction or special devices [2,3,6].

Figure 13. Input decision based on eye-movement history.
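The majority-vote decision over the two gaze states can be sketched as below. This is an illustrative reconstruction, not the published implementation: the window lengths, the per-frame indicator IDs, and the use of `None` for blinks or off-indicator noise are our own assumptions.

```python
from collections import Counter

def decide_input(samples, initial_len, continuous_len):
    """Input decision from a gaze-state history.

    `samples` is the per-frame sequence of indicator IDs the gaze fell
    on (None for blinks or off-indicator noise).  The first
    `initial_len` frames form the initial state, the following
    `continuous_len` frames the continuous state.  The majority
    indicator of each state becomes a candidate; only when both
    candidates agree is the input accepted.
    """
    if len(samples) < initial_len + continuous_len:
        return None  # not enough history yet
    initial = samples[:initial_len]
    continuous = samples[initial_len:initial_len + continuous_len]

    def majority(window):
        counts = Counter(s for s in window if s is not None)
        return counts.most_common(1)[0][0] if counts else None

    a, b = majority(initial), majority(continuous)
    return a if a is not None and a == b else None

# The gaze dwells on indicator 3; a blink (None) and a stray glance
# at indicator 2 do not change the outcome:
history = [3, 3, None, 3, 3, 2, 3, 3, 3, None, 3, 3]
print(decide_input(history, initial_len=6, continuous_len=6))  # 3
```

Requiring the two states to agree is what filters out the noise visible in Figure 13: a brief glance elsewhere may win one window but rarely both.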

5.3. Personal Computer Operation Support by Eye-Gaze

If users can operate general Windows functions by eye-gaze, they can operate commonly used application programs, such as e-mail clients and web browsers. Users can also input text to these applications using the interface for text input described above. Such applications are normally operated by keyboard or mouse, especially the latter. When an eye-gaze input system is used, the functions of the application programs must instead be assigned to indicators. We have therefore extended our system to general Windows functions. Many guidelines have been proposed for the development of application programs for the disabled. To satisfy these guidelines, we assign the following Windows functions to indicators: cursor control, execution of application programs, the use of short-cut keys (copy, cut, and paste), and the selection of items from a menu bar. Hence, commercial applications can be used with our system. The Windows functions are categorized, and the user can switch between indicator groups, as shown in Figure 14. The "Main operation screen" has indicators for cursor operation, object selection, decision input (enter), etc. The "Extended operation screen" has indicators for window switching and closing, activation of the desktop, mouse operation, etc. Using these indicators, users can operate all Windows functions.


Figure 14. Interface for Windows operation.

To increase user-friendliness, the arrangement of the indicators should be customized for frequently used applications. We designed support systems for web browsing and television control. The web browsing system analyzes the locations of selectable objects on the web page, such as hyperlinks, radio buttons, and edit boxes. The system stores the locations of these objects so that the mouse cursor can skip directly to a candidate object. This enables faster web browsing. The television function is incorporated into the PC running the eye-gaze input system. When users view television with this system, the indicators for television control are hidden so that users can view programs without hindrance from displayed indicators. Our eye-gaze input system is versatile and easy to customize because it has a simple hardware configuration.
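The cursor-skipping idea can be illustrated as a nearest-object jump. The sketch below is hypothetical: the object names and rectangle geometry are invented for the example, and a real system would read them from the browser page rather than a hand-written list.

```python
import math

def snap_cursor(gaze, objects):
    """Jump the cursor to the selectable object (link, button, edit
    box) whose center is closest to the current gaze position."""
    def center(obj):
        x, y, w, h = obj["rect"]
        return (x + w / 2, y + h / 2)
    return min(objects, key=lambda o: math.dist(gaze, center(o)))

page_objects = [
    {"name": "home link",  "rect": (10, 10, 80, 20)},
    {"name": "search box", "rect": (200, 10, 300, 30)},
    {"name": "ok button",  "rect": (520, 400, 60, 25)},
]
print(snap_cursor((340, 30), page_objects)["name"])  # search box
```

Because the cursor only ever lands on selectable objects, the gaze accuracy required from the user is bounded by the spacing between objects rather than by their (often small) size.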

6. Conclusion

We present a new eye-gaze input system that can be used under natural light. This system detects both vertical and horizontal gaze positions through simple image analysis and does not require special image processing units or sensors. It has sufficient accuracy for practical application and uses a comparatively large number of indicators (27). As an application of our system, we designed the arrangement of the indicators with a focus on text input. We developed English and Japanese text input and a Windows operation support system. Disabled users can thus operate commercial applications by eye-gaze. We also developed web browser and television control support systems. These applications are helpful for increasing the quality of life of disabled users. We believe that our system is suitable for use at home; however, clinical experiments have not yet been conducted to validate its actual usability. In the future, we will work with engineers and researchers in clinical practice to develop a new eye-gaze input system that is more comfortable to use.


References


[1] Hutchinson, TE; White, KP, Jr; Martin, WN; Reichert, KC; Frey, LA. Human-Computer Interaction Using Eye-Gaze Input. IEEE Trans. Systems, Man, and Cybernetics, 1989, Vol. 19, No. 7, 1527-1534.
[2] Ward, DJ; MacKay, DJC. Fast Hands-free Writing by Gaze Direction. Nature, 2002, Vol. 418, 838.
[3] Hansen, JP; Torning, K; Johansen, AS; Itoh, K; Aoki, H. Gaze Typing Compared with Input by Head and Hand. Proc. Eye Tracking Research and Applications Symposium, 2004, 131-138.
[4] Corno, F; Farinetti, L; Signorile, I. A Cost-effective Solution for Eye-gaze Assistive Technology. Proc. IEEE International Conf. on Multimedia and Expo, 2002, Vol. 2, 433-436.
[5] Kim, KN; Ramakrishna, RS. Vision-based Eye-gaze Tracking for Human Computer Interface. Proc. IEEE International Conf. on Systems, Man and Cybernetics, 1999, Vol. 2, 324-329.
[6] Wang, JG; Sung, E. Study on Eye Gaze Estimation. IEEE Trans. on Systems, Man and Cybernetics, 2002, Vol. 32, No. 3, 332-350.
[7] Gips, J; DiMattia, P; Curran, FX; Olivieri, P. Using EagleEyes - an Electrodes Based Device for Controlling the Computer with Your Eyes - to Help People with Special Needs. Proc. 5th International Conf. on Computers Helping People with Special Needs, 1996, 77-83.
[8] Abe, K; Ohi, S; Ohyama, M. An Eye-gaze Input System Based on the Limbus Tracking Method by Image Analysis for Seriously Physically Handicapped People. Adjunct Proc. 7th ERCIM Workshop "User Interfaces for All," 2002, 185-186.
[9] Abe, K; Ohi, S; Ohyama, M. An Eye-Gaze Input System Using Information on Eye Movement History. Proc. 12th International Conference on Human-Computer Interaction, 2007, Vol. 6, 721-729.
[10] Abe, K; Ohi, S; Ohyama, M. Automatic Method for Measuring Eye Blinks Using Split-Interlaced Images. Proc. 13th International Conference on Human-Computer Interaction, 2009, Vol. 1, 3-11.


In: Eye Movement: Theory, Interpretation, and Disorders
ISBN: 978-1-61728-110-5
Editor: Dominic P. Anderson, pp. 103-117
© 2011 Nova Science Publishers, Inc.

Chapter 6

What We See and Where We Look: Bottom-Up and Top-Down Control of Eye Gaze

Laura Perez Zapata, Maria Sole Puig and Hans Supèr*
University of Barcelona (UB), Barcelona, Spain


Abstract

We perceive the world by continually making saccadic eye movements. The primary role of saccadic eye movements is to bring visual signals onto the central part of the retina (fovea), where visual processing is superior and provides the best visual capacities. Visual signals in the foveal region are therefore more likely to be perceived consciously. Thus, on the one hand, visual signals at the fovea guide saccades, and on the other hand they are used to create our conscious perception of the visual environment. Are the visual signals that guide the saccade the same as the ones that produce our perception? Current research shows that saccade and perceptual signals are closely related and favors the idea that the same visual signal guides saccades and gives rise to perception. Thus we see where we look. However, using visual stimuli that highlight the discrepancy between visual (bottom-up) and perceptual (top-down) information, we can isolate to some extent the signals that control saccade behavior from the ones that give (or do not give) rise to perceptual awareness. The findings of these studies show a distinction between signals for saccade guidance and for perception. In these cases, there is a discrepancy between where we look and what we see. In this chapter we briefly give an overview of the bottom-up and top-down signals that control saccade guidance and perception and discuss whether they are the same or not.

* E-mail address: [email protected], Http: www.icrea.es / www.visca.cat. Tel: +34 933 125 158, Fax: +34 934 021 363. Institute for Brain, Cognition and Behavior (IR3C) & Dept Basic Psychology, Faculty of Psychology, ICREA, University of Barcelona (UB), Pg. Vall d´Hebron 171, 08035 Barcelona, Spain.


Saccadic Eye Movements and Perception

Vision enables us to interact with our surrounding environment by identifying objects and guiding body movements. To perceive the visual world we continuously scan it by making fast, ballistic-like eye movements, called saccadic eye movements or saccades. One of the main functions of these saccadic movements is to bring the relevant visual information onto the fovea. The fovea is the central part of the retina and has the highest density of photoreceptor cells; it therefore gives rise to the best visual capacities. After each saccade we need to keep our gaze more or less stable for a brief period, on the order of a few hundred milliseconds. During this fixation, the visual signals entering the fovea are processed up to a perceptual level by the visual brain and will eventually lead to a conscious experience. Besides processing the foveal information, the visual system needs to extract a visual target in the more peripheral regions for directing the next saccadic eye movement. This information will subsequently, after the saccade, fall on the foveal part of the retina. The question then arises whether the visual signals that are used to guide the saccade are the same as the ones that will be perceived consciously once we fixate them. To answer this question, many studies have recorded eye movements while observers were instructed to gaze at particular regions or objects within an image. The analysis of the scan path of the eyes gives an impression of which visual signals are important and how they are used for guiding saccades. The (verbal) report of the observer makes clear whether the fixated visual signals have reached conscious perception or not. Here we will discuss the two types of signals that are important in directing eye gaze, namely bottom-up and top-down information.
Bottom-up signals carry elementary visual information, like color, contrast and motion but also more complex objects and are conveyed by feedforward pathways from the retina toward the motor centers. Top-down signals are perceptual or cognitive signals that derive from higher (visual) areas of the cerebral cortex and that are fed back to lower areas and motor regions.
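The scan-path analyses described above presuppose that raw gaze samples have first been grouped into fixations and saccades. One common way to do this is dispersion thresholding; the sketch below is a minimal, illustrative version of that idea, with thresholds and units (pixels, sample counts) that are our own assumptions rather than values from any study cited in this chapter.

```python
def detect_fixations(samples, max_dispersion, min_length):
    """Classify gaze samples into fixations with a dispersion rule:
    a window of at least `min_length` consecutive samples whose
    horizontal plus vertical spread stays within `max_dispersion`
    counts as one fixation (reported by its centroid)."""
    def spread(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i + min_length <= len(samples):
        j = i + min_length
        if spread(samples[i:j]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while j < len(samples) and spread(samples[i:j + 1]) <= max_dispersion:
                j += 1
            xs, ys = zip(*samples[i:j])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1
    return fixations

# Two stable gaze clusters separated by one large saccade:
gaze = [(100, 100), (101, 99), (100, 101), (102, 100),
        (300, 200), (301, 201), (299, 200), (300, 199)]
print(len(detect_fixations(gaze, max_dispersion=5, min_length=4)))  # 2
```

The fixation sequence produced this way is the scan path whose shape, under bottom-up or top-down control, the following sections discuss.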

Bottom-Up Control on Saccadic Behavior

Numerous studies have shown that saccadic eye movements are guided by the physical attributes of visual stimuli. The findings of these investigations led to the general conclusion that the salient properties of stimuli attract the eyes irrespective of the identification of these properties by the subject (Lindauer & Lindauer, 1970; Brigner & Deni, 1990; Masson et al., 1997). This bottom-up view has largely been studied with attentional paradigms, which show that attention and saccadic eye movements are directed to the stimulus regardless of the task of the observer (Theeuwes et al., 1999; Theeuwes, 1992, 1994; Itti & Koch, 2000; Nothdurft, 2002; Parkhurst & Niebur, 2003). Thus, stimulus contrast, defined by luminance, motion, size, orientation, etc., attracts our attention and defines our saccadic targets and subsequently the perception of the stimulus. This large body of research demonstrates that the sensory, and not the perceived, signals control eye movements. Such a bottom-up account can also explain the 'gap effect' (Saslow, 1967). If a fixation point is extinguished briefly before the target appears in the periphery, leaving a 'gap' period during which nothing is visible, then the onset latencies of saccades to the target are much shorter than when, in the otherwise equivalent situation, the fixation point remains visible until some time after the appearance of the target. Explanations of this 'gap effect' come mainly from bottom-up paradigms, in which stimuli can attract attention, and so control gaze, regardless of the participants' task.


Top-Down Control on Saccadic Behavior

Besides bottom-up processes, top-down effects have a strong impact on the control of oculomotor behavior. Top-down signals derive from higher cortical areas, e.g. the frontal eye fields. About half a century ago, Yarbus (1967) found evidence for cognitive control of eye movements, i.e. for how goals or cognitive factors modulate observers' scan paths when exploring an image. For instance, he demonstrated that the scan path of observers exploring an image greatly depends on memory or task demands. Since then a large number of researchers have investigated the top-down processes involved in the programming of eye movements. It has been shown that saccades are more likely to be directed to a non-target sharing a feature with the target than to other non-targets sharing no features with the target (Findlay, 1997). Evidence for top-down influences in saccade target selection also comes from studies showing that eye movements are task dependent and that fixations are made on behaviorally relevant objects (Hayhoe et al., 2003; Hayhoe and Ballard, 2005). For example, when driving a car the eyes are directed specifically towards relevant stimuli, like traffic signs. Furthermore, studies showing that spatial memory aids target selection support a role for top-down processes in saccade guidance (Aivar et al., 2005). Moreover, research in natural environments shows that a fixation at a point that is not visually salient can better serve the spatial-temporal demands of the task (Hayhoe and Ballard, 2005). This finding indicates that top-down processes guide eye movement behavior. Furthermore, learning the relevant elements of the task modifies what and when to fixate. Land and McLeod (2000) investigated eye movement patterns in novice and skilled cricket players. They found that the gaze of more skilled batsmen arrived at the bounce point about 100 ms earlier than that of novice players. Their saccades were always preceded by a fixation on the ball as it left the bowler's hand. They concluded that eye movement patterns are shaped by learning internal models of the dynamic properties of the world. A role of attentional top-down control in gaze guidance is also evident from patients suffering brain lesions. For instance, Zangemeister and co-workers (1995) recorded gaze movements in patients with hemianopic visual field defects. They observed short-term adaptation, i.e. training effects of eye movement strategies that improved the initially deficient performance on the side of the blind hemifield, depending on the relative difficulty of the specific task. These results add new evidence for top-down control of the human scan path even in hemianopic patients. Another top-down factor that can influence the pattern of ocular movements during search for objects is culture. Many studies have shown cultural differences, e.g. between Westerners and East Asians, in perceptual tasks and in the focus of attention (the focal object or the context). It appears East Asians are more adept at perceiving details than Western people. Chua, Boland and Nisbett (2005) investigated whether these differences could be explained by different patterns of ocular movements between the two groups, using photographs of natural environments with an object surrounded by a complex background. They found that Americans fixate sooner and longer on the focal object, whereas the Chinese make more saccades to the background than the Westerners do. The difference in scanning the visual environment between these two cultures not only results in perceptual differences but could also lead to distinct judgments and memories. Thus, top-down effects are important for oculomotor behavior, where percepts rather than the physical elements of objects are the saccade targets.


Integration of Bottom-Up and Top-Down Information

Recently it has been proposed that these two kinds of processing (bottom-up and top-down) are not completely separate systems using distinct neural circuits, and integrative theories have been put forward. Results from attentional paradigms indicate that the two kinds of processing are time dependent. Van Zoest and Donk (2004) conducted four experiments to study the role of stimulus-driven control in saccadic eye movements. Participants were required to make a speeded saccade toward a predefined target presented concurrently with multiple non-targets and an irrelevant stimulus, i.e. a distractor. They varied the saliency of the distractor relative to the target (the two could be equally salient or not). Their results suggest that attention and the oculomotor response are driven by stimulus properties (bottom-up) for rapid saccades, whereas slower saccades are directed by goals (top-down). That is, saccade latency predicts whether an eye movement is programmed in a stimulus-driven or a goal-driven way (Van Zoest & Donk, 2004, 2006). These authors proposed the hypothesis that bottom-up and top-down processing are independent and act in different time windows. This notion has been adopted by different theories of visual selection (Kastner & Ungerleider, 2000; Cave & Wolfe, 1990). The idea of independent programming has been questioned, however, and it has been suggested that stimulus-driven and goal-driven signals are integrated at a common site by dynamic competition (Kopecz, 1995; Trappenberg et al., 2001; Godijn & Theeuwes, 2002). It is proposed that in the oculomotor system the superior colliculus, a sub-cortical structure that integrates visual and motor signals and plays a crucial role in the programming of saccades, integrates stimulus-driven and goal-driven information. This is not completely different from the temporal hypothesis, but it implies the assumption that both processes take place at different stages in time (Mulckhuyse et al., 2008). There is also evidence for an initial top-down guidance of eye movements followed by a second processing stage that involves bottom-up signals. Jacob and Hochstein (2009), working with the Identity Search Task (which consists of finding two exactly identical cards displayed on a computer screen, with the trick that two pairs are identical and observers are uninformed about this fact), computed the slope of fixations (which indicates the probability that the next fixation will land on a card of a pair) and found a bifurcation point. This is interpreted to mean that, in a first stage, both potential targets attract the same number of fixations. In a second (preconscious) stage, a perceptual representation leads observers to fixate one particular target for a larger proportion of the time. In a third (conscious) stage this target is fixated and a behavioral response is made. This finding agrees with the Reverse Hierarchy Theory of the visual system (Hochstein & Ahissar, 2002), which postulates that early high-level perception makes a first approximation of the visual scene or stimulus, but the visual system returns to low levels to confirm or correct this guess.


Reading is another field where evidence supporting both theories has been found. Lexical reading (word identification) has been an important source of evidence for bottom-up processing. On the one hand, it has been shown that the duration of a fixation on a word is affected by its basic lexical properties, such as frequency and length. Higher-frequency words receive shorter fixations than lower-frequency ones, and the longer the word, the more likely a reader is to re-fixate it (increasing gaze durations) (Rayner et al., 1996). On the other hand, results supporting top-down strategies have been observed. It has been found that eye movement behavior is affected not only by the characteristics of the word being fixated, but also by the relationship between the fixated word and the meaning of the preceding text (Morris, 1994). A study by Rayner and colleagues (Rayner et al., 1996) has shown that when readers are fixating a word, they make a longer saccade from that word when it is the last word in the clause than when it is not. This suggests that the process of 'wrapping up' a linguistic constituent to determine its meaning affects the subsequent saccade into a new linguistic constituent. This result is striking, as it indicates that higher-level as well as lower-level linguistic factors impact guidance during reading.


Figure-Ground Segregation and Oculomotor Behavior

We perceive the visual world as a set of meaningful and coherent objects with their surrounding backgrounds. Objects in a cluttered scene, though, are not isolated; many stimuli overlap or are occluded by others. The figure-ground phenomenon refers to humans' ability to separate elements based upon contrast and is a foundation of perception. Although the neural mechanisms underlying figure-ground segregation are not clear, the accepted view is that low-level areas extract simple features of an object and that the grouping of these features into objects occurs in subsequent higher-level areas of the visual system. Recent findings, however, demonstrate the existence of high-level operations like figure-ground perception in lower-level visual areas (Lamme et al., 2000). For example, response modulations related to figure-ground segregation are observed in the primary visual cortex when an animal perceives a figure and are absent when the animal fails to perceive a figure (Supèr, Spekreijse, & Lamme, 2001). Evidence from lesions in humans and monkeys suggests that perceptual segregation of occluded or overlapping objects involves extra-striate visual cortex. Larsson et al. (2002) conclude that discrimination of overlapping shapes in monkeys involves a region of extra-striate visual cortex located in the left lateral occipital cortex and that this region may correspond to human V4v. There also exist differences between hemispheres in the processing of local and global forms. Some evidence (Fink et al., 1997) showed a relative hemispheric specialization for global and local processing (using Navon's letters), in accordance with previous neuropsychological studies. This means that figure-ground segregation is mediated by bottom-up (feedforward) and top-down (feedback) neural connections, i.e. recurrent processing between low- and high-level visual areas.
Surprisingly, the strength (and not the latency) of the figure-ground signal shows a clear relation to oculomotor behavior. Findings from a typical detection task show that stronger figure-ground modulation leads to shorter saccadic reaction times, whereas no difference in the onset of modulation between early and late saccades is found (Supèr et al., 2003, 2004). A similar observation is made for figure-ground textures that differ in saliency (Supèr et al., 2001). Modulation is stronger for perceptually salient textures than for low-salient ones, and strong figure-ground modulation corresponds to rapid reaction times, whereas weak modulation is seen for longer reaction times. In addition, for an identical stimulus the amount of figure-ground modulation varies over time (Supèr et al., 2003b; Supèr and Lamme, 2007), and this variability predicts the saccadic reaction time to that stimulus. Furthermore, the planning and execution of actions are not necessarily dependent on conscious awareness of the visual stimulus (Binsted et al., 2007), which fits well with the idea that the oculomotor system uses the figure-ground signal as guidance information (Supèr, 2006). Figure-ground activity reflects neither the sensory stimulus nor the motor signal but an intermediate stage in the process of visuomotor transformation. That is to say, figure-ground activity is present whether or not a saccade is made towards the figure location, and figure-ground activity can be present whether or not the figure is perceived consciously (Supèr et al., 2001). A corresponding finding comes from a study in which presenting a masking stimulus 100 ms after a visual cue abolished perception of the cue and also led to an inability to perform the correct eye movements (Lamme et al., 2002; Lalli et al., 2006). Thus, the variability in the strength of figure-ground responses in V1 correlates with saccadic reaction times, suggesting that this perceptual signal is used for the subsequent behavioral response. Figure-ground modulation in the primary visual cortex is observed in upper and lower cortical layers and can in principle provide stimulus information to both the dorsal and the ventral stream. This means that the same neural signal of the stimulus can be used for perception and action.


Ambiguous Figures

As said earlier, stimulus-driven and goal-driven processes control our gaze. These two types of (segregated) mechanisms also determine our conscious experience of the visual world. Consequently, the oculomotor system is intimately linked to the neural circuits that process visual information into a perceptual experience. Yet the visual information and the perceptual experience of that information do not always match. There are visual stimuli that magnify the discrepancy between the information coming into our visual sensors (bottom-up processing) and the perception we build from it (top-down processing). Examples are ambiguous figures such as the Necker cube (Necker, 1832) or Rubin's vase-faces (Rubin, 1915), which give a bi-stable percept (fig. 1). These reversible figures are ambiguous visual patterns that support at least two markedly different perceptual organizations. During a period of continuous viewing, observers' conscious experience fluctuates, alternating, or switching, between the possible interpretations. On the one hand, these stimuli can give us an idea of how oculomotor behavior is modulated by bi-stable perception; on the other hand, they show how switching percepts are modulated by eye movements. Both questions have been studied, because these images provide a reliable behavioral indicator for a mental event unaccompanied by changes in external stimulation.

Several studies have shown, by compensating for small fixational eye movements (Scotto et al., 1990) or using after-images (Blake et al., 1971), that eye movements are not necessary for perceptual shifts to occur. These results indicate that a cognitive process is by itself enough for bi-stability to take place, pointing to a separation of perceptual and oculomotor mechanisms. However, other evidence suggests that when eye movements are allowed, some interaction between eye movements and perceptual alternations can be observed. An early study (Glen, 1940), examining three different viewing conditions (speeded-up alternation rate, natural viewing, and slowed-down alternation rate), showed that in the speed-up condition participants could increase the number of perceptual alternations. However, in this case the number of eye movements was also increased relative to the number in the natural viewing condition. In the slow-down condition the reverse result was found. Based on these observations it was concluded that there is some interdependence between eye movements and the voluntary control of perceptual alternations. However, the precise interaction between these two processes remained unclear. More recent research has shown that no single eye movement indicator identifies a unique moment when a perceptual switch occurs; instead, several eye movement indicators (fixation density, microsaccades and saccades) can be combined to obtain a detailed image of the time course of a perceptual switch as a slow process with cascaded stages (Ito et al., 2003). Studies show that the average eye position of most subjects takes an extreme value at about the time when the observer's perception switches under free viewing conditions. Comparing the results for the bi-stable percept with those obtained with a non-ambiguous cube, it could be inferred that the polarity of the extreme corresponds to the percept that the subject had before the switch. This indicates a bi-directional coupling between eye position and perceptual switching; that is, after a perceptual switch the eye position changes towards a location consistent with the newly established percept (Einhäuser et al., 2004).

Figure 1. Ambiguous figures. These images are able to elicit two different perceptual experiences, but only one of them is active at any time. (A) The Necker cube, after Necker (1832). The Necker cube elicits the perceptual experience of a cube receding in depth (if one fixates on the upper left corner of the front side) or of a cube that seems to come out of the screen (when one fixates on the lower right back corner of the cube). (B) Rubin's vase (1915). This image also elicits two different perceptual experiences, related to which part is considered figure and which ground. That is, when you consider the black part of the image as the figure, what you see is a vase, but if the black part is considered the background and the white part the figure, what you see is two faces gazing at each other.


Laura Perez Zapata, Maria Sole Puig and Hans Supèr

Simple Embedded Figures

To fully understand the neural mechanisms of bottom-up and top-down control of oculomotor behavior, studying eye movements while viewing more complex figures is needed. Possible useful stimuli are simple figures embedded in complex ones (see fig. 2). To detect the embedded figure, the observer typically scans the image with his eyes and compares the sensory data with the memorized image of the simple figure. One can then extract the fixation periods, which provide local feature information that needs to be integrated into the global shape of the figure. So in these tasks, bottom-up information about local features and top-down information about the global figure shape need to be compared to solve the task. We tested subjects with the Group Embedded Figures Test (GEFT) (Witkin et al., 1971). The subjects had to detect whether a simple outline figure was embedded or not in the complex one (fig. 2). To do the task, subjects scanned the complex stimuli for the embedded figure. However, comparing the scan paths, i.e. the number, frequency and positions of fixations, showed no differences between correct and incorrect judgments. So one could conclude that eye movements (fixations) were not controlled by top-down signals, because whether the outline was noticed or not, or was present or absent, made no difference. Instead, the eye scan path is likely determined by bottom-up information, which by itself was not predictive of the subject's outcome. Arguably, the integration of local bottom-up sensory signals and global top-down information occurs at a higher decision stage, beyond the oculomotor system. So, correct integration of bottom-up and top-down signals can still lead to an incorrect response when processing at the decision stage fails.

Figure 2. Scan path of a subject performing an embedded figure test. A: The simple small figure on the top-left is embedded in the lower right complex figure. See the thick outline that illustrates this. B: Scan path when viewing the stimulus. The thin traces represent the actual movement trajectory of the left eye. Straight lines are saccades and asterisks are fixations.
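The scan-path comparison described here (counting fixations and measuring their durations and positions per trial) can be sketched in a few lines of code. The sketch below is illustrative only: the fixation data and the `scanpath_summary` helper are hypothetical, not the actual analysis pipeline used in the GEFT study.

```python
from statistics import mean

def scanpath_summary(fixations):
    """Summarize one trial's scan path from (x, y, duration_ms) fixations:
    fixation count, mean fixation duration, and total inter-fixation
    path length (a rough proxy for summed saccade amplitude)."""
    durations = [d for _, _, d in fixations]
    path_len = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:])
    )
    return {"n_fix": len(fixations),
            "mean_dur": mean(durations),
            "path_len": path_len}

# Hypothetical trials: a list of fixations plus the judgment's correctness.
trials = [
    ([(100, 100, 250), (180, 120, 300), (260, 140, 220)], True),
    ([(90, 110, 240), (200, 130, 280)], False),
]

# Group the summaries by outcome so correct and incorrect trials can be
# compared, as in the analysis that found no scan-path difference.
summaries = {True: [], False: []}
for fixations, correct in trials:
    summaries[correct].append(scanpath_summary(fixations))
```

Richer metrics (fixation density maps, microsaccade rates) would follow the same pattern of per-trial summaries grouped by outcome.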

Individuals with atypical visual-spatial perception show distinctive performance on such visual tasks as the EFT. For example, individuals with autism show better performance on the EFT. Although the underlying neural mechanisms are not clear, these results have been related to the weak central coherence theory, which proposes that
individuals with autism have a bias towards processing local information, and thus fail to integrate parts into a global whole (Happé and Frith, 2006). A term used in cognitive psychology is cognitive style, which describes how an individual thinks, perceives and remembers information. Witkin related this concept to vision, and field-independence versus field-dependence is probably the best-known style. It refers to a tendency to approach the environment in an analytical (field-independent), as opposed to global (field-dependent), fashion. Therefore, given the same scene, field-dependent observers are poorer at segregating simple figures than field-independent ones. It can be speculated that the difference in visual performance between these two perceptual styles is caused by (or, alternatively, results in) a difference in the scan path of the eye movements. We tested this using a modified version of the Group Embedded Figures Test (GEFT). First we selected field-independent and field-dependent subjects and tested them in a computerized GEFT task. We found that field-independent subjects have, besides improved detection performance in this task, also a different scan path. These subjects apparently use a different, more efficient strategy to detect visual information. Thus, it appears that field-independent persons have a view of the world that allows them to detect the details of the environment. Our results support the idea that cognitive processes can influence the scan paths used to explore the same image, and that the field-independent group benefits from a better top-down strategy.

Geometrical Illusions and 3D Representations

Using another set of visual images, visual illusions, we can dissociate bottom-up and top-down processes. In these geometrical illusions the visual experience of the stimulus (a phenomenological construct) differs from its physical parameters. Such optical-geometric illusions involve systematic distortions of the size and shape of one or more of the elements within a visual array. Some examples of such visual illusions are given in figure 3.

Figure 3. Geometrical illusions. A: "Müller-Lyer" illusion, where the central part perceptually seems longer in the wings-out condition (top stimulus) than in the wings-in (central) and control (lower) stimuli. B: Ponzo illusion, where both horizontal lines have the same size but, owing to the context (the inclined lines around the central ones), the upper line seems longer than the lower one. C: The Brentano illusion is derived from the Müller-Lyer illusion. Two possible configurations are shown: one with the wings-in part at the top and one with the wings-out part at the top.

Studies using these geometrical illusions usually conclude that eye movements (saccades) are modulated by perceptual signals; that is, saccades respond to the perceived sizes and not to the physical ones. For example, experiments involving the Müller-Lyer (ML) illusion (fig. 3a) suggest that an 'overshooting' bias exists for eye movements toward the
wings-out ML endpoints and an 'undershooting' bias for movements toward the wings-in ML endpoints (Binsted & Elliott, 1999; Bernardis et al., 2005; De Grave et al., 2006). Indeed, some studies (McCarley et al., 2008; DiGirolamo et al., 2008) found that the ML illusion had larger effects on perceptual judgments and on voluntary saccades (eye movements generated in the absence of an exogenous visual cue) than on reflexive saccades (movements generated in response to a transient visual signal). These findings are interpreted as evidence for the postulated segregation of visual pathways (Goodale & Milner, 1992), in which one pathway, the ventral path, relates to perceptive processes and the other (the parietal path) to oculomotor programming (McCarley & Grant, 2008). One possible hypothesis to explain these observations is that the illusion arises in the primary visual cortex (Weidner & Fink, 2007) and influences saccade programming via projections from the cortex to the superior colliculus. Studies supporting this hypothesis (De Grave et al., 2006b), working with the Brentano illusion (fig. 3c), found that eye movements are influenced by visual illusions only if the action is guided by the attribute that is fooled by the illusion. However, an alternative explanation has been proposed that is not based on perception: saccades towards complex stimuli are known to be pulled towards the "centre of gravity" of the stimulus configuration (He & Kowler, 1989; Vishwanath & Kowler, 2004). On this account, saccades towards a wings-in stimulus might end closer to the center of the figure, whereas those towards a wings-out stimulus terminate further away; the difference derives from the different centers of gravity of the two stimuli. Both explanations fit with a top-down control of saccade programming, i.e. the perceptual construct must be obtained to control the direction for programming the saccade.
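The centre-of-gravity account can be illustrated numerically. The sketch below (hypothetical coordinates, not taken from any of the cited experiments) represents the endpoint region of a horizontal Müller-Lyer shaft by the shaft tip plus its two wing tips, and shows that the centroid of a wings-out configuration lies beyond the physical endpoint while that of a wings-in configuration lies short of it, matching the overshoot and undershoot biases described above.

```python
def centroid(points):
    """Mean (x, y) position of a set of 2-D points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

# Horizontal shaft ending at x = 100 (arbitrary units); wing tips of
# length 20 at 45 degrees, so each tip is offset by about 14.1 units.
end = (100.0, 0.0)
dx = dy = 14.1

wings_out = [end, (end[0] + dx, dy), (end[0] + dx, -dy)]  # tips point away
wings_in = [end, (end[0] - dx, dy), (end[0] - dx, -dy)]   # tips fold back

cx_out, _ = centroid(wings_out)  # ~109.4: beyond the endpoint -> overshoot
cx_in, _ = centroid(wings_in)    # ~90.6: short of the endpoint -> undershoot
```

A saccade landing at the configuration's centroid rather than the shaft endpoint would thus reproduce the amplitude biases without any appeal to perceived length.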
Evidence that perceptual information only guides eye movements at a late stage of processing comes from experiments on the hollow-face illusion (Hoffman & Sebald, 2007). When observers look at the inside of a face mask from a certain distance, the mask often appears as a convex hemispherical face. Recording observers' eye movements photo-electrically while a mask rotated around a vertical axis through the tip of the nose showed that the illusory perception can modulate the observers' vergence eye movements, although it has to be said that in this experiment only late vergence components were recorded. The authors postulated that vergence eye movements are first synchronized to the veridical binocular disparity and become driven by the illusory distance of the tip of the nose as soon as the reactivation of face knowledge (top-down information) gives rise to the illusory convex face. Other researchers have investigated the induction of eye movements without the perception of a stimulus (Masson et al., 1997; Teichert et al., 2008; Wismeijer et al., 2008). Masson et al. (1997), for example, briefly presented their subjects with two target stimuli, one for each eye, with crossed and uncrossed disparities. The stimuli always elicited vergence movements in accordance with the given disparities, even if the stimuli were too different to be fused into a stable percept. This direct initiation of vergence movements by local disparities, irrespective of subjective perception, provides evidence that eye movements are programmed in order to perceive the stimulus, and makes it very unlikely that eye movements are susceptible to cognitive top-down influences; in contrast to the conclusion drawn from the ML studies. To further investigate the bottom-up versus top-down dilemma, we carried out an experiment in which observers were instructed to perform voluntary saccades between two targets in a textured stimulus based on the Ponzo illusion (fig. 3b).
Our aim was to answer the question whether saccade programming is governed by the physical target location (bottom-up)
and/or by the perceived target location (top-down) (Perez et al., 2010). We used induced-depth stimuli in which a target is perceived to be further away than it really is (fig. 4). Participants had to fixate on one of two points, in an induced near or far plane, performing a saccade to the other one every 2 seconds. By studying the differences between the saccade amplitudes expected for the physical target locations and the measured ones, we tested whether saccades are directed towards the perceptual or the physical location (fig. 4b). The results showed that at the onset of fixation the eye positions are directed towards the perceived target location (fig. 4c). Only after fixation (>~200 ms) was the visual system able to correct this 'depth illusion' by a fixational saccade and vergence eye movements. Thus eye movements (saccades and vergence) are guided by the perceived stimulus, and during fixation eye movements are guided by the physical stimulus.

Figure 4. Eye movements in physical versus perceived distance. A: Observers were asked to perform a saccade from the lower point to the upper one and, while fixating, to indicate by a button press whether they were fixating on a near or a far plane. B: Hypothesis behind the experimental setting. Saccade angles would be smaller if observers were programming their eye movements to the perceptual plane (grey lines) than if the movements were directed to the physical plane (the computer screen, black lines). C: Comparison of eye movements performed when observers were making a saccade from a perceptually far to a near plane (black line) or from a near to a far plane (grey line). One can see differences between the two stimuli after saccade onset; these differences are in the direction expected if eye movements are guided by a perceptual construct. During the response period these differences in eye position disappeared, so the fixation position was corrected to the physical location.
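Under a simplifying assumption, the hypothesis in figure 4B can be put into numbers: if the oculomotor system treats the two targets as lying in a frontoparallel plane at the perceived (greater) distance while keeping their separation, the programmed saccade angle comes out smaller than the one required by the physical screen distance. The numbers below are arbitrary illustrations, not the parameters of the actual experiment.

```python
from math import atan, degrees

def saccade_angle(separation, distance):
    """Angular separation (degrees) between two targets `separation`
    apart in a frontoparallel plane viewed from `distance` away,
    with the midpoint straight ahead (same units for both arguments)."""
    return degrees(2 * atan(separation / (2 * distance)))

# Targets 10 cm apart on a screen 57 cm from the observer.
physical = saccade_angle(10, 57)    # ~10.0 deg to the physical location
# Same separation attributed to a plane perceived ~20 cm further away:
perceived = saccade_angle(10, 77)   # ~7.4 deg: a smaller programmed saccade
```

On this toy geometry, a saccade programmed to the perceived plane falls short of the physical target, which is the direction of the early eye-position differences reported above.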


Conclusion and Further Research

In this chapter we have given a brief overview of the bottom-up and top-down signals that control saccade guidance and perception and have discussed how eye movements depend on these factors. After reviewing several studies, one can conclude that both processes are significantly involved in oculomotor behavior; however, their exact roles remain elusive. Different theories have been postulated to explain bottom-up and top-down control, and these conclusions are based on data obtained from different experimental designs and tasks. So, to develop a coherent and integrative model of oculomotor control, a more controlled use of experimental factors, such as task and stimulus, is needed.

References

[1] Aivar, M. P., Hayhoe, M. M. & Chizk, C. L. (2005) Spatial memory and saccadic targeting in a natural task. Journal of Vision, 5(3), 177-193.
[2] Bernardis, P., Knox, P. & Bruno, N. (2005) How does action resist the visual illusion? Uncorrected oculomotor information does not account for accurate pointing in peripersonal space. Experimental Brain Research, 162(2), 133-144.
[3] Binsted, G. & Elliott, D. (1999) The Muller-Lyer illusion as a perturbation to the saccadic system. Human Movement Science, 18, 103-117.
[4] Binsted, G., Brownell, K., Vorontsova, Z., Heath, M. & Saucier, D. (2007) Visuomotor system uses target features unavailable to conscious awareness. Proceedings of the National Academy of Sciences of the United States of America, 104(31), 12669-12672.
[5] Blake, R. R., Fox, R. & McIntyre, C. (1971) Stochastic properties of stabilized-image binocular rivalry alternations. Journal of Experimental Psychology, 88(3), 327-332.
[6] Brigner, W. L. & Deni, J. R. (1990) Depth reversals with an equiluminant Necker cube. Perceptual and Motor Skills, 70(3 Pt 2), 1088.
[7] Cave, K. R. & Wolfe, J. M. (1990) Modeling the role of parallel processing in visual search. Cognitive Psychology, 22, 225-271.
[8] Chua, H. F., Boland, J. E. & Nisbett, R. E. (2005) Cultural variation in eye movements during scene perception. Proceedings of the National Academy of Sciences of the United States of America, 102(35), 12629-12633.
[9] De Grave, D. D. J., Franz, V. H. & Gegenfurtner, K. R. (2006) The influence of the Brentano illusion on eye and hand movements. Journal of Vision, 6(7), 727-738.
[10] De Grave, D. D. J., Smeets, J. B. J. & Brenner, E. (2006b) Why are saccades influenced by the Brentano illusion? Experimental Brain Research, 175, 177-182.
[11] DiGirolamo, G. J., McCarley, J. S., Kramer, A. F. & Griffin, H. J. (2008) Voluntary and reflexive eye movements to illusory lengths. Visual Cognition, 16(1), 68-69.
[12] Einhäuser, W., Martin, K. A. C. & König, P. (2004) Are switches in perception of the Necker cube related to eye position? European Journal of Neuroscience, 20(10), 2811-2818.
[13] Findlay, J. M. (1997) Saccade target selection during visual search. Vision Research, 37(5), 617-631.


[14] Fink, G. R., Halligan, P. W., Marshall, J. C., Frith, C. D., Frackowiak, R. S. & Dolan, R. J. (1997) Neural mechanisms involved in the processing of global and local aspects of hierarchically organized visual stimuli. Brain, 120(10), 1779-1791.
[15] Glen, R. S. (1940) Ocular movements in reversibility of perspective. Journal of General Psychology, 23, 243-281.
[16] Godijn, R. & Theeuwes, J. (2002) Programming of endogenous and exogenous saccades: evidence for a competitive integration model. Journal of Experimental Psychology: Human Perception and Performance, 28(5), 1039-1054.
[17] Goodale, M. A. & Milner, A. D. (1992) Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20-25.
[18] Happé, F. & Frith, U. (2006) The weak coherence account: detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25.
[19] Hayhoe, M. M. & Ballard, D. (2005) Eye movements in natural behavior. Trends in Cognitive Sciences, 9(4), 188-194.
[20] Hayhoe, M. M., Shrivastava, A., Mruczek, R. & Pelz, J. B. (2003) Visual memory and motor planning in a natural task. Journal of Vision, 3(1), 49-63.
[21] He, P. Y. & Kowler, E. (1989) The role of location probability in the programming of saccades: implications for "center-of-gravity" tendencies. Vision Research, 29(9), 1165-1181.
[22] Hochstein, S. & Ahissar, M. (2002) View from the top: hierarchies and reverse hierarchies in the visual system. Neuron, 36(5), 791-804.
[23] Hoffman, J. & Sebald, A. (2007) Eye vergence is susceptible to the hollow-face illusion. Perception, 36, 461-470.
[24] Ito, J., Nikolaev, A. R., Luman, M., Aukes, M. F., Nakatani, C. & van Leeuwen, C. (2003) Perceptual switching, eye movements, and the bus paradox. Perception, 32(6), 681-698.
[25] Itti, L. & Koch, C. (2000) A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10-12), 1489-1506.
[26] Jacob, M. & Hochstein, S. (2009) Comparing eye movements to detected vs. undetected target stimuli in an identity search task. Journal of Vision, 9(5), 201-216.
[27] Kastner, S. & Ungerleider, L. G. (2000) Mechanisms of visual attention in the human cortex. Annual Review of Neuroscience, 23, 315-341.
[28] Kopecz, K. (1995) Saccadic reaction times in gap/overlap paradigms: a model based on integration of intentional and visual information on neural, dynamic fields. Vision Research, 35(20), 2911-2925.
[29] Lalli, S., Hussain, Z., Ayub, A., Charco, R. Q., Bodis-Wollner, I. & Amassian, V. E. (2006) Role of calcarine cortex (V1) in perception of visual cues for saccades. Clinical Neurophysiology, 117(9), 2030-2038.
[30] Lamme, V. A., Supèr, H., Landman, R., Roelfsema, P. R. & Spekreijse, H. (2000) The role of primary visual cortex (V1) in visual awareness. Vision Research, 40(10-12), 1507-1521.
[31] Lamme, V. A., Zipser, K. & Spekreijse, H. (2002) Masking interrupts figure-ground signals in V1. Journal of Cognitive Neuroscience, 14(7), 1044-1053.
[32] Land, M. F. & McLeod, P. (2000) From eye movements to actions: how batsmen hit the ball. Nature Neuroscience, 3, 1340-1345.


[33] Larsson, J., Amunts, K., Gulyás, B., Malikovic, A., Zilles, K. & Roland, P. E. (2002) Perceptual segregation of overlapping shapes activates posterior extrastriate visual cortex in man. Experimental Brain Research, 143(1), 1-10.
[34] Lindauer, M. S. & Lindauer, J. G. (1970) Brightness differences and the perception of figure-ground. Journal of Experimental Psychology, 84(2), 291-295.
[35] Masson, G. S., Busettini, C. & Miles, F. A. (1997) Vergence eye movements in response to binocular disparity without depth perception. Nature, 389(6648), 283-286.
[36] McCarley, J. & Grant, C. (2008) State-trace analysis of the effects of a visual illusion on saccade amplitudes and perceptual judgments. Psychonomic Bulletin & Review, 15(5), 1008-1014.
[37] Morris, R. K. (1994) Lexical and message-level sentence context effects on fixation times in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 20(1), 92-103.
[38] Mulckhuyse, M., Van Zoest, W. & Theeuwes, J. (2008) Capture of the eyes by relevant and irrelevant onsets. Experimental Brain Research, 186(2), 225-235.
[39] Nothdurft, H. C. (2002) Latency effects in orientation popout. Vision Research, 42(19), 2259-2277.
[40] Parkhurst, D. J. & Niebur, E. (2003) Scene content selected by active vision. Spatial Vision, 16(2), 125-154.
[41] Pérez, L., Aznar-Casanova, J. A. & Supèr, H. (2010) Dissociation of eye movement signals and perception during fixation. 2010 VSS Annual Meeting, Naples, Florida.
[42] Rayner, K., Kambe, G. & Duffy, S. A. (1996) Eye movement control in reading: a comparison of two types of models. Journal of Experimental Psychology: Human Perception and Performance, 22, 1188-1200.
[43] Saslow, M. G. (1967) Effects of components of displacement-step stimuli upon latency for saccadic eye movement. Journal of the Optical Society of America, 57(8), 1024-1029.
[44] Scotto, M. A., Oliva, G. A. & Tuccio, M. T. (1990) Eye movements and reversal rates of ambiguous patterns. Perceptual and Motor Skills, 70(3 Pt 2), 1059-1073.
[45] Supèr, H., Spekreijse, H. & Lamme, V. A. (2001) Two distinct modes of sensory processing observed in monkey primary visual cortex (V1). Nature Neuroscience, 4(3), 304-310.
[46] Supèr, H., Spekreijse, H. & Lamme, V. A. (2003) Figure-ground activity in primary visual cortex (V1) of the monkey matches the speed of behavioural response. Neuroscience Letters, 344(2), 75-78.
[47] Supèr, H., Spekreijse, H. & Lamme, V. A. (2003b) Internal state of monkey primary visual cortex (V1) predicts figure-ground perception. Journal of Neuroscience, 23(8), 3407-3414.
[48] Supèr, H., Van der Togt, C., Spekreijse, H. & Lamme, V. A. (2004) Correspondence of presaccadic activity in the monkey primary visual cortex with saccadic eye movements. Proceedings of the National Academy of Sciences of the United States of America, 101(9), 3230-3235.
[49] Supèr, H. (2006) Figure-ground activity in V1 and guidance of saccadic eye movements. Journal of Physiology - Paris, 100, 63-69.
[50] Supèr, H. & Lamme, V. A. (2007) Altered figure-ground perception in monkeys with extra-striate lesion. Neuropsychologia, 45(14), 3329-3334.


[51] Theeuwes, J. (1992) Perceptual selectivity for color and form. Perception & Psychophysics, 51(6), 599-606.
[52] Theeuwes, J. (1994) Endogenous and exogenous control of visual selection. Perception, 23(4), 429-440.
[53] Theeuwes, J., Kramer, A. F., Hahn, S., Irwin, D. E. & Zelinsky, G. J. (1999) Influence of attentional capture on oculomotor control. Journal of Experimental Psychology: Human Perception and Performance, 25(6), 1595-1608.
[54] Teichert, T., Klingenhoefer, S., Wachtler, T. & Bremmer, F. (2008) Depth perception during saccades. Journal of Vision, 8(14), 27, 1-13.
[55] Trappenberg, T. P., Dorris, M. C., Muñoz, D. P. & Klein, R. M. (2001) A model of saccade initiation based on the competitive integration of exogenous and endogenous signals in the superior colliculus. Journal of Cognitive Neuroscience, 13(2), 256-271.
[56] Van Zoest, W. & Donk, M. (2004) Bottom-up and top-down control in visual search. Perception, 33(8), 927-937.
[57] Van Zoest, W. & Donk, M. (2006) Saccadic target selection as a function of time. Spatial Vision, 19(1), 61-76.
[58] Vishwanath, D. & Kowler, E. (2004) Saccadic localization in the presence of cues to three-dimensional shape. Journal of Vision, 4(6), 445-458.
[59] Weidner, R. & Fink, G. R. (2007) The neural mechanisms underlying the Müller-Lyer illusion and its interaction with visuospatial judgements. Cerebral Cortex, 17(4), 878-884.
[60] Wismeijer, D. A., Van Ee, R. & Erkelens, C. J. (2008) Depth cues, rather than perceived depth, govern vergence. Experimental Brain Research, 184(1), 61-70.
[61] Witkin, H. A., Oltman, P. K., Raskin, E. & Karp, S. A. (1971) Group Embedded Figures Test manual. Palo Alto, CA: Consulting Psychologists Press.
[62] Yarbus, A. L. (1967) Eye Movements and Vision. New York: Plenum Press.
[63] Zangemeister, W. H., Oechsner, U. & Freksa, C. (1995) Short-term adaptation of eye movements in patients with visual hemifield defects indicates high level control of human scanpath. Optometry and Vision Science, 72(7), 467-477.


In: Eye Movement: Theory, Interpretation, and Disorders
ISBN 978-1-61728-110-5, © 2011 Nova Science Publishers, Inc.
Editor: Dominic P. Anderson, pp. 119-160

Chapter 7

Characterizing Eye Movements for Performance Evaluation of Software Review

Hidetake Uwano∗

Nara National College of Technology

Abstract

In this chapter, we introduce research on the analysis of reading strategy in software review by characterizing a developer's eye movements. Software review is a technique to improve the quality of software documents (such as source code or requirements specifications) and to detect defects by reading the documents. In software review, differences between individuals are more dominant than review techniques and other factors. Hence, understanding the factors that distinguish a high-performance reviewer from a low-performance reviewer is necessary to develop practical support and training methods. This research reveals the factors affecting review performance through an analysis of the reading procedure in software review. Measuring eye movements on each line and in each document allows a correlation analysis between reading procedure and review performance. In this research, eye movements are classified into two types: eye movements between lines and eye movements between documents. Two experiments analyzed the relationship between the type of eye movements and review performance. In the first experiment, eye movements between lines of source code were recorded. As a result, a particular pattern of eye movements, called a scan, was identified in the subjects' eye movements. Quantitative analysis showed that reviewers who did not spend enough time on the scan took more time on average to find defects.

∗E-mail address: [email protected]


These results depict how the line-wise reading procedure affects review performance and suggest that a more concrete direction of reading improves it. In the second experiment, eye movements between multiple documents (software requirements specification, detailed design document, source code, etc.) were recorded. The results showed that reviewers who concentrated their eye movements on high-level documents (the software requirements specification and the detailed design document) found more defects in the review target document, and did so efficiently. In particular, in code review, reviewers who balanced their reading time between the software requirements specification and the detailed design document found more defects than reviewers who concentrated on the detailed design document alone. These results are good evidence to encourage developers to read high-level documents during review.


Keywords: Software Engineering, Software Review, Human Factor

1. Introduction

1.1. Background

Improvement of software quality has become extremely important today because of the increase in large-scale software systems. Software defects (i.e., bugs and failures) in large-scale systems such as banking systems cause serious economic and social damage to system users, and defects in power plants, airplanes, and train systems can cause fatal accidents [26]. The literature reports serious crises caused by software defects, such as the breakdown of a train station ticket examination system and engine trouble in running automobiles [17]. Elimination of defects in software before delivery is therefore a necessary activity in software development organizations. In most software projects, defects in a system are detected by testing and removed by debugging. However, due to the growth in the size of recent software systems, defect detection requires massive costs. In addition, many defects usually remain in delivered software, which increases the maintenance cost of fixing defects in the field. Since the size of today's software systems keeps growing, effective and efficient defect detection techniques in software development are indispensable.

1.2. Quality Improvement Activities in Software Development

Many techniques to improve software quality have been proposed in the field of software engineering. Software review is one of the most widely used techniques in software development organizations. Software review is a peer review of software system documents such as source code or requirements specifications, intended to find and fix defects (i.e., bugs) overlooked in early development phases and thus improve overall system quality [5]. Hundreds of studies have been performed to improve software review performance [21]; in particular, studies of software review techniques are the most common. In Chapter 2, a literature review of software review is presented. Software review can be applied in the early phases of development; hence, software


review is a more effective technique than testing [28, 14, 20, 46]. Especially in developing large-scale software applications, software review is vital, since it is quite expensive to fix defects in the later integration and testing stages. One study shows that review and its variants, such as walk-through and inspection, can discover 50 to 70 percent of defects in software products [46]. On the other hand, much of the literature on software review considers individual differences among developers to heavily affect software reliability [4, 5, 7, 11, 12, 27, 30, 38]. As Bucher described in his paper: "the prime factor affecting the reliability of software is the selection, motivation and management of the personnel who design and maintain it" [7]. Several books have described the human issue as a more important factor than technical issues such as tools, development techniques, programming languages, and processes [5, 12]. Research analyzing software development project data showed that individual differences among developers vary by a factor of 5 to 28 [4, 38]. In quality improvement activities in software development, the performance of the developer is likewise affected by individual differences. Myers indicated that there is a tremendous amount of variability in individual performance at inspections (a sort of software review) and software testing [30]. For example, one subject found nine defects at the inspection, while another subject who inspected the same document found only three. Laitenberger et al. also reported many differences in individual performance [22]: in their experiment, the number of defects a subject found varied from zero to eighteen.
While much of the literature mentions the importance of individual differences in software development, only a few studies examine the factors behind those differences. Analyzing the factors of individual differences through empirical evaluation is an important way to improve software development effectiveness. That is, analyzing an experienced developer's activities lets us establish novel development techniques that encourage other developers to perform similar activities, improving their performance. Likewise, analyzing an inexperienced developer's activities advances methods for training novice developers. Hence, analyzing the factors of individual differences from developers' activities is a fruitful research topic.

1.3. Characterizing Individual Differences in Software Review

This chapter clarifies the factors of individual differences in one of the quality improvement activities: software review.1 In a review, a reviewer reads the document, understands the structure and/or functions of the system, and then detects and fixes defects, if any. Software review is an off-line task conducted by human reviewers without executing the system. Because review is fundamentally a human-centered activity, the impact of individual differences on the review is quite dominant.

To characterize the developer's performance in the review, we propose using the eye movements of the reviewer. The way of reading software documents (i.e., the reading strategy) should vary among different reviewers, and the reading strategies are indicated by the eye movements of the reviewers. Thus, we consider that eye movements can be used as a powerful metric to characterize performance in software review.

1 Hereafter, the word "review" indicates software review, inspection, walkthrough, and/or other reading techniques.

122 Hidetake Uwano

In this chapter, two experiments that measure reviewers' eye movements during software review are described. To record the reviewers' eye movements, a gaze-based review evaluation system was created. Using the system, the way of reading documents was analyzed from two different viewpoints: eye movements between lines and eye movements between documents.

2. Software Review

Software review is a technique to improve the quality of software documents and detect defects (i.e., bugs or faults) by reading the documents [5]. In software review, a developer reads software requirements specifications, design documents, source code, and other documents to understand the system's functions and structures, and then detects defects in them. Defect detection by review can be performed in the early phases of software development, before the system is implemented; therefore, rework costs can be reduced [23]. Especially in large-scale projects, where defect detection and correction consume huge resources, defect detection by review is necessary. A study shows that review and its variants, such as walkthrough and inspection, can discover 50 to 70 percent of the defects in a software product [46].


2.1. Target Document

In software development, several software documents are created in each development process. Basically, the developer can review every document created in each process. An IEEE standard lists 37 software documents as review targets [18]. Here, we describe the three documents mainly used in studies of software review.

• Software Requirements Specification: A software requirements specification (SRS) is a description of the purpose and environment of software under development. The SRS fully describes what the software will do and how it will be expected to perform. Software review is the most common way of validating an SRS [33].

• Software Design Document: A software design document (SDD) is a description of a software product that a software designer writes in order to give the software development team overall guidance on the architecture of the software project.

• Source Code: In software development, source code is implemented after the requirements specification and design phases. Source code is frequently used as a review target in the literature on software review. Several studies showed that code review is significantly more effective than software testing for defect detection [28, 9, 20].


2.2. Review Technique

Several methodologies for software review have been proposed so far. The idea behind these methods is to propose certain criteria for reading the documents.

A review without any reading criteria is called Ad-Hoc Reading (AHR). AHR offers very little reading support, since a software product is simply given to inspectors without any direction or guidelines on how to proceed through it and what to look for [21]. In this case, defect detection performance fully depends on the skill, knowledge, and experience of the inspector, who may compensate for the lack of reading support.

Checklist-Based Reading (CBR) [14] introduces a checklist with which reviewers check for typical mistakes in the document. This technique has been commonly used in software review since the 1970s. Checklists are based on a set of specific questions intended to guide the reviewer during the review. The checklist helps the reviewer remember which aspects are to be checked, but offers little guidance on what specifically to do.

Perspective-Based Reading (PBR) has the reviewers read the document from several different viewpoints, such as the designer's, the programmer's, and the tester's [41]. To examine a document from a particular viewpoint, the PBR technique provides guidance for the inspector in the form of a PBR scenario on how to read and examine the document.


2.3. Impact of Review Techniques and Individual Differences

To evaluate the performance of review techniques, hundreds of empirical studies have been conducted [9]. Some empirical reports have shown that CBR, the most widely used method in the software industry, is no more efficient than AHR [35, 34]. Some studies show PBR achieving slightly better performance than CBR and AHR [1, 35, 34, 40, 43]. On the other hand, Halling et al. [16] report the opposite observation, that CBR is better than PBR: in their work, reviewers who used a checklist were more effective with regard to all defects in the document than reviewers who used a viewpoint. Several case studies have shown no significant difference between these methods [15, 24, 29, 37, 39].

The reason the results vary among the empirical studies is that the performance of individual reviewers is more dominant than the review technique itself, because the review task involves many human factors. Thelin et al. [43] compared the effectiveness of CBR and Usage-Based Reading (UBR), a reading technique similar to PBR. The results show that the effectiveness of UBR is 1.2–1.5 times better than CBR on average. On the other hand, the results also show that the defect detection rate of reviewers using the same review technique varies among individuals (0%–50%). That is, large individual differences within the same review technique come from the reviewers' more detailed activities, not from the abstract reading guidelines. For example, PBR instructs reviewers to use the perspectives of stakeholders such as the user, the developer, and the tester. Each perspective gives scenarios describing how the reviewer should read the software documents. However, such a scenario gives only guidelines or criteria; a more detailed, concrete reading procedure is not described. Hence, each reviewer reads the documents in their own way. The results of a previous study showed that individual differences in the detailed reading procedure are a major factor in review performance.

Analyzing the detailed reading procedure in the review through empirical evaluation is an important way to improve software development effectiveness. That is, analyzing an experienced developer's activities lets us establish novel development techniques that encourage other developers to perform similar activities, improving their performance. Likewise, analyzing an inexperienced developer's activities advances training methods for novice developers. Hence, analyzing the factors of individual differences from developers' activities is a fruitful research activity.

2.4. Section Summary

This section discussed review target documents and review techniques. Software review is performed on a number of software documents, with different techniques such as CBR and PBR. This section also described the impact of individual differences in software review: from the viewpoint of review performance, individual differences are more dominant than technique-wise differences.

3. Eye Movement in Software Review


To characterize the reviewer's performance in an objective way, we propose to use the eye movements of the reviewer. The way of reading software documents (i.e., the reading strategy) should vary among different reviewers. The reading strategy is indicated by the eye movements of the reviewers. Thus, we consider that eye movements can be used as a powerful metric to characterize performance in software review. The advantages of using eye movements to evaluate software review are as follows:

• Measurement of detailed review activity: Generally, most of the reviewer's knowledge is difficult to express in words; hence, a common analysis method such as an interview cannot extract the reviewer's characteristics. Detailed analysis of eye movements captures the reviewer's knowledge from their reading procedure.

• Ease of measurement: The second advantage of adopting eye movements is that they provide data without any training of the reviewers. A related method, the think-aloud protocol [13], records audio and video of the subjects to capture their intellectual activities. Compared to the think-aloud protocol, eye movement measurement imposes no training or expensive preparation on the reviewers.

• Quantitative evaluation: Using eye movements allows us to observe the reviewer's reading procedure as quantitative data. Quantitative analysis of eye movements together with review performance data (e.g., the number of defects detected, detection time per defect) reveals the correlation between reading procedure and review performance.
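As a toy illustration of the last point, such a correlation can be computed directly once both an eye-movement metric and a performance measure are available per reviewer. All variable names and numbers below are invented for illustration and are not data from the experiments in this chapter.

```python
# Pearson correlation between a per-reviewer eye-movement metric and
# defect detection time. The data is fabricated purely for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scan_time = [30, 55, 20, 70, 45]          # seconds spent on an initial read-through (invented)
detect_time = [400, 260, 480, 210, 300]   # defect detection time in seconds (invented)

r = pearson(scan_time, detect_time)       # strongly negative for this toy data
```

A strongly negative coefficient here would mean that, in this fabricated data set, reviewers who spent more time on the initial read-through detected the defect faster.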


[Figure 1 contrasts two reading procedures over a small C program (main, functionA, and functionB): review from the top of the source code versus review along the program control flow.]

Figure 1. Reading procedure between lines.


3.1. Measurement Perspectives

We chose two measurement perspectives for a structured analysis of eye movements in software review: (1) eye movements between lines, and (2) eye movements between documents.

3.1.1. Eye Movement between Lines

The first measurement perspective is eye movement between lines in a document. Software documents are not read like ordinary documents such as newspapers and stories. For instance, consider two kinds of software documents: source code and software requirements specifications. Source code has a control flow (branches, loops, function calls, etc.) that defines the execution order of the program statements. The reviewer often reads the code according to the control flow, in order to simulate exactly how the program works. On the other hand, a reviewer who reads the source code without following the control flow (i.e., from the top of the file) requires more review time to understand the program. Figure 1 illustrates these two reading procedures: review along the program control flow, and review from the top of the source code.

The software requirements specification (SRS) is typically structured: a requirement contains several sub-requirements. Each requirement is written in a labeled paragraph, i.e., a set of lines. If a requirement R depends on other requirements R1 and R2, then R refers to R1 and R2 by their labels. Hence, when the reviewer reads the document, he/she frequently jumps from one requirement to another by traversing the labels.

As seen in the above examples, a primary construct of a software document is the statement. Thus, it is reasonable to consider that the reviewer reads the document in units of lines, and the reading procedure between lines in a document therefore affects review performance. In the first experiment, reported in the next section, we evaluate the relationship between review performance and the reading procedure between lines.


3.1.2. Eye Movement between Documents

The other measurement perspective is eye movement between documents. According to Wiegers, a reviewer uses not only the target document but also other relevant documents, such as a high-level document [46]. The reviewer reads the high-level document to confirm that the target document correctly reflects the system requirements. For example, in source code review, reviewers read the source code as well as the requirements specification and the design document to understand the system's structures, functions, and data structures. Figure 2 illustrates the relationship between source code and other documents in source code review.

Source code has several blocks: functions, methods, classes, etc. The reviewer reads all the blocks to understand the program as a whole (e.g., Function A through Function C) and tries to find defects while understanding the program. In addition, the reviewer reads related blocks in the system requirements specification and in the detailed design document (e.g., Requirement A and Design A) to find inconsistencies among the different levels of documents. This activity is a "comparison" rather than an "understanding."

Like other relevant documents, the checklist in CBR is also used during the review [21, 37]. In a review using such relevant documents, the reviewer reads the target document and the relevant documents alternately to follow the checkpoints. In such a multi-document review, the reading procedure between documents affects defect detection performance. In the second experiment, reported in Section 5., we evaluate the relationship between review performance and the reading procedure between documents.

3.2. Related Work

Eye movements have often been used to evaluate human performance, especially in cognitive science. Law et al. [25] analyzed the eye movements of experts and novices in a laparoscopic surgical training environment. The study showed that, compared with novices, experts tend to watch the affected parts more than the tool in their hands. Kasarskis et al. [19] investigated the eye movements of pilots in a landing task using a flight simulator. In this study, novices tended to concentrate on the altimeter more than the experts did, while the experts watched the airspeed indicator.

In the field of software engineering, research exists on the exploitation of eye


[Figure 2 shows the blocks of the source code (Function A, B, C) being compared against the corresponding entries of the detailed design document (Design A, B, C) and the system requirements specification (Requirement A, B, C).]

Figure 2. Source code review with multiple documents.


movements for purposes such as monitoring online debugging processes [42, 44], usability evaluation [6, 31], and human interfaces [36, 47]. Stein and Brennan evaluated the usefulness of eye movement information for supporting the debugging process [42]. They captured the eye movements of experienced developers while the developers detected a defect in source code. The authors then compared the defect detection times of developers who read the source code together with an expert's visualized eye movements against those of developers without the visualization. The results showed that the visualized eye movements of experts accelerate other developers' defect detection.

As far as we know, no research has directly applied eye movements to evaluate the performance of software review, and only a few studies have analyzed software developers' eye movements at all. Crosby et al. [10] and Bednarik et al. [2] analyzed developers' eye movements during program understanding. In these studies, the source code and a visualization of the program's behavior were displayed to the developer. The authors confirmed that eye movements are useful in revealing differences between experts' and novices' program reading behavior. While these studies focused on effective program understanding where executable programs are available, we focus on document review, where related upstream documents are available but executable programs are not.


3.3. Section Summary

This section presented our proposed approach for evaluating the reading procedures of reviewers. Using the eye movements of the reviewer, the reading procedures are evaluated from two measurement perspectives. The following two sections report experiments performed to evaluate eye movements between lines and eye movements between documents.

4. Eye Movement between Lines

This section reports an empirical experiment on code review for evaluating eye movements between lines. Using an environment we built to evaluate reading procedures in software review, 60 review processes were analyzed. As a result, we identified a particular pattern, called the scan, in the subjects' eye movements. Quantitative analysis showed that reviewers who did not spend enough time on the scan tended to take more time to find defects.

4.1. System Requirements


To evaluate reviewers' eye movements, an integrated environment that measures and records the eye movements during software review is necessary. Here, we present five requirements to be satisfied by the system.

Requirement R1: Sampling Gaze Points over a Computer Display. First, the system must be able to capture the reviewer's gaze points over the software documents. Usually, reviewed documents are either shown on a computer display or provided on printed paper. For feasibility, we capture gaze points over a computer display. To precisely locate the gaze points over the documents, the system should sample the coordinates at a resolution fine enough to distinguish normal-size fonts of around 10–20 points.

Requirement R2: Extracting Logical Line Information from Gaze Points. As seen in source code, a primary construct of a software document is the statement. Software documents are structured and often written on a one-statement-per-line basis; thus, the reviewer reads the document in units of lines. The system has to be capable of identifying which line of the document the reviewer is currently looking at. Note that this information must be stored as logical line numbers of the document, independent of the font size or the absolute coordinates at which the lines are currently displayed.

Requirement R3: Identifying Focuses. Even if a gaze point appears on a certain line of the document, it does not necessarily mean that the reviewer is reading that line. That is, the system has to be able to distinguish a focus (i.e., interest) from the reviewer's eye movements. It is reasonable to assume that a fixation over a line reflects the fact that the reviewer is currently reading that line.


[Figure 3 shows the data flow of DRESREM: the eye camera and image processor (eye gaze analyzer) sample gaze points as absolute coordinates; the fixation analyzer turns them into fixation points with dates; and the review platform, consisting of the document viewer, event capturer, fixation point/line converter, time-sequence analyzer, and result viewer, converts them into logical-line-wise eye movement data for further analysis.]

Figure 3. System architecture of DRESREM.


Requirement R4: Recording Time-Sequenced Transitions. The order in which the reviewer reads lines is important information that reflects individual characteristics of the software review. Also, each time the reviewer gazes at a line, it is essential to measure how long the reviewer focuses on that line, since the duration of the focus may indicate the strength of the reviewer's attention to the line. Therefore, the system must record the focused lines as time-sequenced data.

Requirement R5: Supporting Analysis. Preferably, the system should provide tool support to facilitate analysis of the recorded data. In particular, features to play back and visualize the data contribute significantly to efficient analysis. Such a system may also be useful to novice reviewers for subsequent interviews or for educational purposes.

4.2. Implementation

Based on these requirements, we have developed a gaze-based review evaluation environment called DRESREM (Document Review Evaluation System by Recording Eye Movements).

4.2.1. System Architecture

As shown in Fig. 3, DRESREM is composed of three subsystems: (1) an eye gaze analyzer, (2) a fixation analyzer, and (3) a review platform. As a reviewer interacts with these subsystems, DRESREM captures the line-wise eye movements of the reviewer. While a reviewer is reviewing a software document, the eye gaze analyzer captures his/her gaze points over the display. Through image processing, the gaze points are sampled as



Figure 4. Eye gaze analyzer EMR-NC.

absolute coordinates. Next, the fixation analyzer converts the sampled gaze points into fixation points, filtering out gaze points irrelevant to the review analysis. Finally, the review platform derives the logical line numbers from the fixation points and the corresponding date information, and stores the line numbers as time-sequenced data. The review platform also provides interfaces for the reviewers to review a document, and analysis support for the analysts.


In the following subsections, each of the subsystems is explained in more detail.

4.2.2. Eye Gaze Analyzer

To satisfy Requirement R1, the eye gaze analyzer samples the reviewer's eye movements on a computer display. To implement the analyzer, we selected the non-contact eye-gaze tracker EMR-NC, manufactured by nac Image Technology Inc. (http://www.nacinc.jp/). Figure 4 shows the eye gaze analyzer used in the system. The EMR-NC can sample eye movements at up to 30 Hz. The finest resolution of the tracker is 0.3 degrees, which is 5.4 pixels on the screen in our experimental setting, equivalent to 0.25 lines of 20-point letters. This resolution is fine enough to satisfy Requirement R1 (Sampling Gaze Points over a Computer Display).

The EMR-NC consists of an eye camera and an image processor. The system detects images of the reviewer's eye and calculates the position, direction, and angle of the eye; it then calculates the position on the display at which the reviewer is currently looking. Each data sample consists of an absolute coordinate of the gaze point on the screen and the sampling date.

To display the document, we used a 21-inch liquid crystal display (EIZO FlexScan L771) set at 1024x768 resolution with a dot pitch of 0.3893 millimeters. To minimize noise in the data, we prepared a fixed, non-adjustable chair for the reviewers.


4.2.3. Fixation Analyzer

Given fixation criteria and the gaze points sampled by the eye gaze analyzer, the fixation analyzer derives fixation points (as absolute coordinates) and their observation dates. In this chapter, the fixation criterion is an area of 30 pixels in diameter in which the eye mark stays for more than 50 ms. Extracting the fixation points from the gaze points is necessary to satisfy Requirement R3 (Identifying Focuses). To implement the fixation analyzer, we used the existing analysis tool EMR-ANY.exe, an application bundled with the EMR-NC.
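To illustrate this kind of criterion, a dispersion-based fixation filter can be sketched as follows. The sample format and function names are our own assumptions for illustration; this sketch does not mirror the bundled EMR-ANY implementation.

```python
import math

def detect_fixations(samples, diameter=30, min_ms=50):
    """Collapse raw gaze samples into fixation points.

    samples: list of (timestamp_ms, x, y) tuples in time order.
    A fixation is a run of samples staying within `diameter` pixels of the
    running centroid whose duration exceeds `min_ms` milliseconds.
    Returns a list of (start_ms, centroid_x, centroid_y).
    """
    fixations = []
    cluster = []
    for t, x, y in samples:
        if cluster:
            cx = sum(p[1] for p in cluster) / len(cluster)
            cy = sum(p[2] for p in cluster) / len(cluster)
            if math.hypot(x - cx, y - cy) > diameter / 2:
                # The run is broken: keep it only if it lasted long enough.
                if cluster[-1][0] - cluster[0][0] > min_ms:
                    fixations.append((cluster[0][0], cx, cy))
                cluster = []
        cluster.append((t, x, y))
    if len(cluster) > 1 and cluster[-1][0] - cluster[0][0] > min_ms:
        cx = sum(p[1] for p in cluster) / len(cluster)
        cy = sum(p[2] for p in cluster) / len(cluster)
        fixations.append((cluster[0][0], cx, cy))
    return fixations
```

With 30 Hz sampling (one sample about every 33 ms), a run of three nearby samples already exceeds the 50 ms threshold and becomes a fixation, while a shorter excursion is filtered out as noise.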


4.2.4. Review Platform

The review platform is the core of DRESREM; it handles the various tasks specific to software review activities. We implemented the platform in the Java language with SWT (the Standard Widget Toolkit), comprising about 4,000 lines of code.

The most technically challenging part is satisfying Requirement R2 (Extracting Logical Line Information from Gaze Points). To judge whether the reviewer is looking at a line of the document, we use the fixation points derived by the fixation analyzer. We define the line on which a fixation point overlaps as the fixation line; the goal is to capture the line numbers of the fixation lines. Note that the line numbers must be captured as logical line numbers. The logical line number is a sequence number attached to every line of the document, and it is independent of the font size or the absolute position at which the line is currently displayed. Hence, we need a mechanism to derive logical line numbers from fixation points captured as absolute coordinates. For this, we carefully track the correspondence between absolute coordinates of points on the PC display and the lines of the documents displayed at those coordinates. We refer to this correspondence as the point/line correspondence.

As seen in Fig. 3, the review platform consists of the following five components.

Document Viewer. The document viewer shows the software document on the PC display, with which the reviewer reads the document. As shown in Fig. 5, the viewer has a slider bar to scroll up and down the document. By default, the viewer displays 25 lines of the document simultaneously in a 20-point font. The viewer reports window information (such as window size, font size, position, and scroll pitch) to the fixation point/line converter; this information is necessary to maintain a consistent point/line correspondence.
Event Capturer. As the reviewer interacts with the document viewer, he/she may scroll, move, or resize the viewer's window. These window events change the absolute position of the document on the PC display, thus modifying the point/line correspondence. To keep the correspondence consistent, the event capturer monitors all events issued in the document viewer. When an event occurs, the event capturer records it and forwards it to the fixation point/line converter.




Figure 5. Example of document viewer.

Fixation Point/Line Converter. The fixation point/line converter derives the logical line numbers of fixation lines (referred to as fixation line numbers) from the given fixation points. Let pa = (xa, ya) be the absolute coordinate of a fixation point on the PC display. First, the converter converts pa into a relative coordinate pr within the document viewer, based on the current window position pw = (xw, yw) of the viewer, i.e., pr = (xr, yr) = pa − pw = (xa − xw, ya − yw). Then, taking pr, the window height H, the window width W, the font size F, and the line pitch L into account, the converter computes a fixation line number l_pr. Specifically, l_pr is derived by the following computation:

    l_pr = ⌊yr / (F + L)⌋ + 1        if (0 ≤ xr ≤ W) and (0 ≤ yr ≤ H),
    l_pr = 0 (OUT OF DOCUMENT)       otherwise.

Thus, the point/line correspondence is constructed as the pair (pa, l_pr). Note that l_pr changes with user events (e.g., window moves or scrolling up/down); therefore, the converter updates l_pr upon receiving every event forwarded from the event capturer. For instance, suppose the reviewer moves the document viewer to a new position pw′. The converter is then notified of a window-move event. Upon receiving the event, the converter re-calculates pr as pa − pw′ and updates l_pr.
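The computation above can be sketched directly. The function below is illustrative, not taken from the DRESREM source; the parameter names follow the symbols in the text.

```python
def fixation_line(pa, pw, W, H, F, L):
    """Map an absolute fixation point to a logical fixation line number.

    pa: (xa, ya) absolute screen coordinate of the fixation point
    pw: (xw, yw) current window position of the document viewer
    W, H: window width and height in pixels
    F, L: font size and line pitch in pixels
    Returns 0 when the point falls outside the document window.
    """
    xr, yr = pa[0] - pw[0], pa[1] - pw[1]   # relative coordinate pr
    if 0 <= xr <= W and 0 <= yr <= H:
        return yr // (F + L) + 1            # floor division implements the floor in the formula
    return 0                                # OUT OF DOCUMENT

# A fixation at (140, 95) in a viewer positioned at (100, 50),
# with a 20-point font and a 10-pixel line pitch:
fixation_line((140, 95), (100, 50), W=800, H=600, F=20, L=10)  # -> 2
```

When a window-move or scroll event arrives, re-invoking this computation with the new window position pw′ corresponds to the update of l_pr described above.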



Figure 6. Result viewer.


Thus, for every fixation point, the fixation point/line converter derives the corresponding fixation line number, which satisfies Requirement R2 (Extracting Logical Line Information from Gaze Points).

Time-Sequence Analyzer. The time-sequence analyzer summarizes the fixation line numbers as time-sequenced data to satisfy Requirement R4 (Recording Time-Sequenced Transitions). Using the date information sampled by the fixation analyzer, the time-sequence analyzer sorts the fixation line numbers by date, representing the order in which the reviewer reads the lines of the document. It also aggregates consecutive appearances of the same fixation line number into a single entry with a duration; the duration for a fixation line then reflects the strength of the reviewer's interest in that line.

Result Viewer. The result viewer visualizes the line-wise eye movements as a horizontal bar chart based on the time-sequenced fixation line numbers. Figure 6 shows a snapshot of the result viewer. The left side of the window shows the document reviewed by the reviewer; on the right side, the sequential eye movements of the reviewer are drawn as a bar chart, in which the length of each bar represents the duration of the fixation line. The result viewer can play back the eye movements: using the start/stop buttons and a slider bar placed under the viewer, the analyst can control the replay position and speed. During playback, the time-sequenced transition of fixation lines is shown by highlighting the current line and emphasizing its bar. Moreover, the result viewer can superimpose the recorded gaze points and fixation points onto the document viewer, which helps the analyst examine more detailed eye movements over the document. Thus, the result viewer can be used extensively for subsequent analysis of the recorded data, which fulfills Requirement R5 (Supporting Analysis).
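The aggregation step of the time-sequence analyzer can be sketched as follows; the input format (already sorted by date, one duration per fixation) and the function name are our own assumptions for illustration.

```python
from itertools import groupby

def aggregate(fixation_lines):
    """Merge consecutive fixations on the same logical line.

    fixation_lines: list of (line_number, duration_ms), sorted by date.
    Returns a list of (line_number, total_duration_ms), preserving order.
    """
    return [
        (line, sum(d for _, d in group))
        for line, group in groupby(fixation_lines, key=lambda f: f[0])
    ]

# Three consecutive fixations on line 5, then one on line 9, then line 5 again:
aggregate([(5, 60), (5, 120), (5, 80), (9, 200), (5, 90)])
# -> [(5, 260), (9, 200), (5, 90)]
```

Note that a later revisit to line 5 stays a separate entry, so the reading order is preserved while repeated dwelling on a line is summed into one duration.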


Hidetake Uwano

4.3. Experiment

To demonstrate the effectiveness of DRESREM, we conducted a source code review experiment.

4.3.1. Overview

Source code review is a popular software review activity in which each reviewer reads the source code of a program and finds bugs without executing the code. The purpose of this experiment is to observe how eye movements characterize the reviewer's performance in source code review. In the experiment, we instructed individual subjects to review the source code of small-scale programs, each of which contains a single defect. Based on a given specification of the program, each subject tried to find the defect as quickly as possible. The performance of each reviewer was measured by the time taken until the injected defect was successfully detected (we call this time the defect detection time). During the experiment, the eye movements of the individual subjects were recorded by DRESREM. Using the recorded data, we investigated the correlation between review performance and eye movements.
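A correlation analysis of the kind described can be sketched as follows. This is a hypothetical illustration only: the per-subject numbers are invented, the chosen eye-movement metric (number of fixation-line transitions) is an assumption, and a rank correlation without tie handling is used for simplicity.

```python
def spearman(xs, ys):
    """Spearman rank correlation between two equal-length sequences
    (simplified: assumes no tied values)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical per-subject data: defect detection time in seconds and
# the number of fixation-line transitions recorded during the review.
detection_times = [95, 140, 60, 180, 120]
line_transitions = [40, 55, 30, 70, 50]
print(spearman(detection_times, line_transitions))  # 1.0 for this toy data
```

A coefficient near +1 or -1 would suggest that the eye-movement metric tracks review performance; real data would of course be noisier than this toy example.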


4.3.2. Experiment Settings

Five graduate students participated in the experiment as reviewers. The subjects had 3 or 4 years of programming experience, and each had experienced source code review at least once before the experiment. We prepared six small-scale programs written in the C language (12 to 23 lines of source code). To measure performance purely through the eye movements, the programs contain no comment lines. For each program, we prepared a specification that is compact and easy enough for the reviewers to understand and memorize. Into each program a single logical defect was intentionally injected, that is, an error of program logic but not of program syntax. Table 1 summarizes the programs prepared for the experiment.

We then instructed individual subjects to review the six programs with DRESREM. The review technique was the ad-hoc review (AHR, see Sect. 2.2.). The task for each subject to review each program consisted of the following five steps.

1. Calibrate DRESREM so that the eye movements are captured correctly.
2. Explain the specification of the program to the subject verbally. Explain the fact that there exists a single defect somewhere in the program.
3. Signal the subject to start the code review to find the defect; start the capture of eye movements and code scrolling.
4. Suspend the review task when the subject says he/she has found the defect. Then, ask the subject to explain the defect verbally.
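To make the notion of an injected logical defect concrete, the following sketch shows a primality check whose final verdict is reversed, in the spirit of the IsPrime program's defect. This is a hypothetical reconstruction: the experiment's programs were written in C and their actual code is not shown here, so this Python version is illustrative only.

```python
def is_prime_defective(n):
    """Primality check with an injected LOGICAL defect: the program runs
    without any syntax error, but the verdict logic is reversed."""
    if n < 2:
        return False
    for i in range(2, n):
        if n % i == 0:
            return True   # DEFECT: a divisor was found, so this should be False
    return False          # DEFECT: no divisor was found, so this should be True

print(is_prime_defective(7))   # wrongly reports False for a prime
print(is_prime_defective(9))   # wrongly reports True for a composite
```

A reviewer must spot this kind of defect by reading alone: the code compiles and executes, so only a careful comparison against the specification reveals the reversed verdict.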


Characterizing Eye Movements for Performance Evaluation of Software Review 135


Table 1. Programs reviewed in the experiment.

Program (LOC): IsPrime (18), Accumulate (20), Sum-5 (12), Average-5 (16), Average-any (22), Swap (23)

Specification:
- IsPrime: The user inputs an integer n. The program returns a verdict whether n is a prime number or not.
- Accumulate: The user inputs a nonnegative integer n. The program returns the sum of all integers from 1 to n.
- Sum-5: The user inputs five integers. The program outputs the sum of these integers.
- Average-5: The user inputs five integers. The program outputs the average of these.
- Average-any: The user inputs an arbitrary number of integers (up to 255) until zero is given. The program outputs the average of the given numbers.
- Swap: The user inputs two integers n1, n2. The program swaps the values of n1 and n2 using function swap(), and outputs them.

Injected Defect:
- IsPrime: Logic in a conditional expression is wrongly reversed, yielding an opposite verdict.
- Accumulate: A loop condition is mistaken. The condition must be (i