
Table of Contents
ADVANCES IN THE MODULARITY OF VISION
Copyright
Preface
Contents
Introduction
HISTORICAL CONTEXT
CURRENT DEVELOPMENTS
Information Processing in the Primate Visual System
RETINAL PROCESSING
VISUAL CORTEX
HIERARCHICAL PROCESSING
PROCESSING STREAMS
NEURONAL RESPONSE PROPERTIES
COMPUTATIONAL APPROACHES
REFERENCES
Areas and Modules in Visual Cortex
PROBLEMS IN SUBDIVIDING CORTEX
ASSIGNING FUNCTIONS TO MODULES
VISUAL CORTEX ORGANIZATION IN PRIMATES
CONCLUSIONS
REFERENCES
Visual Coding of Features and Objects: Some Evidence from Behavioral Studies
EARLY GROUPING OF PERCEPTUAL ELEMENTS
EXPECTANCY AND ATTENTION
LOCALIZATION
ROLE OF ATTENTION
"WHAT WITHOUT WHERE"
THE ABSENCE OF A FEATURE
FEATURE ANALYSIS AND THE ASYMMETRY OF CODING
PERCEPTION OF OBJECTS
REFERENCES


About this PDF file: This new digital representation of the original work has been recomposed from XML files created from the original paper book, not from the original typesetting files. Page breaks are true to the original; line lengths, word breaks, heading styles, and other typesetting-specific formatting, however, cannot be retained, and some typographic errors may have been accidentally inserted. Please use the print version of this publication as the authoritative version for attribution.


ADVANCES IN THE MODULARITY OF VISION

Selections From a Symposium on Frontiers of Visual Science

Committee on Vision
Commission on Behavioral and Social Sciences and Education
National Research Council

National Academy Press
Washington, D.C.
1990


NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance. This report has been reviewed by a group other than the authors according to procedures approved by a Report Review Committee consisting of members of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine.

The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. Upon the authority of the charter granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Frank Press is president of the National Academy of Sciences.

The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. Robert M. White is president of the National Academy of Engineering.
The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Samuel O. Thier is president of the Institute of Medicine.

The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy's purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Frank Press and Dr. Robert M. White are chairman and vice chairman, respectively, of the National Research Council.

Additional copies of this report are available from:
Committee on Vision
2101 Constitution Avenue N.W.
Washington, D.C. 20418

Printed in the United States of America


COMMITTEE ON VISION

SUZANNE MCKEE (Chair), Smith-Kettlewell Eye Research Foundation, San Francisco
LYNN COOPER, Department of Psychology, Columbia University
RUSSELL LEE DEVALOIS (NAS), Department of Psychology, University of California, Berkeley
MERTON CLYDE FLOM, College of Optometry, University of Houston
DAVID L. GUYTON, Wilmer Ophthalmological Institute, Johns Hopkins University
DONALD HOOD, Department of Psychology, Columbia University
JAMES LACKNER, Ashton Graybiel Spatial Orientation Laboratory, Brandeis University
GORDON E. LEGGE, Department of Psychology, University of Minnesota
PETER LENNIE, Center for Visual Sciences, University of Rochester
LOUIS SILVERSTEIN, Honeywell, Inc., Phoenix, Ariz.
KENT A. STEVENS, Department of Computer and Information Science, University of Oregon
ANDREW B. WATSON, NASA Ames Research Center, Moffett Field, Calif.

PAMELA EBERT FLATTAU, Study Director
JOANNE ALBANES, Research Assistant
CAROL METCALF, Administrative Secretary
ROSE WHITE, Secretary


SYMPOSIUM PARTICIPANTS

ANTHONY J. ADAMS, University of California, Berkeley
JOHN ALLMAN, California Institute of Technology, Pasadena
DANA BALLARD, University of Rochester
RANDOLPH BLAKE, Vanderbilt University
DAVID C. VAN ESSEN, California Institute of Technology, Pasadena
JON KAAS, Vanderbilt University
MORTIMER MISHKIN, National Institute of Mental Health, Bethesda
ANNE TREISMAN, University of California, Berkeley
ROBERT SHAPLEY, New York University
ROBERT WURTZ, National Eye Institute, Bethesda


Preface

The Committee on Vision is a standing committee of the National Research Council's Commission on Behavioral and Social Sciences and Education. The committee provides analysis and advice on scientific issues and applied problems involving vision. It also attempts to stimulate the development of visual science and to provide a forum in which basic and applied scientists, engineers, and clinicians can interact. Working groups of the committee study questions that may involve engineering and equipment, physiological and physical optics, neurophysiology, psychophysics, perception, environmental effects on vision, and visual disorders.

From time to time, the committee sponsors public meetings that feature papers on advances in vision research. The meetings are designed to aid the newcomer in reaching a preliminary understanding of the utility of the latest approaches to vision research and to challenge more experienced scientists, engineers, and clinicians alike to consider the appropriate role for these new models and methods in the advancement of vision research and its application to practical problems.

In March 1987 the committee sponsored a Symposium on Frontiers of Visual Science at the National Academy of Sciences in Washington, D.C. The committee brought together seven leading investigators in vision research whose work embodies the integration of some of these newer models and methods. Participants discussed how converging lines of evidence indicate that the brain contains multiple neural representations (i.e., maps) of visual space, different maps being devoted to the analysis of different aspects of the visual scene. This volume provides a selection of papers from that meeting.

Funds for the symposium were provided from the general budget of the committee, which receives support from the departments of the Army,


the Navy, and the Air Force; the National Eye Institute; the National Institute on Aging; the National Aeronautics and Space Administration; the National Science Foundation; the Department of Veterans Affairs; the American Academy of Ophthalmology; the American Academy of Optometry; the American Optometric Association; and the Society for Information Display.

The committee gratefully acknowledges the efforts of the vision scientists who took time from their demanding schedules to participate in the symposium. The committee also thanks its staff officer, Pamela Ebert Flattau, for organizing the meeting and preparing the final report. Production of the report was effectively assisted by Carol Metcalf of the committee staff. To all these, we express our gratitude.

Suzanne McKee, Chair
Committee on Vision


Contents

Introduction, Randolph Blake and Robert Shapley

Information Processing in the Primate Visual System, David C. Van Essen

Areas and Modules in Visual Cortex, Jon H. Kaas

Visual Coding of Features and Objects: Some Evidence from Behavioral Studies, Anne Treisman


Introduction

RANDOLPH BLAKE AND ROBERT SHAPLEY

This conference on the modularity of vision is devoted to contemporary developments surrounding one of the oldest problems in science and medicine: the localization of function in the brain. This volume contains a collection of transcribed presentations by scientists from a diverse array of disciplines—anatomy, neurophysiology, and psychology. Taken together, these presentations critically explore the idea that the processing of visual information is carried out in different visual areas of the brain. This idea has been fostered, in part, by exciting developments in the technology for studying the brain, as well as by new ways of thinking about the brain as a sophisticated computing device.

HISTORICAL CONTEXT

The idea that the brain can be subdivided into regions with specialized functions is an old one. As early as the second century A.D., the Egyptians theorized that the brain was divided into regions, each region having a particular function—the emotions, for example, resided in the anterior ventricle and memories in the posterior ventricle. In the seventeenth century, the anatomist Thomas Willis (whose name is associated with the circle of Willis) realized that the brain tissue, not the fluid-filled spaces within the tissue, was the functional part of the brain. He localized memory in the convolutions of the cortex, imagination in the corpus callosum, and emotions in the base of the cerebrum. Descartes, of course, went so far as to localize the soul in the pineal gland. All these early guesses at functional localization were based on philosophical speculation, not empirical proof. The nineteenth century anatomist


Franz Joseph Gall provided the first empirically based theory of localization. Gall divided the brain into functional units based on what he believed was the brain's topography, as evidenced by the bumps on the outside of the head. His method, known as phrenology, postulated that the bumps on the exterior surface of the skull corresponded to regions of the brain that were particularly well developed. He assigned specific cognitive processes and even personality traits to each area. The scientific establishment questioned Gall's methods, pointing out that the exterior of the skull is a poor representation of the structure of the brain within. Phrenology was wrong in its details, but it nonetheless played a role in advancing the notion that the mind is divisible into faculties spatially distributed throughout the brain. Gall's reputation as a distinguished anatomist meant that his beliefs were not to be dismissed lightly.

Following Gall, the question of localization of function within the brain drew more and more attention. Gradually, evidence accumulated in favor of the idea that the brain consists of an assembly of discrete, specialized machinery designed to accomplish particular tasks: perceptual, cognitive, and motor. The evidence took several forms. Lesion experiments by David Ferrier in England demonstrated specific behavioral deficits produced by carefully placed, localized lesions. At more or less the same time, anatomists were beginning to subdivide the brain into histologically distinct areas; probably the most famous of these brain cartographers was Korbinian Brodmann, whose areas are still part of the neuroanatomical landscape. Interesting evidence was also cropping up in the hospital clinics. Neurologists were beginning to describe syndromes wherein localized damage to the human brain produced rather specific functional losses.
Perhaps the most famous case is the one described by Paul Broca, the neurologist who localized speech within the frontal lobe of the left hemisphere of the brain. By the early twentieth century, the question of broad functional segregation was settled: there was no doubt that the brain was divisible into discrete sensory and motor areas, and by the 1940s it was realized that there existed even finer subdivisions within these two broad functional categories. However, it wasn't until the late 1950s that we began to get a closer glimpse of the actual neural machinery housed within these specialized sensory areas.

The Nobel Prize-winning work of Torsten Wiesel and David Hubel gave entirely new meaning to the notion of functional specificity, particularly as it applied to the clustering of neurons with similar receptive field properties. Their pioneering experiments set the stage for a more refined examination of visual information processing. From that work, we now know that in higher mammals Brodmann's areas 18 and 19 in fact consist of multiple representations of the visual field, varying in their size, internal organization, and pattern of connections. In very recent years physiologists have


begun elucidating functional differences between these areas, and we're now seeing complex circuit diagrams in the literature. Some believe that the field has matured to the point at which it is reasonable to speculate intelligently about the specialized roles played by various extrastriate areas in visual perception and in visually guided behavior. It is some of those speculations that we'll be hearing about today. But interestingly, there's more to the story than neuroanatomy and neurophysiology: there have also been important developments in the fields of computer science and psychophysics. Those developments likewise speak to the issue of the functional role of multiple visual maps in the brain.

CURRENT DEVELOPMENTS

This symposium on the modularity of vision is mainly about the visual areas of the cerebral cortex and the role of the cerebral cortex in visual information processing. To those of us who work in what is called “early vision,” that is, stages of visual processing up to and including the primary visual cortex or striate cortex, this symposium is mainly concerned with “late vision,” that is, areas of the cerebral cortex that are involved in fairly high levels of visual image processing. The work is therefore very closely related to perception, and also to algorithms of high-level image processing by machines.

One of the major issues in late vision, and indeed in the entire field of vision research, is the nature of the processing of the visual signal by the brain: how is it done, where is it done, and so on. A more detailed question, one that is very much a current focus of research, is whether the visual system processes visual information in a serial or a parallel manner. The idea of a strict serial, or hierarchical, mode of visual signal processing was most explicitly stated in the early papers of Hubel and Wiesel, and it has remained an idea of contemporary appeal since that time.
The idea, in its simplest form, is that the visual properties of neurons—their receptive fields—are built up at each level by convergence of neural information from the previous stage of the visual pathway. On this view, the receptive fields of neurons in the primary striate cortex are composed of sums of receptive fields in the lateral geniculate nucleus, and the receptive fields of neurons in the accessory visual areas—the extrastriate cortex—are in turn built up by summing the activity of groups of neurons in V1, the primary area, and so on in series from V2 to V3 and V4. The name serial processing refers to this series of receptive-field transformations, one stage after the other. The concept of parallel processing, in contrast, is that there are separate and independent lines of signal flow from the retina to the brain,


and from one area to another within the brain. The receptive field properties of neurons in the cerebral cortex are determined not only by what stage or level of processing they occupy, but also by which stream of signal flow, beginning at the periphery, they receive input from. Perhaps the boldest expression of this hypothesis of parallel processing was presented initially by Jonathan Stone and Bogdan Dreher in their work on the visual receptive field properties of neurons in primary visual cortex.

The frontier of vision science represented here today has revealed that neither an extreme serial nor an extreme parallel view is sufficient to account for the way our brains are wired. There is evidence of segregation of parallel channels, maintained all the way from the eye to the cortex. There is also clear evidence for hierarchical serial processing within these parallel streams. The idea that there is strict serial or strict parallel processing seems to be ruled out: there is evidence for both.

Another issue addressed in this symposium is the existence and significance of multiple maps of the visual world inside our heads. Some rationale for the existence of these multiple maps comes from recent work on computational vision. Multiple maps of the outside world are found not only in vision; they are also found in the auditory system. They signify the importance of having multiple models of the world inside our heads for subsequent processing, in order to perform significant tasks: in the case of vision, calculating, for example, the brightness, color, and shapes of objects. The existence of spatial maps also shows us that the analysis of visual space is intrinsically done in parallel, by having available to the brain at any one time activity representing all of visual space.
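The serial and parallel views contrasted above can be caricatured in a few lines of code. This is purely an illustrative toy, not a model from the symposium; the stage and stream names are assumptions made for the sketch. Serial processing composes pooling stages so that receptive fields grow at each level, while parallel processing runs independent analyses on the same input.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))  # toy "retinal" input

# Serial (hierarchical) view: each stage pools the previous stage's
# outputs, so effective receptive fields grow level by level.
def pool(x):
    """Sum non-overlapping 2x2 neighborhoods: crude convergence onto the next stage."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

lgn = pool(image)   # 4x4: small receptive fields
v1 = pool(lgn)      # 2x2: larger receptive fields
v2 = pool(v1)       # 1x1: a single unit "sees" the whole image

# Parallel view: separate, independent streams analyze the same input.
# The stream labels are illustrative only, not claims about magno/parvo.
streams = {
    "mean_luminance": image.mean(),
    "local_contrast": image.std(),
}
```

Because `pool` only sums, the single top-level unit's response equals the total activity of the input, which is the caricature of receptive fields "built up by convergence"; the point of the passage above is that real cortex shows both kinds of organization at once.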


Information Processing in the Primate Visual System

DAVID C. VAN ESSEN

It is useful at the outset to recall how impressive our visual system is in analyzing and integrating information. A simple example can be found in Figure 1A. This is a black and white image that starts out as millions of bits of information about gray-level intensities that are encoded within the retina. What happens, literally in a split second, is that the brain processes that information and yields a rich and vivid set of perceptions. In particular, we perceive a human face. Moreover, by automatically comparing this face with the enormous number of images viewed over a lifetime, we recognize it as a unique individual—Albert Einstein. What is equally impressive is that a vastly degraded and simplified image (Figure 1B) is also immediately recognized as Einstein. This is something that no computer vision system can as yet come even remotely close to achieving—not because computers inherently lack the computational power to process the image, but because they have not been programmed with the right strategies for processing this information.

The issue for this symposium, then, is the specific strategies used by the visual system to carry out such elegant analyses of the immense variety of images that we confront during normal vision. My remarks concentrate on the macaque monkey. Monkeys have been chosen because their sense of vision is very similar to that of humans in a variety of important and basic ways.

RETINAL PROCESSING

Images formed on the retina are picked up by approximately 100 million photoreceptors in both macaque monkeys and humans. About 95 percent of those are rods, which are used for night vision. In daytime vision, about 5 million cones feed information onto approximately 1 million


FIGURE 1A A photograph of a unique individual—Albert Einstein.


retinal ganglion cells. Thus, there is already a great deal of convergence within the retina. That convergence is handled very elegantly by devoting a high density of cones and retinal ganglion cells to the fovea. In the center of the fovea there are 10,000 or more ganglion cells subserving each square degree of the visual field. Out in the periphery of the retina, however, the image is analyzed at a much coarser level—on the order of just a handful of ganglion cells per square degree.

Another important point about the output from the retina is that retinal ganglion cells can be subdivided into major cell classes. In primates, the basic dichotomy is between approximately 10 percent of the cells, which are very large neurons with large dendritic trees, termed the magnocellular population, and the majority (90 percent) of the ganglion cells, which are smaller and are termed the parvocellular subtype. It is the parvocellular neurons that send high-acuity information, including information about color. Parvocellular neurons are generally associated with sustained responses to illumination. The magnocellular system, by contrast, carries little chromatic information and responds transiently to illumination. These two pathways head out from the retina and relay through separate layers of the lateral geniculate nucleus (LGN). The magnocellular neurons of the retina terminate in the ventral-most pair of layers, while the majority of parvocellular neurons terminate in the uppermost layers. This dichotomy is continued in the relay up to the primary visual cortex—the striate cortex.

VISUAL CORTEX

It has been known for more than a century that the primary visual area, also known as striate cortex, V1, and area 17, is easily distinguishable from neighboring areas by its characteristic structure. It receives the direct inputs from the LGN, and it contains a very precise and orderly representation of the opposite half of the visual field.
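The foveal oversampling described for the retina carries into this orderly V1 map as cortical magnification: each degree of visual field near the fovea claims far more cortex than a degree in the periphery. A common idealization, the complex-logarithm map usually attributed to Schwartz (not taken from this chapter, and with an arbitrary foveal parameter `a`), makes the point in a few lines:

```python
import math

A = 0.5  # hypothetical foveal offset in degrees; real estimates vary

def cortical_distance(ecc_deg, a=A):
    """Idealized 1-D map: cortical distance grows as log(eccentricity + a)."""
    return math.log(ecc_deg + a) - math.log(a)

def magnification(ecc_deg, a=A):
    """Cortex-per-degree (arbitrary units) falls off as 1/(eccentricity + a)."""
    return 1.0 / (ecc_deg + a)

# A patch at 0.5 deg eccentricity gets roughly 20x more cortex per
# degree than a patch at 20 deg, echoing the ganglion-cell gradient.
ratio = magnification(0.5) / magnification(20.0)
```

The logarithmic form is just one convenient fit; what matters for the argument here is only that sampling density, set in the retina, is preserved in the layout of the cortical map.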
In addition, there is a much larger belt of tissue, the extrastriate visual cortex, much of which is buried in one or another of the assorted folds of the cortex. At one time, it was thought that the visual cortex occupied only the occipital lobe. From studies done in a number of laboratories, however, it is now clear that visual cortex extends well down into the temporal lobe (the inferior temporal region) and well up into the posterior parietal region. So, altogether, more than half of the macaque's cerebral cortex is largely or exclusively visual in function.

How is this belt of tissue organized? Classical anatomists emphasized that there was just a small number of subdivisions associated


FIGURE 1B A line drawing that is much reduced in overall information content but is still easily recognizable as Einstein. Drawing by R.A. Eatock.


with visual processing in the cortex. It is now clear that this was a vast oversimplification. There are not merely a few, but rather a large number of distinct visual areas that have been identified in various laboratories over the past two decades (Van Essen, 1985).

FIGURE 2 Visual areas in the cerebral cortex of the macaque monkey. The location of different areas are indicated on a drawing of the right hemisphere (upper left) and on an unfolded 2-dimensional cortical map (center). Areas that have been particularly well-studied are shown in stippled: area V1, V2, V4, the middle temporal area (MT), the inferotemporal complex (IT), and the posterior parietal complex (PP). Source: Van Essen and Anderson (1990) In a two-dimensional unfolded map of the cerebral cortex in the right hemisphere of a macaque monkey, one can see more than two dozen different visual areas occupying the entire posterior (left) half of the hemisphere. For simplicity, only a few of these areas (the stippled ones) have been labeled in Figure 2 . Many of these are well-defined areas that nearly all laboratories agree on, although not everyone uses exactly the same terminology. Although substantial number of regions are less well-defined, there is reasonable evidence that they represent distinct subdivisions.


Delineating different areas of the cortex has turned out to be a major undertaking, in part because the criteria for identifying the different subdivisions vary considerably. In general, visual areas in the cerebral cortex have been identified by a combination of criteria, the most important of which is that each area has a distinct pattern of inputs from other cortical areas and outputs to other target areas. Most of these areas contain maps of the contralateral half of the visual field, which is accordingly represented over and over again; the maps are themselves an important part of the identification process. There is a lot of individual variability, however, in the detailed organization of these areas from one animal to the next. Although this variability is of interest in its own right, it also contributes to the difficulty of working out the arrangement of different areas.

HIERARCHICAL PROCESSING

Anatomical connections can also be used to assess the way in which information flows from one place to another in the cortex. Studies in a number of laboratories have shown that connections within the visual cortex are nearly always reciprocal in nature: if area A projects to area B, then there is a reciprocal connection from B back to A. In the great majority of such cases, these connections are asymmetrical. Several research groups have noted this asymmetry and have suggested that one direction can be associated with forward or ascending information flow, whereas flow in the opposite direction can be viewed as feedback. Based on these anatomical criteria, John Maunsell and I (Van Essen and Maunsell, 1983) suggested several years ago that the overall collection of visual areas could be grouped into an anatomically based hierarchical scheme that starts with area V1 and then goes through a half dozen separate levels until one reaches higher-level processing centers in the temporal, frontal, and parietal lobes.
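The ranking idea just described can be illustrated with a small computation: given a list of forward (ascending) connections, each area's hierarchical level is one more than the length of the longest forward chain feeding it. The edge list and function below are a toy sketch for illustration, not the published wiring data.

```python
# Toy sketch: assigning hierarchical levels to visual areas from forward
# ("ascending") connections, in the spirit of Van Essen and Maunsell (1983).
# The edge list is an illustrative abstraction, not the published dataset.
from collections import defaultdict

forward = [("RGC", "LGN"), ("LGN", "V1"), ("V1", "V2"), ("V1", "MT"),
           ("V2", "MT"), ("V2", "V4"), ("MT", "PP"), ("V4", "IT")]

def hierarchy_levels(edges):
    """Level of each area = 1 + length of the longest forward chain feeding it."""
    preds = defaultdict(list)
    for src, dst in edges:
        preds[dst].append(src)
    levels = {}
    def level(area):
        if area not in levels:  # memoize; assumes the forward graph is acyclic
            levels[area] = 1 + max((level(p) for p in preds[area]), default=0)
        return levels[area]
    for src, dst in edges:
        level(src)
        level(dst)
    return levels

print(hierarchy_levels(forward))
# RGC and LGN occupy the two lowest levels, V1 the third, with MT and V4
# above V2 and the parietal/temporal complexes (PP, IT) at the top.
```

Note that because most real connections are reciprocal, this computation depends on first classifying each pathway as forward or feedback from its laminar pattern; only the forward edges enter the ordering.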
An important question is how high into cortical processing centers this scheme remains valid. There is now enough evidence to trace the succession of processing centers all the way from the retina up through visual areas in the occipital, temporal, and parietal lobes, and out of visual cortex proper, in fact out of neocortex and into the hippocampus. In our current version, Dan Felleman and I have proposed a dozen stages of hierarchical processing in the cortex, plus an additional pair of stages represented by the retina and the LGN. This gives us a sense of the degree to which information passes through successive hierarchical levels, and also of the rich degree of parallelism and reciprocity, in terms of multiple outputs from any one area to targets at both higher and lower levels. Thus, we can think of the visual system as being divided into a large number of discrete, highly interconnected modules: individual cortical areas containing anywhere from just a few million cells in small areas like MT to hundreds of millions of cells in areas like V1.

PROCESSING STREAMS

Another aspect of modularity comes from looking in more detail at the way in which some visual areas, in particular areas V1 and V2, can be broken up into discrete compartments. (V1 is the largest of all visual areas; V2 is nearly as large and adjoins V1.) When the cortex is sliced parallel to the cortical surface and stained for the mitochondrial enzyme cytochrome oxidase, a distinctive pattern is revealed. This pattern was first discovered for V1 by Margaret Wong-Riley (Carroll and Wong-Riley, 1984) and for V2 by Roger Tootell and colleagues (Tootell et al., 1983). Within V1, there is a remarkable arrangement of little patches, or so-called blobs, particularly in the superficial layers of the cortex, that stain densely for cytochrome oxidase (Livingstone and Hubel, 1984). These patches are separated by so-called interblobs, which stain less densely for the same enzyme. In V2, a rather different configuration is evident: stripes oriented orthogonal to the boundary between the two areas. The darkly stained regions have a different connectional pattern than the more lightly stained regions, and both of these differ from a third compartment of pale interstripes, all within area V2. Not only are there distinct patterns of connections between V1 and V2 associated with this compartmental organization, but there are also distinctive connectional patterns between the compartments in V2 and higher-order targets, in particular areas MT and V4 (Figure 3). Evidence for these connections comes from experiments done in England (Shipp and Zeki, 1985) and in my laboratory (DeYoe and Van Essen, 1985, 1988).
In experiments in which fluorescent tracers were injected into the target areas MT and V4, we were able to identify the cells projecting to MT; these were located primarily in the thick stripes, with a much smaller number in the thin stripes as well. We also identified cells projecting to area V4; these were concentrated in both the thin stripe and the interstripe regions. There appears to be a dichotomy in the retinal and geniculate organization as well. The magnocellular and parvocellular subdivisions project to separate portions of area V1. The magnocellular system projects to its own subdivision of layer 4C, which in turn projects to layer 4B; that is one discrete compartment. The parvocellular system, perhaps including an additional subset of so-called interlaminar cells in the LGN, projects indirectly to the blobs and the interblobs of the superficial layers. This tripartite arrangement is preserved by the projections to the tripartite scheme in area V2: interblobs to interstripes, blobs to thin stripes, and layer 4B to thick stripes. There is further segregation of these streams, with the thick stripes as well as layer 4B projecting to area MT, and with V4 receiving inputs not just from one stream but from both of these together.

FIGURE 3 Processing streams and hierarchical organization of the primate visual system. At the two lowest levels, retinal ganglion cells (RGC) and lateral geniculate nucleus (LGN), there is a dichotomy between the small parvocellular (P) cells and the large magnocellular (M) cells. In areas V1 and V2 there is a tripartite compartmentalization. Layer 4B of V1 and the thick stripes of V2 are dominated by magnocellular inputs, and they project most strongly to area MT and from there to the posterior parietal complex. The blobs and interblobs of V1 and the thin stripes and interstripes of V2 are dominated by parvocellular inputs, and information flows from them to area V4 and then to the inferotemporal complex. All lines except that from the retina to the LGN represent reciprocal connections. Note that there is significant cross-talk between processing streams at several levels of the hierarchy. Source: Van Essen and Anderson (1990).
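The stream organization summarized in Figure 3 can also be written out as a small lookup table. The entries below simply paraphrase the caption and surrounding text; the labels are mine, and the substantial cross-talk between streams noted in the caption is omitted for brevity.

```python
# The compartmental wiring of Figure 3 as a simple adjacency table.
# Entries paraphrase the text; labels are illustrative, and cross-talk
# between the streams is deliberately left out of this sketch.
projections = {
    "LGN magnocellular":  ["V1 layer 4C"],
    "V1 layer 4C":        ["V1 layer 4B"],
    "V1 layer 4B":        ["V2 thick stripes", "MT"],
    "V2 thick stripes":   ["MT"],
    "MT":                 ["posterior parietal complex"],
    "LGN parvocellular":  ["V1 blobs", "V1 interblobs"],
    "V1 blobs":           ["V2 thin stripes"],
    "V1 interblobs":      ["V2 interstripes"],
    "V2 thin stripes":    ["V4"],
    "V2 interstripes":    ["V4"],
    "V4":                 ["inferotemporal complex"],
}

def downstream(area, table=projections):
    """All areas reachable by following forward projections from `area`."""
    reached, frontier = set(), [area]
    while frontier:
        for target in table.get(frontier.pop(), []):
            if target not in reached:
                reached.add(target)
                frontier.append(target)
    return reached

print(sorted(downstream("V1 layer 4B")))
```

Following the table from layer 4B, for example, reaches the thick stripes, MT, and the posterior parietal complex, which is the magnocellular-dominated route described in the caption.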


NEURONAL RESPONSE PROPERTIES

What are the cells in these different compartments actually doing in terms of processing information? The standard approach, inspired by the work of Hubel and Wiesel, is to use simple stimuli such as bars and edges of light and to ask what turns on a cell in any given area. Figure 4 shows an example of a cell from area VP that happened to be highly selective for stimulus wavelength. The cell responds to long wavelengths (red), but not at all to short wavelengths or to white light. There is now quite a rich catalog of information about the basic selectivity of cells in the visual pathway, based on studies in many laboratories. We know, for example, that color selectivity is very common in V4 but very rare in MT. Interestingly, the same distinction applies to the different compartments of area V2. Those compartments projecting to MT show very low color selectivity, whereas both the thin stripe and interstripe compartments that project to V4 are rich in color selectivity by our criteria. The opposite is evident for direction selectivity: there is a high incidence of direction selectivity in area MT and a very low incidence in area V4. A similar bias occurs in V2, with a very low incidence of direction selectivity in the compartments projecting to V4 and a somewhat higher incidence in the thick stripes that project to MT. But it is not a perfect match, in that the percentage of direction-selective cells in the thick stripes of V2 is not nearly as high as the percentage in MT itself. One has to wonder what is going on in this compartment other than a simple analysis of stimulus direction. Using this kind of information, again gleaned from a number of different studies, Ted DeYoe put together an illustration that gives a qualitative impression of the kinds of information processing represented within the different channels.
This is illustrated in Figure 5 with a set of icons representing different types of selectivity (prism = wavelength selectivity; spectacles = binocular disparity selectivity; pointing hand = direction selectivity; angle = orientation selectivity); these are placed in the areas and compartments in which a high incidence of such selectivity is encountered. In the magnocellular stream, for example, projecting through V1 and V2 into MT, there is a substantial incidence of direction selectivity, suggesting an involvement with motion analysis. There is also information about stimulus orientation and binocular disparity represented at all of these levels, so it is not just a single kind of selectivity: multiple cues appear to be analyzed within this stream. Within the parvocellular stream, the compartments associated with the blobs and the thin stripes are dominated by an analysis of wavelength; one suspects that this stream is involved in the analysis of stimulus color. The compartment associated with the interblobs contains a combination of color-selective cells and orientation-selective cells, with many cells showing selectivity along both dimensions.

FIGURE 4 Selectivity of a cell in area VP to stimuli of different wavelengths. The cell responds well to long wavelengths (red) but not to shorter wavelengths or to white light. Source: A. Burkhalter and D. Van Essen, unpublished.

This raises the question of whether there is a real difference in the way in which color (wavelength) information is used in these two streams. We really do not know the answer, but it seems likely that color information


FIGURE 5 Representation of various response selectivities in different areas and compartments in the visual hierarchy. Icons are placed in each compartment to symbolize a high incidence of cells showing selectivity for stimulus wavelength (prisms), orientation (angle symbols), direction (pointing hands), or binocular disparity (spectacles). Each processing stream has a distinctive physiological profile, but most types of selectivity are represented in more than one stream. Source: Adapted from DeYoe and Van Essen (1988).


might sometimes be used to encode the presence of interesting features, such as chromatic borders. Once the information about the presence of a border has been generated, however, information about the colors used to define the border might be discarded higher in the system. Thus, the presence of wavelength selectivity per se does not in and of itself imply that a cell is part of a stream explicitly involved in color analysis. We need to be aware of such subtleties in order to fathom the relationship between the properties of individual neurons and the functions of the circuits in which they are involved (see DeYoe and Van Essen, 1988). As one traces the visual pathway through fairly high levels, it is striking not only that one can activate these higher-level cells with rather simple stimuli, bars and edges, but also that the sharpness of tuning one sees at these high levels is not dramatically different than it is in V1 (the first level at which such information becomes explicit at the single-neuron level). That raises a fundamental question about what these higher-order areas are actually doing. Neurons in these areas are not simply relaying information; they must also be processing it in an interesting way. We suspect that the answer does not lie in the way simple stimuli are analyzed: more complicated stimuli are needed to understand the role of these cells, and the type of stimulus complexity that is needed should be linked to the tasks of the visual system in mediating perception. We should use psychophysics, then, as a guide for studying higher-level processing. A good example is an experiment inspired by the perception of texture. It is notable that we get the percept very quickly that we are looking at a photo of a rocky beach (Figure 6). We do not need to scrutinize individual rocks, one after another, and assemble them sequentially into the percept of a beach.
It is the texture of that pattern that we process very quickly to get the impression not only that this is a particular rocky beach, but that its surface is receding in depth. How do we begin to analyze the visual processing of texture? Psychophysical experiments (e.g., Julesz, 1984; Beck, 1976) have made considerable progress on this issue, in significant part by reducing the problem to one in which textures are defined using a set of distinct texture elements, or textons. Using their work as a general guide, we have made recordings from areas V1 and V2 using computer-generated texture patterns of a rather simple type (Figure 7). Recording from a cell in area V2, we found that the cell preferred near-vertical stimuli when tested with a single texture element. When this element was surrounded by a texture pattern of the same element orientation (a uniform texture field), the response of the cell was almost completely suppressed. When the surround elements were of the orthogonal orientation (orientation contrast), the responses were still


FIGURE 6 A natural scene that is rich in textural information.


quite vigorous. Thus, the responses of this particular cell correlate well with the perceptual salience of the central texture element.
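The suppression pattern just described can be caricatured with a toy divisive model in which the center response is divided down when the surround orientation matches the center. The functional form and all parameter values below are illustrative assumptions, not a fit to the recordings.

```python
# Toy model of the orientation-contrast effect described above: the response
# to a center bar is divisively suppressed by surround bars of similar
# orientation. Parameter values (gain, k) are illustrative assumptions.
import math

def response(center_deg, surround_deg=None, gain=10.0, k=4.0):
    """Firing rate (arbitrary units) of a model cell preferring vertical (0 deg)."""
    drive = gain * math.cos(math.radians(center_deg)) ** 2
    if surround_deg is None:
        return drive
    # Suppression is strongest when the surround matches the center orientation.
    similarity = math.cos(math.radians(center_deg - surround_deg)) ** 2
    return drive / (1.0 + k * similarity)

print(response(0))        # bar alone: 10.0
print(response(0, 0))     # uniform texture field: 2.0 (suppressed)
print(response(0, 90))    # orientation contrast: 10.0 (still vigorous)
```

The qualitative behavior mirrors the cell in Figure 7: a strong response to the lone element, near-abolition in a uniform field, and a vigorous response when the surround is orthogonal.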

FIGURE 7 Responses of a neuron in area V1 to orientation contrast. The cell responds well to a near-vertical bar presented within the classical receptive field (C) and to the same bar when it is surrounded by a texture pattern containing bars of orthogonal orientation (C ⊥ S). Responses are suppressed when the bar is part of a uniform texture field (C = S). Source: Van Essen et al. (1989).

COMPUTATIONAL APPROACHES

I would like now to address the role of computational approaches to vision in understanding basic aspects of visual processing. It is popular these days to consider the possibility that computational approaches will provide strong insights for understanding visual processing, but the field is still in its infancy, especially in terms of making a correlation between abstract computational theories and actual physiological processes occurring within the brain. Although computational models are very interesting in their own


right, it is hard to see exactly what their implications are at the level of single neurons in visual cortex. One example of a computational strategy that we suspect may have firm roots in the underlying anatomy and physiology involves the understanding of depth perception by stereopsis, an idea generated by Charles Anderson, now working at the Jet Propulsion Laboratory in Pasadena. The standard concept of stereopsis begins with the eyes fixating on a given point. That point is imaged on the center of the fovea in the right and left eyes. Images lying on the horopter, or fixation plane, fall on precisely corresponding portions of the two retinas. Images in depth relative to the horopter, however, fall on disparate or noncorresponding portions of the two retinas. Our sense of stereoacuity is exquisitely good. We can pick up disparities on the order of a few seconds of arc—a fraction of a photoreceptor diameter. Although that is certainly impressive by any standard, it is even more striking when one takes into account the fact that our ability to fixate an object and hold it on the center of the two foveas is really not all that great. Psychophysical observations have demonstrated that there is actually a fair amount of jitter in the precise localization of the image and that the jitter is not concordant in the two eyes. From moment to moment, there are fluctuations on the order of at least a few minutes of arc, and up to 10 or 20 minutes of arc during a time when we can see a well-defined spot that appears to be stable in depth. The implication is that our stereoacuity is a couple of orders of magnitude sharper than the binocular vergence errors that are part of our normal visual processing. How do we do this? This is a serious computational problem for the visual system to cope with, yet we obviously succeed. 
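As a rough check on those magnitudes, using illustrative values picked from within the quoted ranges: a vergence jitter of 10 minutes of arc measured against a stereoacuity of 5 seconds of arc gives a ratio of about 120, i.e., roughly two orders of magnitude.

```python
# Back-of-envelope comparison of stereoacuity with vergence jitter.
# The specific values are illustrative picks from the ranges quoted above.
stereoacuity_arcsec = 5.0                # "a few seconds of arc"
vergence_jitter_arcmin = 10.0            # "10 or 20 minutes of arc"
vergence_jitter_arcsec = vergence_jitter_arcmin * 60.0
ratio = vergence_jitter_arcsec / stereoacuity_arcsec
print(ratio)  # 120.0, i.e., about two orders of magnitude
```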
We suggest that one needs some kind of dynamic shifting or remapping process between the retina and the first stage of binocular integration in the cortex (Anderson and Van Essen, 1987). This is termed the registration problem in stereopsis. Imagine that the left and right eyes are looking at very similar patterns, except that the patterns are physically offset; a given luminance peak then falls on disparate parts of the two retinas because of the vergence errors (Figure 8). What we need is a process in which the projection from the retinal ganglion cells up to the cortex can be adjusted independently for the right and left eyes, and adjusted by sufficient magnitude that the image representations come into alignment. If that could be achieved in the manner illustrated schematically in the figure, then the registration problem could be cleverly solved by the visual system. Figure 9A shows a type of circuit that could do this job. In this succession of relay stages from the retinal ganglion cells up through several successive layers, connections ascend, but they branch a little bit at the first level, a little more at the second, and even more at the highest level. As they


branch, one would like only one set of branches to be active at any one moment. So if, for example, the right side of the pathway were active, information would go up and get shifted to the right, and then get shifted to the left, register, and shift back to the right. If one had control over which branches were being affected, one could get a dynamic remapping of the cells at the beginning onto any set of contiguous cells at the highest level.

FIGURE 8 Schematic diagram of how a dynamic shifting process could provide binocular registration at the cortical level (V1) despite misregistration of luminance patterns for the two eyes. Note that the sharp luminance peak, which activates noncorresponding RGCs (hatched circles), maps onto corresponding cells at the registration stage. Source: Anderson and Van Essen (1987).

We propose, in the slightly simpler set of layers and cells shown in Figure 9B, that one could achieve this by means of a set of inhibitory neurons that could shunt or veto the signals going through one set of dendrites. The other inhibitory neuron would be silent, thereby letting the pathway going to the other set of dendrites send its signal through. The activity of these inhibitory neurons could thus determine whether information goes to the right or the left at each successive level. Is there any shred of evidence that something like this might be going on in the visual system? One would need several successive, seemingly simple relay stages, inhibitory neurons, and some kind of feedback mechanism. All of those features are in fact present in area V1, but their significance has heretofore been puzzling. The inputs from the lateral geniculate nucleus have long been known to terminate within the layer 4 complex, in which there is an enormous increase in the number of cells available. Yet these cells have been described as simple relay stages without orientation selectivity or any other kind of processing. So


there are successive relay stages available, inhibitory neurons in these layers, and also massive feedback known to come from layer six to all of these intermediate layers.

FIGURE 9 A simple shifter circuit. A: Ascending components for a four-level circuit with eight cells at the bottom. Cells at each level have bifurcating axons that contact a pair of target cells at the next level. B: A complete shifter circuit for a three-level network, starting with four cells at the bottom level. Specific dendritic innervation patterns are shown for both ascending inputs and inhibitory neurons involved in the control of the shifting process. Heavy lines in A and B represent an activity pattern involving successive shifts to the right, left, and (in A) again to the right. Source: Anderson and Van Essen (1987).

There is also physiological evidence (Poggio, 1984) that the disparity tuning of cells in the superficial layers of the cortex is actually sharper than the vergence errors known to exist in the monkey as well as in humans.
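The branching-and-gating scheme of Figure 9 can be sketched in a few lines of code: at each level a binary control signal selects one of two branches, and the branch offsets grow by powers of two, so k levels can realize any net shift from 0 to 2^k - 1 positions. This is a one-directional simplification with wraparound at the edges; the names and structure below are mine, not the authors' implementation.

```python
# Minimal simulation of a shifter circuit (after Anderson and Van Essen, 1987).
# Each relay level branches to two targets (offset 0 or +2**level), and an
# inhibitory control bit gates which branch is active. This one-directional,
# wraparound version is an illustrative simplification of Figure 9.

def shifter(signal, controls):
    """Relay `signal` through len(controls) levels; controls[i] is 0 or 1."""
    n = len(signal)
    for level, gate in enumerate(controls):
        step = gate * 2 ** level
        # Each cell at the next level reads from the input `step` positions back.
        signal = [signal[(i - step) % n] for i in range(n)]
    return signal

retina = [0, 0, 1, 0, 0, 0, 0, 0]      # a luminance "peak" at position 2
cortex = shifter(retina, [1, 0, 1])    # net shift of 1 + 4 = 5 positions
print(cortex.index(1))                 # 7
```

Applying independent control settings to the left-eye and right-eye pathways would let the two monocular maps be brought into register before binocular combination, which is the role proposed for such a circuit in the text.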


Thus, there is physiological evidence that the registration problem has been solved very early in the visual cortex. Our specific hypothesis, then, is that it may be solved by means of a dynamic shifting process of predictable magnitude, implemented in this heretofore mysterious set of relay layers within primary visual cortex. This is something we are exploring at the present time. Altogether, we have learned a fair amount about what is going on in the monkey brain. The extent to which this is relevant to the human brain, with its tenfold or more greater size, is an issue that I will leave to other investigators. I think, though, it is fair to conclude that we are continuing to make progress in understanding how the detailed microcircuitry of the cortex can explain interesting aspects of visual processing.

REFERENCES

Anderson, C.H., and D.C. Van Essen 1987 Shifter circuits: A computational strategy for dynamic aspects of visual processing. Proc. Natl. Acad. Sci. 84:6297-6301.
Beck, J.J. 1976 Effect of orientation and of shape similarity on perceptual grouping. Perception & Psychophysics 1.
Carroll, E.W., and M. Wong-Riley 1984 Quantitative light and electron microscopic analysis of cytochrome oxidase-rich zones in V-II prestriate cortex of the squirrel monkey. J. Comp. Neurol. 222:1-17.
DeYoe, E.A., and D.C. Van Essen 1985 Segregation of efferent connections and receptive field properties in visual area V2 of the macaque. Nature 317:58-61.
1988 Concurrent processing streams in monkey visual cortex. Trends in Neurosci. 11:219-226.
Julesz, B. 1984 Toward an axiomatic theory of preattentive vision. Pp. 585-612 in G.M. Edelman, W.E. Gall, and W.M. Cowan, eds., Dynamic Aspects of Neocortical Function. New York: Wiley.
Livingstone, M.S., and D.H. Hubel 1984 Anatomy and physiology of a color system in the primate visual cortex. J. Neurosci. 4:309-356.
Poggio, G.F. 1984 Processing of stereoscopic information in primate visual cortex. Pp. 613-634 in G.M. Edelman, W.E. Gall, and W.M. Cowan, eds., Dynamic Aspects of Neocortical Function. New York: Wiley.
Shipp, S., and S. Zeki 1985 Segregation of pathways leading from area V2 to areas V4 and V5 of macaque monkey visual cortex. Nature 315:322-325.
Tootell, R.B.H., M.S. Silverman, R.L. DeValois, and G.H. Jacobs 1983 Functional organization of the second cortical visual area in primates. Science 220:737-739.

About this PDF file: This new digital representation of the original work has been recomposed from XML files created from the original paper book, not from the original typesetting files. Page breaks are true to the original; line lengths, word breaks, heading styles, and other typesetting-specific formatting, however, cannot be retained, and some typographic errors may have been accidentally inserted. Please use the print version of this publication as the authoritative version for attribution.


Van Essen, D.C. 1985 Functional organization of primate visual cortex. Pp. 259-329 in E.G. Jones and A. Peters, eds., Cerebral Cortex, Vol. 3. New York: Plenum Press.
Van Essen, D.C., and J.H.R. Maunsell 1983 Hierarchical organization and functional streams in the visual cortex. Trends in Neurosci. 6:370-375.
Van Essen, D.C., and C.H. Anderson 1990 Information processing strategies and pathways in the primate retina and visual cortex. In S.F. Zornetzer, J.L. Davis, and C. Lau, eds., Introduction to Neural and Electronic Networks. Orlando, Fla.: Academic Press.
Van Essen, D.C., E.A. DeYoe, J.F. Olavarria, J.J. Knierim, D. Sagi, J.M. Fox, and B. Julesz 1989 Neural responses to static and moving texture patterns in visual cortex of the macaque monkey. Pp. 49-67 in D.M.K. Lam and C. Gilbert, eds., Neural Mechanisms of Visual Perception. Woodland, Tex.: Portfolio Publishing.


Areas and Modules in Visual Cortex

JON H. KAAS

One of the disadvantages of speaking on the topic of visual cortex organization is that there are already a lot of tracks in the snow. But there is also the advantage that tracks have already been laid, and I can travel along some of these trails. I would like to start by noting some of the difficulties in determining brain organization.

PROBLEMS IN SUBDIVIDING CORTEX

In talking about the organization of the brain, there is an initial problem: what kinds of concepts do we use to describe its structure? We commonly use the terms area and nucleus. We have some idea of what we mean by nucleus, although the term has been used somewhat inconsistently for subdivisions of the brain. David Van Essen has already talked about the difficulty of experimentally defining an area; the same criteria do not apply to what constitutes an area as one moves from lower to higher areas in a processing hierarchy. We also talk about smaller subdivisions: bands, clusters, modules, columns, and so on. Delimiting areas and nuclei can be difficult, and defining subdivisions of the brain becomes even more difficult with these smaller units.

A second problem in dealing with studies of the mammalian brain is that there are a lot of species and, hence, a lot of different brains. Figure 1 illustrates the pathways by which mammals arose very early from therapsid reptiles, around 200 million years ago. Clearly there are a number of major separate lines of mammalian evolution, and they have been separate for a long time. The brains that we can now study evolved from the rather simple brains of early mammals. Figure 2 provides an example of one of the changes that has occurred in mammalian brains. Radinsky (1975) has


illustrated how much mammal brains can differ in size, even when the animals are roughly the same size. He considered the brains of a hedgehog, a galago, and a squirrel monkey. It is clear that neocortex has expanded in primates, with major differences in the neocortex of prosimian primates and New World monkeys. There must be major differences in brain structure as well. So there is also the difficulty of comparing brains when major differences undoubtedly exist.

A third problem is that current theories are often constrained by a history of conclusions based on rather ambiguous evidence. Early investigators of brain organization were very limited by the available methods. Early architectonists (e.g., Brodmann, 1909) examined brain structure and divided the brain into histologically distinct subdivisions that were presumed to be functional subdivisions. However, it is now clear that many errors occurred. Figure 3 demonstrates some of the difficulties of interpreting differences in histological structure. In the caudal part of a tree shrew brain, primary visual cortex (area 17) can be clearly identified, and binocular and monocular parts can be distinguished by thickness. If we consider a similar section through the brain of a hedgehog, area 17 is also apparent, with structurally distinct binocular and monocular parts. However, what is most obvious is that the appearance of area 17 differs in hedgehogs and tree shrews. One of the real achievements of Brodmann was that he recognized area 17 as homologous, that is, as the same field across these and other mammals. We now know from many types of evidence that these fields are the same, although some people deny, even to this day, that area 17 of hedgehogs is area 17. If there is disagreement over the identity of the primary field, which is the most easily identified field, you can imagine the problem of identifying homologies for other fields across species.
Investigators relying only on architectonic evidence have had quite different opinions on how brains are divided. This variability is due partly to the difficulty of making distinctions and partly to the fact that it is very hard to deduce homologies from histological appearance. The solution to the problem of identifying cortical areas is not simply to say that species differences relate to the expansion of a lot of cortex we know nothing about, what we would call association cortex, so that we need only identify a few primary areas across different species. The methods are available to do much better than that. Figure 4 shows an example of how the application of a new procedure can reveal the organization of part of the brain with such certainty that there can be no further doubt. A brain section through somatosensory cortex in a rat, when reacted for cytochrome oxidase (CO), reveals a map of the rat body surface, with the hindfoot, the trunk, the forefoot, and the head, including the mouth parts and the lower lip, all represented by CO-dense regions. The use of such new stains and reactions, which reveal aspects of cortical


FIGURE 1 The major phylogenetic radiations of mammals (based on Kaas, 1987b).


organization with clarity, will allow the development of new hypotheses about how cortex is organized in different mammals. By using many methods, often in conjunction, investigators have made much progress. One conclusion of Brodmann that seems to be absolutely supportable is that animals with very small brains, like hedgehogs, have few cortical areas, and animals with big brains have more areas. Some of these areas can be identified as the same across different mammals, but others must have evolved independently.
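The brain-to-body comparison summarized in Figure 2 (E = brain weight, P = body weight) is often expressed as an encephalization quotient: observed brain weight divided by the weight expected for a typical mammal of the same body weight. The short Python sketch below is purely illustrative; it uses Jerison's classic expectation of 0.12·P^(2/3) and rough ballpark weights that are my own assumptions, not values from this chapter.

```python
# Encephalization quotient (EQ): observed brain weight E divided by the
# brain weight expected for a typical mammal of body weight P.
# Jerison's classic expectation: E_expected = 0.12 * P**(2/3), in grams.

def encephalization_quotient(brain_g: float, body_g: float) -> float:
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

# Rough, illustrative weights (grams) for the three species compared;
# these are assumed ballpark figures, not data from the chapter.
animals = {
    "hedgehog": (3.3, 800.0),
    "galago": (5.0, 250.0),
    "squirrel monkey": (24.0, 800.0),
}

for name, (brain, body) in animals.items():
    print(f"{name}: EQ = {encephalization_quotient(brain, body):.2f}")
```

On these ballpark numbers the squirrel monkey's quotient comes out several times the hedgehog's, which is the pattern of primate neocortical expansion the figure is meant to convey.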

FIGURE 2 An example of how brains vary in size relative to body weight. E = brain weight; P = body weight. Source: Radinsky (1975).

With regard to smaller subdivisions of the brain, a popular idea introduced by Mountcastle (1957) was that of “columns” within somatosensory cortex. Expanding this concept from the original description of two classes


of columns (one related to receptors in deep tissues and one related to cutaneous receptors) allows the sort of scheme to emerge that has been illustrated by Wally Welker (1973: Figure 10). Within a single cortical area, all submodalities related to each body part (hair, touch, pressure, joint, temperature) would have their own cortical columns of neurons extending from surface to white matter in S-I. It turns out that there is no evidence that any cortical area is organized in this complex manner. However, the segregation of neurons into groups of similar response properties and activity patterns seems to be a general feature of cortex. What is needed now

FIGURE 3 Frontal brain sections through caudal visual cortex of a hedgehog (A) and a tree shrew (B). Nissl stain. Although areas 17 (V-I) and 18 (V-II) have been identified by multiple criteria, they are quite different in histological appearance (modified from Kaas, 1987a).


FIGURE 4 The cytochrome oxidase reaction pattern of primary somatosensory cortex in a rat showing the details of the body surface representation. Source: Li et al. (1990), based on illustrations of Dawson and Killackey (1987).


is to expand the concept of the cortical column to include many different kinds of configurations of segregated neurons. One example is in the primary somatosensory cortex of monkeys, area 3b. Mriganka Sur and coworkers (1984) have shown that the two classes of peripheral afferents (slowly adapting and rapidly adapting cutaneous afferents) seem to activate quite different band-shaped configurations of neurons. Similarly, a number of people working in the auditory system have described neurons in primary auditory cortex that are either excited by inputs from the two ears, or excited by one ear and inhibited by the other (Middlebrooks et al., 1980). The two types of bands alternate and cross A-I counter to the lines of the isofrequency representations. Of course, there are the well-known ocular dominance and orientation-specific columns and cytochrome oxidase blobs in area 17, and the three types of bands in area 18 of primates (see Livingstone and Hubel, 1988, for a review). These are not traditional columns, because they have different shapes or fail to include all cortical layers, so the term module is more applicable.

ASSIGNING FUNCTIONS TO MODULES

Given that the concepts of areas and modules are useful, I would like to talk a little bit about the functional significance of the subdivisions of areas that are called modules. Brodmann (1909) called areas “organs of the brain” and argued that areas have different functions. In principle, areas can be related to functions. But when we are talking about modules, we need to use a little caution. Modules may be collections of small groups of neurons that interact in ways that are aided by proximity and form small processing units. However, segregations of neurons may occur for reasons other than functional ones. Selection in evolution may produce modular segregations as a by-product of selection for other factors (Gould and Lewontin, 1979).
Although this is a little confusing, Figure 5 provides an example of one kind of segregation in the nervous system: the ocular dominance columns, or bands, that subdivide layer 4 of area 17 of the macaque monkey. Florence et al. (1986) have summarized the distribution of ocular dominance bands in different primates. The important finding is that many species of monkeys do not have ocular dominance bands in cortex, some primates have weakly segregated bands, and some have highly segregated bands as in Figure 5. However, most animals do not have ocular dominance bands, and these mammals function quite well without them. Ocular dominance columns obviously evolved independently a number of times, which raises several questions. Are ocular dominance columns functionally significant, in that some visual functions are improved by the monocular activation of


alternating bands? Or are the bands an epiphenomenon resulting from selection for other factors? That the capacity for ocular dominance bands can be induced experimentally, without that capacity having evolved directly, is strongly suggested by the research of Martha Constantine-Paton (1982). In one study she added an extra eye to a developing frog, directing two eyes into the same optic tectum, and the two inputs segregated into alternating bands. Clearly, the frog's tectum did not evolve to segregate visual information from each eye, but the capacity for such segregation is there. That capacity must be a result of factors that evolved for some other purpose. My point is that we need to be very cautious when talking about the functions of segregations such as ocular dominance bands, because they might be totally unrelated to such functions as, for example, binocular vision.

FIGURE 5 Flattened cortex showing ocular dominance bands in area 17 of a macaque monkey. Black bands and black monocular area (left) indicate parts of layer 4 activated by the contralateral eye. The central white oval corresponds to the blind spot of the contralateral eye. The lower white area indicates a region of missing data. Based on unpublished experiments of S.L. Florence and J.H. Kaas.

Another example of a type of modular organization with implications for how modules form is the lamination patterns in the lateral geniculate nucleus. One of the remarkable features of the lateral geniculate nucleus is that the laminar pattern is so variable across different mammals and even different primates. Figure 6 is a schematic of the laminar patterns in different primates. Why all these patterns? It is quite puzzling. Such


variation has fascinated investigators for a long time (e.g., Walls, 1953), and a number of hypotheses have been born and died along the way. Although some understanding of the nature of lamination in some species is coming about, an adequate explanation for the great variety still is not available. Looking at the relatively simple lateral geniculate nucleus of an owl monkey, one can see two poorly separated parvocellular layers of medium-sized neurons, two obvious magnocellular layers of large neurons, and a large number of small cells in the interlaminar zone between the magno- and parvocellular layers (Figure 7). Thus, there are three main populations of cells in the LGN: medium, very small, and large. The reason I show the LGN of a nocturnal monkey is that the small cell system is well developed in nocturnal monkeys. Parvocellular neurons have small receptive fields and a sustained response to a maintained stimulus, while magnocellular neurons have large receptive fields and tend to respond to the onset and offset of a stimulus (Sherman et al., 1976). These are just examples of some of the many differences in the response properties of these kinds of neurons. Determining the response properties of the small cells turns out to be difficult, because there are few of them and it is harder to record from small neurons. However, Tom Norton and Vivien Casagrande (1982) have studied the small (W) cells in the LGN of galagos, in which they are more frequent, and these cells have properties that are distinctly different from those of parvocellular and magnocellular neurons. These three groups of cells seem to relate to different processing streams in the visual system that originate in the retina. The W-cell or small cell stream from the retina passes through the lateral geniculate nucleus and up into cortex, where it terminates largely in puffs in layer 3 (Fitzpatrick et al., 1983).
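The sustained-versus-transient contrast between parvocellular and magnocellular responses can be caricatured with a toy computation. The Python sketch below illustrates the distinction only and is not a biophysical model; the time constant and stimulus timing are arbitrary choices of mine: a "sustained" unit tracks a low-pass-filtered version of a step stimulus, while a "transient" unit responds to the stimulus's rate of change and so is active only at onset and offset.

```python
import numpy as np

dt = 1.0                                        # ms per sample
t = np.arange(0, 500, dt)                       # a 500 ms sweep
stim = ((t >= 100) & (t < 400)).astype(float)   # step on at 100 ms, off at 400 ms

# "Sustained" (parvo-like) response: a leaky integrator of the stimulus.
tau = 20.0                                      # assumed time constant, ms
sustained = np.zeros_like(stim)
for i in range(1, len(t)):
    sustained[i] = sustained[i - 1] + dt / tau * (stim[i] - sustained[i - 1])

# "Transient" (magno-like) response: rectified rate of change of the stimulus.
transient = np.abs(np.diff(stim, prepend=stim[0]))

print("sustained response at 300 ms:", round(float(sustained[300]), 3))
print("transient events at t (ms):", t[np.nonzero(transient)])
```

The sustained trace plateaus near 1.0 while the stimulus is maintained; the transient trace is zero everywhere except at the 100 ms onset and the 400 ms offset.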
We have heard some speculation that the parvocellular system relates more to form vision, and the magnocellular system more to motion, attention, and detection. One wonders about the small cell system, which is not very well developed in diurnal primates. That system may modulate both of the other systems rather than have a major activating influence on cortical neurons. However, the major point about the laminar pattern of the LGN of primates (and other mammals) is that the layers separate neurons that have different properties. The apparent significance of the different properties is that they result in discorrelations of activity during development. The types of discorrelations that occur depend on transduction factors in the retina, which can vary across species. Other factors, such as the developmental stage at which discorrelations become effective, may also be important. Layers are also formed because activities in the two eyes are discorrelated. In addition, breaks or discontinuities in layers (see Kaas et al., 1972) occur because the nerve head, or optic disc, of the retina results in adjacent neurons in some layers having significant discorrelations. In a very similar


FIGURE 6 A schematic of the evolution of different laminar patterns in the lateral geniculate nucleus of primates. External and internal parvocellular, magnocellular, and koniocellular layers and sublayers are shown (PE, etc.). See Kaas et al. (1978) for details.


manner, the clusters of neurons and the groupings of thalamic inputs to neurons in S-I of rats and mice, reflected in the cytochrome oxidase pattern (Figure 4 ), may be the outcome of correlations in activity of afferents from a single whisker or body part and the discorrelations that occur for inputs from adjacent body parts. The argument here is that modular and laminar segregations separate neurons of differing properties, but that this is an outcome of the interaction of a few basic developmental mechanisms, and such segregations need not have patent functional correlates.
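The developmental argument here, that segregation can fall out of correlated and discorrelated activity plus competition, can be caricatured in a few lines of Python. This is a deliberately oversimplified toy of my own, not any specific published model: one target cell receives two inputs (say, "left eye" and "right eye") that are never active together, and a Hebbian update with weight normalization amplifies a tiny initial bias until one input captures the cell. A neighboring cell with the opposite bias would be captured by the other input, yielding alternating territories.

```python
# Toy sketch (my own simplification, not a published model) of how
# decorrelated inputs can segregate onto a target cell. The two inputs are
# strictly alternating (maximally discorrelated); a Hebbian update that is
# supralinear in the output, plus weight normalization, makes the symmetric
# state unstable, so a tiny initial bias grows until one input wins.

w = [0.52, 0.48]            # weights from the two inputs; tiny initial bias
c = 0.1                     # learning rate

for step in range(2000):
    x = [1.0, 0.0] if step % 2 == 0 else [0.0, 1.0]    # alternating activity
    y = w[0] * x[0] + w[1] * x[1]                      # target cell activity
    w = [wi + c * y * y * xi for wi, xi in zip(w, x)]  # Hebbian growth
    total = sum(w)
    w = [wi / total for wi in w]                       # normalization = competition

print([round(wi, 3) for wi in w])   # first weight near 1, second near 0
```

The normalization step supplies the competition: any gain by one input is a loss for the other, so the balanced state cannot persist once any asymmetry exists.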

FIGURE 7 A Nissl-stained, parasagittal section through the lateral geniculate nucleus of an owl monkey showing the parvocellular, magnocellular, and interlaminar neurons. Compare with Figure 6. Based on Kaas (1987a).

VISUAL CORTEX ORGANIZATION IN PRIMATES

There are many uncertainties in our understanding of how visual cortex is organized in primates, and proposals for how cortex is subdivided differ for New World and Old World monkeys and even from laboratory to laboratory for the same species. Undoubtedly there are major differences across the major primate groups, since such features as brain size relative to body weight vary considerably from prosimians to humans. Some of the proposed differences undoubtedly reflect the difficulties in determining valid subdivisions of the brain and the limitations of past and current studies. Nevertheless, some features have been clearly demonstrated in a range of primate species and appear to be parts of the


basic primate plan. Figure 8 is a drawing of a brain section cut parallel to the surface of manually flattened cortex from the brain of a prosimian galago. The drawing is of the posterior part of the hemisphere, and it shows the distribution of transported label after an injection of the tracer WGA-HRP into area 17. The advantage of the flattened preparation is that a favorable section can reveal most of the surface-view pattern of connections without the distortions and errors introduced by the laborious process of reconstruction from serial sections cut in traditional planes. The first notable feature of the projection pattern is that small patches or puffs of label are distributed in a pinwheel fashion around the injection site in area 17. Such widespread, systematic, and discontinuous distributions of intrinsic connections were first adequately described by Rockland and Lund (1982, 1983). In area 17 of primates, the most widespread connections are between neurons in the cytochrome oxidase blobs. Widespread and discontinuously distributed intrinsic connections appear to exist in all visual areas of primates, suggesting that all areas are modularly organized. A second notable feature is that nearly all of the projections are to two other visual areas, V-II and the middle temporal visual area, MT. Such connections have been consistently demonstrated since the very first studies of the projections of area 17 (Kuypers et al., 1965), and they are part of the evidence that areas 17, V-II, and MT are basic to all primates and even to other mammals. Other, less dense projections (not apparent in the figure) are to cortex we call the dorsomedial visual area, DM (see Lin et al., 1982), and (from the dorsolateral part of area 17) to the dorsolateral visual area, DL (commonly called V-4 in macaque monkeys). These connections have been shown to be present in a number of primate species, and this provides part of the evidence that areas DL and DM are present in all primates as well.
A third notable feature of the projection pattern is that it is discontinuously distributed in all targets, further supporting arguments for modular organization in these fields. Such patchy projections of area 17 are characteristic of all studied mammals.

CONCLUSIONS

Figure 8 demonstrates what one can learn about the visual system from a few simple experiments. I started this talk by noting the magnitude of the problem of determining the organization of visual cortex in mammals when there are so many species, and when their brains appear to vary so much. I argued that the concepts of cortical areas and modules within areas are valid and useful, but cautioned that modules may occur in the brain for developmental reasons that are not related to obvious function. I also noted that there have been many proposals for subdividing cortex. Early proposals based largely or only on evaluations of differences in


histological appearance produced many conclusions we now know to be in error. Thus, we need to evaluate past and current proposals critically, with the realization that establishing valid subdivisions is difficult and often will depend on evidence from a multitude of procedures. Nevertheless, as Figure 8 shows, once the outline of a conceptual framework has been established, much can be deduced from rather simple experiments, and the problem of having so many brains to study and understand is greatly diminished.

FIGURE 8 A drawing of a brain section from the flattened cortex of a galago showing the injection site of WGA-HRP (dark circle) and projections within area 17 and in more rostral cortex. From Cusick and Kaas (1988).

REFERENCES

Brodmann, K. 1909 Vergleichende Lokalisationslehre der Grosshirnrinde. Leipzig: Barth.


Constantine-Paton, M. 1982 The retinotectal hookup: The process of neural mapping. Pp. 317-349 in S. Subtelny, ed., Developmental Order: Its Origin and Regulation. New York: Alan R. Liss.
Cusick, C.G., and J.H. Kaas 1988 Surface view patterns of intrinsic and extrinsic cortical connections of area 17 in a prosimian primate. Brain Research 458:383-388.
Dawson, D.R., and H.P. Killackey 1987 The organization and mutability of the forepaw and hindpaw representations in the somatosensory cortex of the neonatal rat. J. Comp. Neurol. 256:246-256.
Fitzpatrick, D., K. Itoh, and I.T. Diamond 1983 The laminar organization of the lateral geniculate body and striate cortex in the squirrel monkey (Saimiri sciureus). J. Neuroscience 3:673-702.
Florence, S.L., M. Conley, and V.A. Casagrande 1986 Ocular dominance columns and retinal projections in New World spider monkeys (Ateles ater). J. Comp. Neurol. 243:234-248.
Gould, S.J., and R.C. Lewontin 1979 The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proc. R. Soc. Lond. (Biol.) 205:581-598.
Kaas, J.H., M.F. Huerta, J.T. Weber, and J.K. Harting 1978 Patterns of retinal terminations and laminar organization of the lateral geniculate nucleus of primates. J. Comp. Neurol. 182:517-554.
Kaas, J.H. 1982 The segregation of function in the nervous system: Why do sensory systems have so many subdivisions? Pp. 201-240 in W.D. Neff, ed., Contributions to Sensory Physiology. New York: Academic Press.
Kaas, J.H. 1987a The organization of neocortex in mammals: Implications for theories of brain function. Ann. Rev. of Psych. 38:124-151.
1987b The organization and evolution of neocortex. Pp. 347-378 in S.P. Wise, ed., Higher Brain Functions. New York: John Wiley.
Kaas, J.H., R.W. Guillery, and J.M. Allman 1972 Some principles of organization in the dorsal lateral geniculate nucleus. Brain, Behavior Evol. 6:253-299.
Kuypers, H.G.J.M., M.K. Szwarcbart, M. Mishkin, and H.E. Rosvold 1965 Occipitotemporal cortico-cortical connections in the rhesus monkey. Exp. Neurol. 11:245-262.
Li, X.-G., S.L. Florence, and J.H. Kaas 1990 Areal distribution of cortical neurons projecting to different levels of the caudal brain stem and spinal cord in rats. Somato. & Motor Res. 7.
Lin, C.-S., R.E. Weller, and J.H. Kaas 1982 Cortical connections of striate cortex in the owl monkey. J. Comp. Neurol. 211:165-176.
Livingstone, M.S., and D.H. Hubel 1988 Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science 240:740-749.
Middlebrooks, J.C., R.W. Dykes, and M.M. Merzenich 1980 Binaural response-specific bands in primary auditory cortex (A-I) of the cat: Topographic organization orthogonal to isofrequency contours. Brain Res. 181:31-48.

AREAS AND MODULES IN VISUAL CORTEX


Mountcastle, V.B. 1957 Modality and topographic properties of single neurons of cat's somatic sensory cortex. J. Neurophysiol. 20, 408-434.
Norton, T.T., and V.A. Casagrande 1982 Laminar organization of receptive-field properties in the lateral geniculate nucleus of bushbaby (Galago crassicaudatus). J. Neurophysiol. 47, 715-741.
Radinsky, L. 1975 Primate brain evolution. American Scientist 63, 656-663.
Rockland, K.S., and J.S. Lund 1982 Widespread periodic intrinsic connections in tree shrew visual cortex (area 17). Science 215, 1532-1534.
Rockland, K.S., and J.S. Lund 1983 Intrinsic laminar lattice connections in primate visual cortex. J. Comp. Neurol. 216, 303-318.
Sherman, S.M., J.R. Wilson, J.H. Kaas, and S.V. Webb 1976 X and Y cells in the dorsal lateral geniculate nucleus of the owl monkey (Aotus trivirgatus). Science 192, 475-477.
Sur, M., J.T. Wall, and J.H. Kaas 1984 Modular distribution of neurons with slowly adapting and rapidly adapting responses in area 3b of somatosensory cortex of monkeys. J. Neurophysiol. 51, 724-744.
Walls, G.L. 1953 The lateral geniculate nucleus and visual histophysiology. Univ. of Calif. Publ. Physiol. 9, 1-100.
Welker, W.I. 1973 Principles of organization of the ventrobasal complex in mammals. Brain, Behav., Evol. 7, 253-336.

Visual Coding of Features and Objects: Some Evidence from Behavioral Studies

ANNE TREISMAN

I am going to talk this afternoon about some particular aspects of perception that I have been exploring using behavioral tasks rather than brain studies. The question I will discuss is what we can find out about the early stages of visual processing by using purely behavioral data. Like many other psychologists, we compare response latencies and error rates in different visual tasks. From these, we obtain a measure of relative difficulty and some indication of which operations are carried out in parallel and which sequentially. We infer the use of different operations from increases or decreases in total response times as we either complicate or simplify the task, and we look at different kinds of errors that may suggest ways in which the system breaks down. No one result will ever provide compelling support for a hypothesis, so we try to marshal as much converging evidence as we can to support the same underlying hypothetical mechanism. If we get consistent results, we gain confidence that our theory is on the right track.

One immediate observation is that perception feels effortless and automatic. The minute we open our eyes, we seem to be aware of an organized scene containing meaningful objects. We are not normally conscious of color patches, movements, edges, and textures that we then assemble, object by object. It might be the case, however, that this apparently effortless achievement is actually the result of complex preprocessing stages, involving many operations to which we have no conscious access. In fact, the ease of introspection seems to be inversely related to the order of processing, at least from what we can infer. That makes sense, since what we need to react to are tigers, footballs, or motor cars, not color patches. If there are extensive preprocessing operations, we need to probe them through indirect behavioral evidence; we cannot expect people to

introspect. One approach is to ask what functions need to be carried out early in the task of perceiving the real world, and then see which factors make those tasks easy or difficult.

FIGURE 1 Salient boundary between groups defined by shape or by color. (Striped areas should be green and white areas should be red.) Source: Adapted from Beck (1966).

EARLY GROUPING OF PERCEPTUAL ELEMENTS

Certainly, an early step must be to locate and define the boundaries of what might be candidate objects. We need to group areas that are likely to belong together and to separate the scene into potential objects to be identified. One approach, then, would be to ask what kinds of discrimination mediate the early grouping phenomena. A long time ago, the Gestalt psychologists suggested a number of different principles that seem to be important in understanding this process. Elements are grouped by proximity, by similarity, by common movement, and by good continuation. Now it turns out that those are all good guides to what might be parts of the same object. If you see a cow behind a tree, its front and rear are likely to be the same color, they are likely to move together, and so on. But maybe we could say a little more about what kinds of similarity are important in mediating grouping. Here we find a fairly sharp dichotomy: differences in simple aspects of shapes, like curved or straight lines and edges, will produce a good boundary between groups of elements; so will differences in colors and in brightness. In both cases (in Figure 1) the division down the middle is immediately salient. But if we ask people to find a boundary between green circles and red triangles on one side and red circles and green triangles on the other side (see Figure 2), they find it much more difficult. Similarly, we can look at the arrangements of parts of shapes. Figure 3 is taken from Beck (1966), who showed that we get a very good boundary

between elements defined by their orientation. So T's and tilted T's segregate well, but T's and L's with the same horizontal and vertical lines in different spatial arrangements do not. The finding is interesting because, as Beck showed, similarity judgments go the other way: if you show somebody a tilted T and a normal T and get them to rate how similar they are, then show them a T and an L, they will say that the T and the tilted T are more similar than the T and the L. For the earlier preattentive level of processing, however, grouping is based on different principles from those that mediate consciously judged similarity for single attended figures.

FIGURE 2 Poor segregation between groups defined by conjunction of color and shape.

Segregation and boundary formation offer one possible diagnostic for what happens early in visual processing. They suggest that simple properties like straight versus curved, tilted versus vertical, and color and brightness, all of which mediate good grouping, are likely to be distinguished early and in parallel across the visual field. But if we have to put parts or properties together to define a boundary, then we are not so good at it. The visual system just does not work that way.

EXPECTANCY AND ATTENTION

What else might we look at? Another possible diagnostic that might indicate early processing would be independence from central control, from voluntary decisions, expectancy, and attention. We can look to see what kinds of things are spontaneously salient or "pop out" of a display—what catches our attention when we look at a scene with a single black sheep among hundreds of white ones, for example. In visual search tasks, we ask subjects to find a target in displays in which it differs either in color, in


FIGURE 3 Good segregation between groups differing in line orientation, but not between groups differing only in line arrangement. Source: Adapted from Beck, 1966.

line orientation, or in size. Targets defined by simple features are available immediately and effortlessly. Can we say any more than just that feature detection tasks are easy? We can bring in another argument about the probable function of early visual processing, independent of attention: we would expect it to be spatially parallel. If the goal of early visual stages is to establish figure-ground relations and to monitor the field for any salient stimuli, there would be an advantage to doing it across the whole scene at once, rather than relying on a sequential scan.

This allows us to make a prediction about the effect of varying the number of items in the field. We can ask subjects to find a target when there is only 1 item in the display, or when there are 6, or when there are 60. If the target can be found at an early level of visual processing, at which detection is spatially parallel, we would expect search times to be independent of the number of items in the display. That is in fact what we find, for quite a number of different kinds of stimuli. A target that is green against a background of not-green, or filled against open stimuli, or a bullseye pattern against circles with dots outside the boundary (Figure 4) will be found without attention or effort. Latencies to detect these targets show no effect of added nontarget items (distractors). Performance seems to reflect spatially parallel processing; these targets show what I will call a pop-out effect.

The search diagnostic may throw more light on the early stages of processing if we look at the effects of varying the background stimuli (the distractors). We can make the distractors vary in size, orientation, gap versus completion, and so on, and see whether this makes a target defined by color any harder to find. Similarly, we can vary background colors and other features in a task requiring search for targets defined by orientation.
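The pop-out pattern can be made concrete with a toy sketch (an illustration added here, not a model from the talk; all names and feature values are invented): if each module keeps a map of where its feature values occur, a feature target is detected by a single lookup in the relevant map, independent of the number of distractors.

```python
import random

# Toy display: each item carries a value on each feature dimension.
def make_display(n_distractors, target=None):
    items = [{"color": "green", "orient": "vertical"} for _ in range(n_distractors)]
    if target is not None:
        items.insert(random.randrange(len(items) + 1), dict(target))
    return items

# One "module" per dimension: a map from feature value to the positions
# where that value occurs, built over the whole display.
def build_maps(items):
    maps = {}
    for pos, item in enumerate(items):
        for dim, value in item.items():
            maps.setdefault(dim, {}).setdefault(value, []).append(pos)
    return maps

# Feature search: one lookup in one module, whatever the set size.
def pops_out(maps, dim, value):
    return bool(maps.get(dim, {}).get(value))

display = make_display(60, target={"color": "red", "orient": "vertical"})
maps = build_maps(display)
print(pops_out(maps, "color", "red"))   # True: a unique red target pops out
print(pops_out(maps, "color", "blue"))  # False: no activity in that map
```

Whether 6 or 60 distractors are present, `pops_out` consults the same single map entry, which is the behavioral signature of a flat search function.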
We have found that background heterogeneity has little or no effect on search, provided that the variation is only on irrelevant dimensions and not on the relevant dimension that defines the target (Treisman, 1988). The apparent independence of visual processing on each of these separate dimensions suggests a modular organization. The idea is that there may be a number of relatively independent modules, each of which computes its own property, one specializing in color, one in orientation, one in stereoscopic depth, one in motion, and so on. These modules need not necessarily be anatomically separate, although some specialization into different anatomical channels has been described (Livingstone and Hubel, 1988; Van Essen, 1985); but I am suggesting they may be functionally separate. If features are analyzed in functionally separate, specialized modules, we might make the converse prediction about heterogeneity when we vary the nature of the target. In this case, it should be important to know that you are looking for a target that is blue rather than large or horizontal. You

can then check just the appropriate module for evidence of its presence. In an experiment to test this prediction, we compared how fast subjects could detect a blue target or a large target or a horizontal target when they did not know whether it would be blue or large or horizontal, and when they did know which it would be. The target always appeared against a background of small green vertical bars. The results suggest that checking several different properties takes longer than checking a single property. Although search remains spatially parallel, the latency to detect the target was greater when its nature was not specified in advance, as if subjects checked separately within each of the different modules until they found it.
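The known-versus-unknown comparison can be caricatured in the same toy terms (again an added sketch with invented names, not the actual procedure): knowing the target's dimension means consulting one module, while not knowing it means polling the modules one by one, so cost grows with the number of candidate dimensions but still not with display size.

```python
# Pooled activity signals, one per module: True means the module registers
# a feature value not shared by the small green vertical background bars.
pooled = {"color": True, "size": False, "orientation": False}

def detect_known(dim):
    # Target dimension specified in advance: a single module check.
    return pooled[dim], 1

def detect_unknown(dims):
    # Dimension unspecified: poll the modules until one signals the target.
    for checks, dim in enumerate(dims, start=1):
        if pooled[dim]:
            return True, checks
    return False, len(dims)

print(detect_known("color"))                             # (True, 1)
print(detect_unknown(["size", "orientation", "color"]))  # (True, 3)
```

The second number is the count of module checks, which is what the longer latencies for unspecified targets would reflect on this account.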

FIGURE 4 Easy detection (pop-out) of targets with a unique feature not shared by the nontargets.

LOCALIZATION

So far, I have given you some evidence for two kinds of information that are available from these early feature modules, if they exist. The first is the presence of global discontinuities or boundaries dividing one area from another. The second is the presence of a unique item in a display. Do these early representations contain any precise information about where things are, that is, about their localization?

Suppose we set up a display that has a locally unique item, for example a red circle amongst some green ones, or an X amongst O's; the unique item is very salient: it pops out. Suppose now we embed the group in which the target is locally unique in a larger display that has the same locally unique property present elsewhere. Figure 5 illustrates the more complex display. The locally unique item is now much harder to find when its defining feature is present elsewhere in the display, even though it may be some distance away (Treisman, 1982). The difficulty is not due simply to the larger or more complex display, because, if the target is unique not just locally but also in the whole display, it remains about as easy to find in the larger display as in the smaller local context.

What is going on here? It seems as if we can hide an object perceptually. Just by embedding an item in a display that has its locally unique property elsewhere, we can make it preattentively invisible. This suggests that the early representation automatically makes available some kind of pooled response that tells you, "Yes, there is some red there," or "Yes, there is a diagonal line." But the same process cannot tell you where the red item or the diagonal line is located.

What must the visual system do to locate the item? Performance in tasks that force subjects to create a unique identity for an item defined only by a conjunction of properties may give us some clue. We can, for example, look at a task in which subjects search for a green T amongst other green shapes mixed with other colored T's (Treisman and Gelade, 1980). As Figure 6 illustrates, the search time for this type of conjunction target increases linearly as a function of the number of distractors in the display. This pattern of performance suggests that each item was serially checked, adding about 60 milliseconds for each extra nontarget item that had to be rejected.
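The 60-millisecond slope invites a simple self-terminating scan model, sketched below (an added simulation, not the original analysis; the 60 ms figure is taken from the text and the base time is an arbitrary assumption). If the scan stops as soon as the target is found, present targets are located on average halfway through the display, so the target-present slope should be about half the target-absent slope.

```python
import random

SLOPE_MS = 60.0   # time to check and reject one item (figure from the text)
BASE_MS = 400.0   # arbitrary base time for everything else

def serial_search_rt(n_items, target_present):
    # Scan items in random order, stopping at the target if there is one;
    # on target-absent trials every item must be checked.
    checked = random.randrange(n_items) + 1 if target_present else n_items
    return BASE_MS + SLOPE_MS * checked

def mean_rt(n_items, target_present, trials=20000):
    return sum(serial_search_rt(n_items, target_present) for _ in range(trials)) / trials

# Slopes estimated from two set sizes: about 30 ms/item when the target is
# present, exactly 60 ms/item when it is absent (a 1:2 ratio).
present_slope = (mean_rt(40, True) - mean_rt(20, True)) / 20
absent_slope = (mean_rt(40, False) - mean_rt(20, False)) / 20
print(round(present_slope), round(absent_slope))
```

The simulated 1:2 slope ratio is exactly the pattern of positive and negative search functions described in the text.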
If the target was present in the display, it would be found on average halfway through. It looks like the kind of pattern you would get if you were focusing attention on each item in turn and stopping when you found the target. I should mention at this point that Nakayama (1988) has found some versions of search for conjunction targets that give faster search latencies than the ones I have reported, although none of them are completely flat. If the features whose conjunction defines the targets are highly discriminable, search can be considerably faster than 60 ms per item. I have confirmed that there are, in fact, clear differences in difficulty between different conjunctions of the same four dimensions. To test this, I presented displays containing bars in highly discriminable colors (pink and green), highly discriminable orientations (45 degrees left and right), moving in highly discriminable directions (up-down oscillation versus left-right), and in highly discriminable sizes (ratio of 1.8 to 1). Figure 7 presents the search latencies. Conjunctions of color and size are found very quickly, whereas

conjunctions of motion and orientation are quite slow; the other conjunctions are intermediate between them. What is intriguing is that these findings do not seem to link very closely to what is known so far about the physiological and anatomical segregation. Many single units respond

FIGURE 5 (a) A locally unique item is hard to find when items elsewhere share its locally unique property. (b) and (c) When the property is not present elsewhere in the display, the targets become salient.

to combinations of size or spatial frequency or motion with orientation, whereas color and motion seem to be segregated into different pathways. Yet color-motion conjunctions are relatively easy to find, and conjunctions with orientation are difficult.

FIGURE 6 Search times for a conjunction target (a green T among green H's and brown T's). Both functions increase linearly with the number of items in the display, and the slope for the positives (target present) is about half the slope for the negative trials (target absent).

What seems to happen, according to both Ken Nakayama and me, is that subjects get very good segregation between the two sets of distractors when their features are as discriminable as these. It seems possible to attend, for example, to the items that are moving up and down, even


FIGURE 7 Search times for each conjunction of color, size, motion, and orientation and for each feature on its own. M = motion; C = color; S = size; O = orientation.

though they are interspersed with items moving left and right. Take, for example, a display containing a green target moving up and down among green distractors moving left and right and red distractors moving up and down. Perhaps subjects can reject all distractors that are moving left and right (for example) without conjoining their features. Any remaining green item must be moving up and down and must therefore be the target.

ROLE OF ATTENTION

To get some further evidence for the idea that attention is involved in conjoining features, we have tried a number of different tasks. Perhaps the most dramatic result came when we prevented subjects from focusing attention on each item in turn (Treisman and Schmidt, 1982). We showed them brief displays with more items than they could attend to. For example, the display shown in Figure 8 might be flashed up briefly (for about 200 msec) and the subjects would be asked to report first the two digits and then any colored letters they had seen, giving both the color and the letter for each item whenever possible. Their responses included a large number of illusory conjunctions, as I call them. That is, the subject put together a color and a shape in the wrong combination, for example a green T in Figure 8. They reported illusory conjunctions on about one-third of trials, which is nearly as often as they reported correct conjunctions. So, when subjects are forced to divide attention (in this case to make sure they would get the digits correct), they seem unable to conjoin the shapes and the colors correctly. In further experiments, we obtained similar illusory recombinations with parts of shapes (Treisman and Paterson, 1984). For example, when we showed displays like those in Figure 9 and asked subjects to look for a dollar sign, they frequently reported illusory dollar signs in displays in which none was present.
The illusory targets resulted from combining the diagonal lines with S's when both were present, since far fewer were reported when only the S's or the lines were present on their own. Surprisingly, subjects saw as many illusory dollar signs with the triangle displays (Figure 9c) as with the displays with separate lines (Figure 9b). This suggests that at the preattentive level, the triangle is analyzed into three separate lines. Unless these lines can receive focused attention, they seem to be free to recombine with the S's to form illusory dollar signs. An interesting finding was recently reported by Kolinsky. When she tested young children with displays of this kind, the children also saw illusory dollar signs with the separate line displays, but they did not with the triangles. Perhaps young children perceive more holistically and do not separately detect each line of the triangles at the preattentive level.
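The recombination idea can be mimicked by a tiny simulation (an added toy model, not the actual experiment; the display contents are invented): if color and shape are read out independently when attention is overloaded, combinations that were never displayed arise on a large fraction of reports.

```python
import random

# Display items as (color, shape) pairs; note that no green T is shown.
display = [("red", "T"), ("green", "X"), ("blue", "O")]

def report_without_binding():
    # With attention overloaded, assume color and shape are sampled
    # independently rather than read off the same item.
    color = random.choice(display)[0]
    shape = random.choice(display)[1]
    return color, shape

trials = [report_without_binding() for _ in range(30000)]
illusory = sum(1 for r in trials if r not in display) / len(trials)
print(round(illusory, 2))
```

With three items, only three of the nine possible color-shape pairings were actually shown, so roughly two-thirds of unbound reports come out illusory; that the observed rate was about one-third presumably reflects attention succeeding in binding features on many trials.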

FIGURE 8 Example of a display that gave rise to illusory conjunctions. The filled area represents blue, the white area red, and the dotted area green.

Are there any constraints on illusory conjunctions, in terms of the overall similarity of the items? There seemed to be absolutely no effect of similarity in my experiments. Subjects were just as willing to take the blue color from a small outline circle and use it to fill a large triangle as they were to take it from another large triangle. This, again, seems to be quite strong evidence for modularity, in the sense that the presence of the color is separable from its spatial layout. Without attention, apparently we code the separate features, such as blue, triangle, outline, but not their interrelations.

To recap so far, I have suggested that early vision simply registers the presence of separate features in the scene. It does so within a number of separate modules that can be related to each other only once we focus attention on them. This locates the features that we are currently attending to and ties them together through their shared location. The evidence suggests at least a functional separation between a set of color maps, a set of orientation maps, a set of maps for directions of motion, and so on. When we are involved in a visual search task with a target defined by a single feature, we can simply check: Is there activity in the red map? Is there activity in the horizontal map? We can then respond regardless of what is going on in all the other maps.

The diagram in Figure 10 outlines in functional terms what might happen when attention is focused on a particular location. We suggest

that attention selects particular stimuli through a kind of master map of locations to which the different feature maps in separate modules are all connected. Attention retrieves information about the different features present in a particular restricted area of the field. When attention is focused on a particular location, it pulls out the features, for example, “red” and “horizontal,” that are currently present in that same location. In this way, the attended color and orientation are conjoined to form a single unitary perceptual object.

FIGURE 9 Examples of displays used to demonstrate illusory conjunctions of parts of shapes. (a) Display containing a real target (dollar sign). (b) and (c) Displays that gave rise to approximately equal numbers of illusory dollar signs.

If attention is divided over the whole area, we can know from the separate feature maps which features are present, but not how they are spatially related to each other.
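The master-map account might be caricatured in code as follows (an added sketch under the assumptions just stated, not an implementation of the theory; the locations and feature values are invented): each module's map records what is where, focused attention reads every module at one master-map location and binds the results, and divided attention yields only pooled, unlocated feature sets.

```python
# Feature maps, one per module, keyed by master-map location.
color_map = {(0, 0): "red", (1, 0): "green", (2, 1): "green"}
orientation_map = {(0, 0): "horizontal", (1, 0): "vertical", (2, 1): "horizontal"}
modules = {"color": color_map, "orientation": orientation_map}

def attend(location):
    """Focused attention: read every module at one location, conjoining
    the features found there into a single object description."""
    return {dim: fmap[location] for dim, fmap in modules.items() if location in fmap}

def divided_attention():
    """Divided attention: pooled 'is this feature present anywhere?'
    signals, with no record of which features share a location."""
    return {dim: set(fmap.values()) for dim, fmap in modules.items()}

print(attend((0, 0)))       # {'color': 'red', 'orientation': 'horizontal'}
print(divided_attention())  # features present, but pairings and places lost
```

Only `attend` ever pairs a color with an orientation, which is the sense in which binding here depends on selecting a location.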

FIGURE 10 Schematic framework to explain the results described.

The hypothesis seemed a little far-fetched, and we felt it would certainly be nice to get more evidence to support it. We therefore devised a couple more experiments, in which we tried to test some further predictions. In one study we asked: Is it possible to detect which feature is present without knowing where it is? It should be, if the model I outlined is correct. When presented with brief displays of multiple objects, subjects should be able to check the map for "red" and see whether there is activity there, without necessarily linking it to any particular location in the master map of locations. In the other experiment, we tested the prediction that the presence of a feature could be detected when its absence could not. I will come back to that experiment in a moment.

FIGURE 11 (a) Example of display used to investigate the dependence of feature identification on correct localization. (b) Same for conjunction identification.

"WHAT WITHOUT WHERE"

We did an experiment in which we asked subjects both to identify a target and to say where it was. We flashed up a display of red O's and blue X's like that in Figure 11a (Treisman and Gelade, 1980). The subject's task was to report whether there was an orange letter or an H. Each of those targets is defined by a unique feature. We were interested to see whether subjects sometimes got the identity correct when they got the location wrong. Is it possible to know "what" without knowing "where"? We measured the conditional probability of getting the identity correct, given that the location was wrong, and found that it was quite high. On around 70 percent of the trials in which the subjects mislocated the target by more than one square in the matrix, they were nevertheless correct in choosing whether it was orange or an H.

In another condition (Figure 11b) we replaced the "orange or H" feature targets by two conjunction targets. Subjects had to do the same two tasks: decide both the identity of the target and also its location. They were asked: Was there a red X or a blue O, and where was it in the display? In this case, we found that if subjects got the location wrong, they were at chance on the identity of the target. The theory claims that to identify a conjunction target, you must attend to it, and therefore you will know where it is, because attention is spatially controlled. So that was


one piece of supporting evidence: it seems that we can identify features without necessarily locating them, but we cannot conjoin them correctly without also knowing where they are. When attention is overloaded, we seem to have some free-floating feature information for which the location is indeterminate. We can know, “Yes, there is orange there, but I do not know where.” Obviously, if the display remains present for long, the subject will home in on the target very quickly; but our results suggest that it is possible to cut off processing at a time at which the subject knows what the target is but not where it is.

THE ABSENCE OF A FEATURE

If the story is correct, then there should be other tasks besides search for conjunction targets that require attention. An interesting one is search for a target defined by the absence of a feature, when that feature is present in all the distractors. The pop-out strategy should not work here if it in fact depends on detecting activity in a feature map that is unique to the target. Suppose that we look in Figure 12a for the one circle that does not have an intersecting line. We cannot check a map for any of its features—vertical or straight or intersecting—because each of these feature maps would be swamped with activity. All the background items have the line, and the target is the only one that does not. However, when we look for the only circle that does have an intersecting line, as in Figure 12b, we can presumably just check the map for vertical (or whatever feature defines the line), and we will find it automatically. This is exactly what the results suggest (Treisman and Souther, 1985). Search for the circle without the line gives fairly steep, linearly increasing functions, which suggest serial scanning. Search for the circle with the line gives flat functions with no effect of the number of background items.
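The contrast between search for presence and search for absence can be sketched as a toy model. Everything here is an illustrative assumption rather than the original analysis: items are represented as sets of feature names, checking a feature map counts as one step, and the absence search is a serial, self-terminating scan.

```python
# Toy sketch of the feature-map account of search. The set-based item
# representation and the unit inspection costs are illustrative
# assumptions, not part of the original studies.

def items_inspected(display, target_feature, target_has_feature):
    """Return how many items must be examined to find the target.

    If the target is defined by the PRESENCE of target_feature, a single
    parallel check of that feature map suffices (pop-out), whatever the
    display size. If it is defined by the ABSENCE of target_feature, the
    map is swamped by the distractors, so items are scanned serially
    until the one lacking the feature turns up.
    """
    if target_has_feature:
        return 1  # one check of the unique feature map, any display size
    for i, item in enumerate(display, start=1):  # serial, self-terminating
        if target_feature not in item:
            return i
    raise ValueError("no target in display")

# Presence condition: plain-circle distractors, one circle with a line.
pres_small = [{"circle"}] * 3 + [{"circle", "line"}]
pres_large = [{"circle"}] * 11 + [{"circle", "line"}]
# Absence condition: all distractors have the line, the target does not.
abs_small = [{"circle", "line"}] * 3 + [{"circle"}]
abs_large = [{"circle", "line"}] * 11 + [{"circle"}]

# Flat function for presence, increasing function for absence.
print(items_inspected(pres_small, "line", True),
      items_inspected(pres_large, "line", True))   # 1 1
print(items_inspected(abs_small, "line", False),
      items_inspected(abs_large, "line", False))   # 4 12
```

The flat versus linearly increasing step counts mirror the flat versus steep search-latency functions described above.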
So there does seem to be a difference between “search for presence” and “search for absence.” This finding is surprising because exactly the same discrimination is involved in the two tasks. We test the same pair of stimuli; it is just that one plays the role of target in one case and of distractor in the other.

FEATURE ANALYSIS AND THE ASYMMETRY OF CODING

If I am right that search is parallel when the target is signalled by activity in the relevant map for a feature that is unique to the target, this might give us a diagnostic to discover what other features are analyzed early in the visual system. We cannot assume that the brain analyzes visual displays in the same way as physicists might. Perceptual properties might not map directly and simply onto physical properties. We need some empirical evidence to tell us what features function as natural elements or


FIGURE 12 Search for the presence and search for the absence of a feature. (a) The target is the only circle with an intersecting line. (b) The target is the only circle without an intersecting line. (c) Mean search times for these two types of targets.


“primitives” in the language of early vision. We used the search task to look for possible asymmetries in the coding of a number of other simple properties (Treisman and Gormican, 1988). For example, we asked subjects to find a curved line amongst straight lines or a straight line amongst curved lines and looked to see whether there was any asymmetry in the difficulty of the two tasks. What could this tell us? Suppose straightness is a primitive feature, detected early in visual processing. Then its presence in the target should mediate pop-out; it should be detected in parallel, just like the added line was among circles without lines. Similarly, if curvature is a primitive feature, a single curved target line should pop out of a display of straight lines. Its presence would be signalled by the presence of activity in the map for curvature. It might also be the case that only one of these two features is coded positively, as the presence of activity, while the other is coded simply as the absence of its opposite. In fact, we found a very large asymmetry that was clearest when the lines and curves were least discriminable (see Figure 13a). The asymmetry suggests that the curved line functions as a feature in the visual system, while the straight line does not. It is as if we code curvature as the presence of something, and we code straightness by default, as the absence of curvature. If we take seriously the analogy to the circle and line experiment, curvature may be coded as the addition of a feature; a curved line, then, would be represented as a line, plus a deviation from the standard or reference value of straightness, just as the circle with an intersecting line could be represented as a basic circle with an added feature. We looked next at some other features of simple lines, for instance, orientation. Is there any asymmetry there? We can ask subjects to look for a tilted line amongst vertical lines or a vertical line amongst tilted lines. 
Again, we found a large asymmetry: this time it was the tilted line that was easy to find against a background of vertical lines, and gave flat functions relating latency of search to number of distractor lines. When the target was a vertical line on a background of tilted lines, search was slower and latencies increased with the number of distractor lines. Again, by analogy with the circles and lines, we might infer that the tilted line is coded as the presence of an added feature—perhaps tilt—and the vertical is coded simply as the standard orientation with no added deviation. Even colors seem to show a similar pattern. Colors tend to give flat search functions unless the target and the distractor are very similar and hard to discriminate, but we did find some asymmetry in search even here. We looked at search for deviating colors like magenta and lime and turquoise against standard colors like red, green, and blue, and found faster, more parallel search than with the reverse arrangement. The colors that were harder to find as targets were the “good” colors, the red, the


FIGURE 13 Search asymmetries in displays with different target and distractor stimuli. In each case, the latencies increase much more steeply with the number of items when the more standard or reference stimulus is target than when the non-standard or deviating stimulus is target.


green and the blue, and the ones that were easier to find were the deviating colors, magenta, lime, and turquoise. The same asymmetries recur with some other properties: for instance, converging lines against parallel lines. A pair of converging lines pops out, while a pair of parallel lines in a background of converging lines is found more slowly. Similarly, a circle with a gap pops out of a display of complete circles, but not the reverse. The results of these search tasks are shown in Figure 13.

We seem to have stumbled on quite a general principle of perceptual coding. Perhaps we can generalize and say that the visual system is tuned to signal departures from a normal or standard value. If this is correct, we may be able to use it to explore some even less obvious cases, such as the perceptual coding of “inside” versus “outside.” Would a dot inside a closed shape be easier or harder to find than a dot outside a shape? It turns out that inside is harder to find, suggesting that this is the standard and outside is the deviating value. The asymmetry of coding appears to be quite pervasive and may prove a useful tool to throw light on the nature of the features extracted by the visual system at the early preattentive levels.

The experiments I have described so far all tested stimuli defined by luminance contrasts. It may be of interest to ask whether the same principles of coding would also extend to other media. How general and abstract is the analysis? Patrick Cavanagh (1987) has been exploring the properties of shapes defined by other kinds of boundaries; for example, color boundaries at isoluminance, texture boundaries defined by motion, or by the size of the texture elements, or by stereoscopic depth. He and I have recently looked at search performance when the stimuli (bars or discs) are defined by discontinuities in these other media.
For example, we can create vertical or tilted bars from stationary random-dot textures against otherwise identical moving backgrounds. We can then ask subjects to look for a target bar that is tilted among vertical distractor bars, or for a vertical target bar among tilted distractors. We find results that are very similar to those obtained with bars defined by luminance (i.e., darker or lighter than the background). The same pop-out for a tilted target and serial search for a vertical target appears with bars created by color, or motion, or texture, or stereoscopic disparity. The coding language used by the visual system seems to be quite general across these different channels or media.

PERCEPTION OF OBJECTS

My speculations at present are that vision initially forms spatially parallel maps in functionally separate, specialized modules. These modules signal the presence of positively coded features that code deviations from a standard or a norm. In order to access their locations, or to specify that


they are not present in any particular stimulus, or to tie them correctly to other features of the same object, we have to focus attention serially on each location in turn. The currently attended features can then be selected and entered into some temporary representation of the attended object. Once the features are assembled, their conjunction can be compared to memories, to stored descriptions in a long-term recognition network, and the appropriate identification can be made. Other research (Treisman, 1988) suggests that anomalous conjunctions that we might otherwise make in everyday life get weeded out at this comparison stage and not before. Top-down constraints from expectations and prior knowledge seem not to influence which features are entered into each object representation; the only constraints at this level appear to come from spatial attention. Thus subjects who were expecting to see a carrot, for example, were no more likely to recombine the orange from another object with the shape of a blue carrot than they were to imagine its orange color when no other orange object was present in the display.

These temporary object representations may also be important in maintaining the perceptual continuity of objects as they move and change. Once a set of features is conjoined and a perceptual unit is established, it can be updated as the object moves or changes. In some recent experiments with Daniel Kahneman and Brian Gibbs, we have found evidence that new stimulus information gets integrated with the previously perceived object that is best linked to it by spatio-temporal continuity. For example, a letter is named faster if the same letter was previously presented within the same outline shape, even when the shape has moved to a new location in the interval between the two letters (Figure 14).
The naming latency is unaffected if the same letter had appeared in a different outline shape, even though the time interval and the distance between the pairs of letters were equated. When the matching letter appeared in the same shape as the first, the motion of the frame was sufficient to link the two letters as parts of the same continuous object. If the features of an object change, we simply update the temporary representation. The perceptual unity and continuity of the object are maintained so long as the spatio-temporal parameters are consistent with the continued presence of a single object. If we were ever to see a frog turn into a fairy-tale prince, we would perceive it as a single character transformed, just one perceptual entity, even though everything about it has changed—its properties, its identity, its label, and so on. That continuity, we suggest, would be mediated by a single object representation.

If my story is correct, we may have no introspective access to the earlier stages of processing. These object-specific representations may be the basis of conscious experience. In fact, they would be our subjective windows into the mind.
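The temporary object representations described above can be sketched as a simple data structure. This is a speculative illustration, not the authors' model: the class, its fields, and the latency numbers are all invented, with the preview benefit loosely inspired by the roughly 30-millisecond effect reported for Figure 14.

```python
# Speculative sketch of a temporary object representation ("object
# file") that is indexed by spatio-temporal continuity and updated in
# place as the object moves or changes. Class, fields, and latency
# values are invented for illustration.

class ObjectFile:
    def __init__(self, location, features):
        self.location = location
        self.features = dict(features)

    def move(self, new_location):
        # Motion updates the same file; continuity is carried by the
        # file itself, not by any particular feature staying constant.
        self.location = new_location

    def update(self, new_features):
        # Feature changes are folded into the existing representation
        # (the frog-to-prince case: one entity, new properties).
        self.features.update(new_features)

BASE_NAMING_MS = 530     # hypothetical baseline naming latency
PREVIEW_BENEFIT_MS = 30  # hypothetical benefit for a matching file

def naming_latency(letter, obj_file):
    """Naming is faster when the letter matches the file it is linked to."""
    if obj_file.features.get("letter") == letter:
        return BASE_NAMING_MS - PREVIEW_BENEFIT_MS
    return BASE_NAMING_MS

# A frame showing "N" moves to a new location; the reappearing "N" is
# linked to the same (moved) file and so is named faster.
frame = ObjectFile(location=(0, 0), features={"letter": "N"})
frame.move((5, 0))
print(naming_latency("N", frame))  # 500: same object, preview benefit
print(naming_latency("P", frame))  # 530: no match, no benefit
```

The point of the sketch is only that the benefit follows the file, which moved, rather than the original location, which did not.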


FIGURE 14 Example of displays used to demonstrate the integration of information in object-specific representations. (a) The two squares appear first; two letters are briefly flashed in the squares, which then move (empty) to two new locations. (b) A single letter then appears in one of the squares, and subjects are asked to name it as quickly as possible. In this example, the latency would be about 30 milliseconds shorter than it would have been if the letter N had appeared in the left-hand square in the second display.


REFERENCES

Beck, J. 1966 Effects of orientation and of shape similarity on perceptual grouping. Perception and Psychophysics 1:300-302.

Cavanagh, P. 1987 Reconstructing the third dimension: Interactions between color, texture, motion, binocular disparity, and shape. Computer Vision, Graphics and Image Processing 37:171-195.

Livingstone, M.S., and D.H. Hubel 1987 Psychophysical evidence for separate channels for the perception of form, color, movement and depth. Journal of Neuroscience 7:3416-3468.

Nakayama, K. 1988 The iconic bottleneck and the tenuous link between early visual processing and perception. In C. Blakemore, ed., Vision: Coding and Efficiency. New York: Cambridge University Press.

Treisman, A. 1982 Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance 8:194-214.

Treisman, A. 1985 Preattentive processing in vision. Computer Vision, Graphics, and Image Processing 31:156-177.

Treisman, A. 1988 Features and objects: The fourteenth Bartlett Memorial Lecture. Quarterly Journal of Experimental Psychology 40A:201-237.

Treisman, A., and G. Gelade 1980 A feature-integration theory of attention. Cognitive Psychology 12:97-136.

Treisman, A., and S. Gormican 1988 Feature analysis in early vision: Evidence from search asymmetries. Psychological Review 95(1):15-48.

Treisman, A., and R. Paterson 1984 Emergent features, attention and object perception. Journal of Experimental Psychology: Human Perception and Performance 10:12-31.

Treisman, A., and H. Schmidt 1982 Illusory conjunctions in the perception of objects. Cognitive Psychology 14:107-141.

Treisman, A., and J. Souther 1985 Search asymmetry: A diagnostic for preattentive processing of separable features. Journal of Experimental Psychology: General 114:285-310.

Van Essen, D.C. 1985 Functional organization of primate visual cortex. In A. Peters and E.G. Jones, eds., Cerebral Cortex. Vol. 3, Visual Cortex. New York: Plenum Press.