Neurocomputing: Foundations of Research (ISBN 9780262267137)


English Pages [760] Year 1988




Table of contents:
General Introduction

1. (1890) William James, Psychology (Briefer Course), New York: Holt, Chapter XVI, "Association," pp. 253-279.

2. (1943) Warren S. McCulloch and Walter Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics 5:115-133.

3. (1947) Walter Pitts and Warren S. McCulloch, "How we know universals: the perception of auditory and visual forms," Bulletin of Mathematical Biophysics 9:127-147.

4. (1949) Donald O. Hebb, The Organization of Behavior, New York: Wiley, Introduction and Chapter 4, "The first stage of perception: growth of the assembly," pp. xi-xix, 60-78.

5. (1950) K. S. Lashley, "In search of the engram," Society for Experimental Biology Symposium No. 4: Physiological Mechanisms in Animal Behaviour, Cambridge: Cambridge University Press, pp. 454-455, 468-473, 477-480.

6. (1956) N. Rochester, J. H. Holland, L. H. Haibt, and W. L. Duda, "Tests on a cell assembly theory of the action of the brain, using a large digital computer," IRE Transactions on Information Theory IT-2:80-93.

7. (1958) John von Neumann, The Computer and the Brain, New Haven: Yale University Press, pp. 66-82.

8. (1958) F. Rosenblatt, "The perceptron: a probabilistic model for information storage and organization in the brain," Psychological Review, 65:386-408.

9. (1958) O. G. Selfridge, "Pandemonium: a paradigm for learning," Mechanisation of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory, November 1958, London: HMSO, pp. 513-526.

10. (1960) Bernard Widrow and Marcian E. Hoff, "Adaptive switching circuits," 1960 IRE WESCON Convention Record, New York: IRE, pp. 96-104.

11. (1962) H. D. Block, "The Perceptron: a model for brain functioning. I," Reviews of Modern Physics 34:123-135.

12. (1969) D. J. Willshaw, O. P. Buneman, and H. C. Longuet-Higgins, "Non-holographic associative memory," Nature 222:960-962.

13. (1969) Marvin Minsky and Seymour Papert, Perceptrons, Cambridge, MA: MIT Press, Introduction, pp. 1-20, and p. 73 (figure 5.1).

14. (1972) Teuvo Kohonen, "Correlation matrix memories," IEEE Transactions on Computers C-21:353-359.

15. (1972) James A. Anderson, "A simple neural network generating an interactive memory," Mathematical Biosciences 14:197-220.

16. (1973) L. N. Cooper, "A possible organization of animal memory and learning," Proceedings of the Nobel Symposium on Collective Properties of Physical Systems, B. Lundqvist and S. Lundqvist (Eds.), New York: Academic Press, pp. 252-264.

17. (1973) Chr. von der Malsburg, "Self-organization of orientation sensitive cells in the striate cortex," Kybernetik 14:85-100.

18. (1975) W. A. Little and Gordon L. Shaw, "A statistical theory of short and long term memory," Behavioral Biology 14:115-133.

19. (1976) S. Grossberg, "Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors," Biological Cybernetics 23:121-134.

20. (1976) D. Marr and T. Poggio, "Cooperative computation of stereo disparity," Science 194:283-287.

21. (1977) S.-I. Amari, "Neural theory of association and concept-formation," Biological Cybernetics 26:175-185.

22. (1977) James A. Anderson, Jack W. Silverstein, Stephen A. Ritz, and Randall S. Jones, "Distinctive features, categorical perception, and probability learning: some applications of a neural model," Psychological Review 84:413-451.

23. (1978) Scott E. Brodie, Bruce W. Knight, and Floyd Ratliff, "The response of the Limulus retina to moving stimuli: a prediction by Fourier synthesis," Journal of General Physiology 72:129-154, 162-166.

24. (1980) Stephen Grossberg, "How does a brain build a cognitive code?" Psychological Review 87:1-51.

25. (1981) James L. McClelland and David E. Rumelhart, "An interactive activation model of context effects in letter perception: part 1. An account of basic findings," Psychological Review, 88:375-407.

26. (1982) Elie L. Bienenstock, Leon N. Cooper, and Paul W. Munro, "Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex," Journal of Neuroscience 2:32-48.

27. (1982) J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences 79:2554-2558.

28. (1982) David Marr, Vision, San Francisco: W. H. Freeman, pp. 19-38, 54-61.

29. (1982) J. A. Feldman and D. H. Ballard, "Connectionist models and their properties," Cognitive Science 6:205-254.

30. (1982) Teuvo Kohonen, "Self-organized formation of topologically correct feature maps," Biological Cybernetics 43:59-69.

31. (1983) Kunihiko Fukushima, Sei Miyake, and Takayuki Ito, "Neocognitron: a neural network model for a mechanism of visual pattern recognition," IEEE Transactions on Systems, Man, and Cybernetics SMC-13:826-834.

32. (1983) Andrew G. Barto, Richard S. Sutton, and Charles W. Anderson, "Neuronlike adaptive elements that can solve difficult learning control problems," IEEE Transactions on Systems, Man, and Cybernetics SMC-13:834-846.

33. (1983) S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science 220:671-680.

34. (1984) Francis Crick, "Function of the thalamic reticular complex: the searchlight hypothesis," Proceedings of the National Academy of Sciences 81:4586-4590.

35. (1984) J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences 81:3088-3092.

36. (1984) Andrew G. Knapp and James A. Anderson, "Theory of categorization based on distributed memory storage," Journal of Experimental Psychology: Learning, Memory, and Cognition 10:616-637.

37. (1984) Stuart Geman and Donald Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-6:721-741.

38. (1985) David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski, "A learning algorithm for Boltzmann machines," Cognitive Science 9:147-169.

39. (1985) Nabil H. Farhat, Demetri Psaltis, Aluizio Prata, and Eung Paek, "Optical implementation of the Hopfield model," Applied Optics 24:1469-1475.

40. (1986) Terrence J. Sejnowski and Charles R. Rosenberg, "NETtalk: a parallel network that learns to read aloud," The Johns Hopkins University Electrical Engineering and Computer Science Technical Report JHU/EECS-86/01, 32 pp.
41. (1986) D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I, D. E. Rumelhart and J. L. McClelland (Eds.), Cambridge, MA: MIT Press, pp. 318-362.

42. (1986) David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, "Learning representations by back-propagating errors," Nature 323:533-536.

43. (1987) Massimo A. Sivilotti, Michelle A. Mahowald, and Carver A. Mead, "Real-time visual computations using analog CMOS processing arrays," Advanced Research in VLSI: Proceedings of the 1987 Stanford Conference, P. Losleben (Ed.), Cambridge, MA: MIT Press, pp. 295-312.

Afterword

Citation preview


Neurocomputing

Neurocomputing Foundations of Research

Edited by James A. Anderson and Edward Rosenfeld

The MIT Press Cambridge, Massachusetts London, England

⋯ P2, ⋯, Pm})     (13)

Pm+1(z1) . ≡ . Pr(P1, P2, ⋯, Pm, z1) is true; and it is a TPE not involving 'S' if and only if this holds when 'Pr' is replaced by a TPE; and we then obtain

Theorem IX. A series of classes α1, α2, ⋯, αs is a series of prehensible classes if and only if

(Em)(En)(p)n(i)(ψ) : (x)m ψ(x) = 0 ∨ ψ(x) = 1 : → : (Eβ) ⋯ y = ψ(xi) : (t)(φ) : φ ε αi.

On account of limitations of space, we have presented the above argument very sketchily; we propose to expand it and certain of its implications in a further publication. The condition of the last theorem is fairly simple in principle, though not in detail; its application to practical cases would, however, require the exploration of some 2^(2^s) classes of functions, namely the members of the class determined by {α1, ⋯, αs}. Since each of these is a possible β of Theorem IX, this result cannot be sharpened. But we may obtain a sufficient condition for the realizability of an S which is very easily applicable and probably covers most practical purposes. This is given by

Theorem X. Let us define a set K of S by the following recursion:

1. Any TPE and any TPE whose arguments have been replaced by members of K belong to K;

(Ey)m.^(y) = 0. (}&, nt + p). ->. (£/) -f s P. (w)m(x)t — 1. (n(t + 1) + p,nx + p,w) — f(nt + p,nx + p, w).

The proof here follows directly from the lemma. The condition is necessary, since every net for which an expression of the form (4) can be written obviously verifies it, the ψ's being the characteristic functions of the Sα and the β for each ψ being the class whose designation has the form Πi∈α Pri Πj∈β Prj, where Prk denotes αk for all k. Conversely, we may write an expression of the form (4) for a net fulfilling prehensible classes satisfying (14) by putting for the Prα Pr's denoting the ψ's, and a Pr, written in the analogue for classes of the disjunctive normal form, and denoting the α corresponding to that ψ conjoined to it. Since every S of the form (4) is clearly realizable, we have the theorem.

It is of some interest to consider the extent to which we can by knowledge of the present determine the whole past of various special nets: i.e., when we may construct a net the firing of the cyclic set of whose neurons requires the peripheral afferents to have had a set of past values specified by given functions φi. In this case the classes αi of the last theorem reduce to

2. If Pr1(z1) is a member of K, then (z2)z1 . Pr1(z2), (Ez2)z1 . Pr1(z2), and Cmn(z1) belong to it, where Cmn denotes the property of being congruent to m modulo n, m < n.

3. The set K has no further members.

Then every member of K is realizable. For, if Pr1(z1) is realizable, nervous nets for which

N1(z1) . ≡ . Pr1(z1) . SN1(z1)

N1(z1) . ≡ . Pr1(z1) ∨ SN1(z1)

are the expressions of equation (4), realize (z2)z1 . Pr1(z2) and (Ez2)z1 . Pr1(z2) respectively; and a simple circuit, c1, c2, ⋯, cn, of n links, each sufficient to excite the next, gives an expression

for the last form. By induction we derive the theorem.

One more thing is to be remarked in conclusion. It is easily shown: first, that every net, if furnished with a tape, scanners connected to afferents, and suitable efferents to perform the necessary motor-operations, can compute only such numbers as can a Turing machine; second, that each of the latter numbers can be computed by such a net; and that nets with circles can compute, without scanners and a tape, some of the numbers the machine can, but no others, and not all of them. This is of interest as affording a psychological justification of the Turing definition of computability and its equivalents, Church's λ-definability and Kleene's primitive recursiveness: If any number can be computed by an organism, it is computable by these definitions, and conversely.

IV. Consequences

Causality, which requires description of states and a law of necessary connection relating them, has appeared in several forms in several sciences, but never, except in statistics, has it been as irreciprocal as in this theory. Specification for any one time of afferent stimulation and of the activity of all constituent neurons, each an "all-or-none" affair, determines the state. Specification of the nervous net provides the law of necessary connection whereby one can compute from the description of any state that of the succeeding state, but the inclusion of disjunctive relations prevents complete determination of the one before. Moreover, the regenerative activity of constituent circles renders reference indefinite as to time past. Thus our knowledge of the world, including ourselves, is incomplete as to space and indefinite as to time. This ignorance, implicit in all our brains, is the counterpart of the abstraction which renders our knowledge useful. The role of brains in determining the epistemic relations of our theories to our observations and of these to the facts is all too clear, for it is apparent that every idea and every sensation is realized by activity within that net, and by no such activity are the actual afferents fully determined. There is no theory we may hold and no observation we can make that will retain so much as its old defective reference to the facts if the net be altered. Tinnitus, paraesthesias, hallucinations, delusions, confusions and disorientations intervene. Thus empiry confirms that if our nets are undefined, our facts are undefined, and to the "real" we can attribute not so much as one quality or "form." With determination of the net, the unknowable object of knowledge, the "thing in itself," ceases to be unknowable. To psychology, however defined, specification of the net would contribute all that could be achieved in that field—even if the analysis were pushed to ultimate psychic units or "psychons," for a psychon can be no less than the activity of a single neuron. Since that activity is inherently propositional, all psychic events

have an intentional, or "semiotic," character. The "all-or-none" law of these activities, and the conformity of their relations to those of the logic of propositions, insure that the relations of psychons are those of the two-valued logic of propositions. Thus in psychology, introspective, behavioristic or physiological, the fundamental relations are those of two-valued logic.

Expression for the Figures

In the figure the neuron ci is always marked with the numeral i upon the body of the cell, and the corresponding action is denoted by 'Ni' with i as subscript, as in the text.

Figure 1a: N2(t) . ≡ . N1(t − 1)
Figure 1b: N3(t) . ≡ . N1(t − 1) ∨ N2(t − 1)
Figure 1c: N3(t) . ≡ . N1(t − 1) . N2(t − 1)
Figure 1d: N3(t) . ≡ . N1(t − 1) . ∼N2(t − 1)
Figure 1e: N3(t) : ≡ : N1(t − 1) . ∨ . N2(t − 3) . ∼N2(t − 2)
           N4(t) . ≡ . N2(t − 2) . N2(t − 1)
Figure 1f: N4(t) : ≡ : ∼N1(t − 1) . N2(t − 1) ∨ N3(t − 1) . ∨ . N1(t − 1) . N2(t − 1) . N3(t − 1)
           N4(t) : ≡ : ∼N1(t − 2) . N2(t − 2) ∨ N3(t − 2) . ∨ . N1(t − 2) . N2(t − 2) . N3(t − 2)
Figure 1g: N3(t) . ≡ . N2(t − 2) . ∼N1(t − 3)
Figure 1h: N2(t) . ≡ . N1(t − 1) . N1(t − 2)
Figure 1i: N3(t) : ≡ : N2(t − 1) . ∨ . N1(t − 1) . (Ex)t − 1 . N1(x) . N2(x)
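To show how such temporal propositional expressions can be checked mechanically, the following is a minimal modern sketch in Python (not part of the original paper). It evaluates the two expressions given above for Figure 1e, the net reproducing the illusion that a cold object touched only briefly is felt as heat; the firing histories N1 (heat receptor) and N2 (cold receptor) are illustrative assumptions, not values from the paper.

    # Evaluate the Figure 1e expressions as Boolean functions of firing histories.
    # N3(t) :=: N1(t-1) .v. N2(t-3) . ~N2(t-2)    ("heat is felt")
    # N4(t) .=. N2(t-2) . N2(t-1)                 ("cold is felt")

    def heat_felt(N1, N2, t):
        return N1[t - 1] or (N2[t - 3] and not N2[t - 2])

    def cold_felt(N2, t):
        return N2[t - 2] and N2[t - 1]

    N1 = [False] * 6                                   # heat receptor silent throughout
    N2 = [False, True, False, False, False, False]     # brief cold stimulus at t = 1

    for t in range(3, 6):
        print(t, "heat felt:", heat_felt(N1, N2, t), "cold felt:", cold_felt(N2, t))

    # Only t = 4 reports "heat felt": the transient cold stimulus is experienced as heat,
    # whereas a sustained cold stimulus (two successive firings of N2) would be felt as cold.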

Hence arise constructional solutions of holistic problems involving the differentiated continuum of sense awareness and the normative, perfective and resolvent properties of perception and execution. From the irreciprocity of causality it follows that even if the net be known, though we may predict future from present activities, we can deduce neither afferent from central, nor central from efferent, nor past from present activities—conclusions which are reinforced by the contradictory testimony of eye-witnesses, by the difficulty of diagnosing differentially the organically diseased, the hysteric and the malingerer, and by comparing one's own memories or recollections with his contemporaneous records. Moreover, systems which so respond to the difference between afferents to a regenerative net and certain activity within that net, as to reduce the difference, exhibit purposive behavior; and organisms are known to possess many such systems, subserving homeostasis, appetition and attention. Thus both the formal and the final aspects of that activity which we are wont to call mental are rigorously deduceable from present neurophysiology.


The psychiatrist may take comfort from the obvious conclusion concerning causality—that, for prognosis, history is never necessary. He can take little from the equally valid conclusion that his observables are explicable only in terms of nervous activities which, until recently, have been beyond his ken. The crux of this ignorance is that inference from any sample of overt behavior to nervous nets is not unique, whereas, of imaginable nets, only one in fact exists, and may, at any moment, exhibit some unpredictable activity. Certainly for the psychiatrist it is more to the point that in such systems "Mind" no longer "goes more ghostly than a ghost." Instead, diseased mentality can be understood without loss of scope or rigor, in the scientific terms of neurophysiology. For neurology, the theory sharpens the distinction between nets necessary or merely sufficient for given activities, and so clarifies the relations of disturbed structure to disturbed function. In its own domain the difference between equivalent nets and nets equivalent in the narrow sense indicates the appropriate use and importance of temporal studies of nervous activity: and to mathematical biophysics the theory contributes a tool for rigorous symbolic treatment of known nets and an easy method of constructing hypothetical nets of required properties.

Literature

Carnap, R. 1938. The Logical Syntax of Language. New York: Harcourt, Brace and Company.

Hilbert, D., und Ackermann, W. 1927. Grundzüge der theoretischen Logik. Berlin: J. Springer.

Russell, B., and Whitehead, A. N. 1925. Principia Mathematica. Cambridge: Cambridge University Press.




3

Introduction

(1947) Walter Pitts and Warren S. McCulloch How we know universals: the perception of auditory and visual forms Bulletin of Mathematical Biophysics 9:127-147

This paper by Pitts and McCulloch came only a short time after their famous 1943 paper. It is less well known than the first, but it has many interesting ideas and is much more in the direction in which neuroscience and network research have progressed since the 1940s. It contains some rather detailed neurophysiology, and suggests a model for the organization and operation of the superior colliculus that is very modern in character.

The basic problem addressed in the paper is one that recurs constantly in psychology, neurophysiology, artificial intelligence, and neural network research, and has not yet been solved in general. As Pitts and McCulloch state it, "We seek general methods for designing nervous nets which recognize figures in such a way as to produce the same output for every input belonging to the figure" (p. 128). The most direct realization of this problem is the construction of invariant geometrical codings of images. An example Pitts and McCulloch use is recognizing that a square is a square wherever it appears in the visual field. In its generality, however, this problem is one expression of a concept formation problem that goes back to the Greeks: how, by seeing different examples of something, can I learn that all the examples are instances of the same thing?

Pitts and McCulloch refer to a raw sense image as an apparition, a wonderfully suggestive name that unfortunately was not used in the later literature. In the first part of the paper, they discuss several mathematical techniques of differing complexity for transforming an apparition into the constant representation. They propose that at an early level the nervous system is specifically constructing a transformation to carry the input to a uniform representation. In the cases they discuss in detail, they assume some linearity in the transformation. They assume that we have access somehow to transformations carrying a number of different examples into the invariant representation. Then the overall transformation is the average of the individual representations.

They then discuss some extensions of this basic idea, with excursions into detailed neuroanatomy. One of their suggestions involves having anatomically separate layers or substructures do the individual transformations, which are then brought together at another layer to form the final representation. At first glance this idea seems wasteful of neural machinery, but they suggest that it may actually be a powerful way of accomplishing the task, and they discuss then-current cortical neuroanatomy in considerable detail in light of their idea. If we rephrase these ideas as the use of multiple converging preliminary representations to form a final representation, we can have cooperating independent computations giving rise to an overall final result.

As a way of using less machinery, they suggest the possibility of a time-space trade-off, reusing the neural hardware in a cycle of temporal processing that can replace a spatial dimension.
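The group-averaging idea can be made concrete with a small sketch. The following Python fragment is a modern illustration, not code from the book: it assumes a one-dimensional circular "retina," takes the group of transformations to be the cyclic shifts, and uses two arbitrary functionals. Averaging each functional over the whole orbit of an apparition gives a representation that is identical for every shifted copy of the same figure, which is exactly the invariance Pitts and McCulloch were after.

    import numpy as np

    def cyclic_shifts(x):
        # The orbit of an apparition under the group of translations
        # on a circular one-dimensional "retina."
        return [np.roll(x, k) for k in range(len(x))]

    def group_average_invariant(x, functionals):
        # Average each functional over the whole orbit. A group element only
        # permutes the terms of each average, so every member of the orbit
        # yields exactly the same vector.
        orbit = cyclic_shifts(x)
        return np.array([np.mean([f(y) for y in orbit]) for f in functionals])

    # Two illustrative functionals (arbitrary choices, not from the paper).
    functionals = [
        lambda y: float(y[0]),                    # activity at a fixed reference point
        lambda y: float(np.dot(y[:-1], y[1:])),   # correlation between neighbouring points
    ]

    pattern = np.array([0, 1, 1, 0, 0, 0, 1, 0], dtype=float)
    shifted = np.roll(pattern, 3)                 # the "same figure" seen elsewhere

    print(group_average_invariant(pattern, functionals))
    print(group_average_invariant(shifted, functionals))   # identical output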


This time-space trade-off was more attractive then than now, because in 1947 there was some interest in the idea that cortical EEG (electroencephalogram) rhythms were reflections of a scanning or sweep process, like generating a television image with a beam of electrons that is swept rapidly across the screen. This clever idea has not held up.

The notion that the nervous system constructs average transformations as a way of generalizing from multiple examples has also been used in later work on concept formation. (See Knapp and Anderson, paper 36, for an example in which this technique is used explicitly.)

One of the most modern aspects of this paper is the discussion of spatial computation in the superior colliculus. As Pitts and McCulloch discuss in the paper, Julia Apter had shown a precise, though interestingly distorted, map on the collicular surface by recording the point of maximum evoked electrical potential when a spot of light was presented in the visual field. She also showed that a motor map was in register with the spatial map, as could be demonstrated by showing that an animal would move its eyes to a particular location if that point on the sensory map were excited. Later neurophysiology and neuroanatomy have demonstrated the existence of multiple maps of this kind throughout the cortex; for example, it appears that one of the most common information representation strategies in the brain is to form a spatial map representing a quantity of interest (see Knudsen, du Lac, and Esterly, 1987). Often these maps require a fair amount of preprocessing to be formed. For example, there is a two-dimensional map of auditory space on the colliculus of the owl whose formation requires rather complex convergence of information from interaural time delays and intensities at different frequencies. Later studies have amply confirmed the presence of the collicular maps and the registration between sensory and motor representations, and have suggested more realistic models for the collicular control of the extraocular muscles for direction of gaze. However, the basic models are often strikingly similar in philosophy to that proposed by Pitts and McCulloch (see McIlwain, 1976).

The difference in computational strategy between the 1943 paper and the 1947 paper is remarkable. The strong implication of the 1943 paper is that the brain is computing logic functions. The later collicular model uses very different techniques. Here, the collicular computer takes a spatial map, computes the "weighted center of gravity" of the activity on the map, and moves the eyes to that location. This is not a logic function as customarily defined, but something that looks a great deal like spatially distributed analog computation, one that could easily be realized with realistic analog neurons. The power of logic has been replaced with the power of spatial representation and analog computation. By the standards of modern neuroscience, this was the right way to go.

A nice touch in this paper is the use of the original captions for some of the figures, in the original language. With the dropping of the language requirements in most American graduate schools, there will not be too many young American engineers or scientists able to read them.
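The "weighted center of gravity" computation described above is simple enough to state directly. The following Python sketch is a modern illustration, not taken from the book or the paper; it assumes the collicular activity arrives as a small two-dimensional array of firing levels and returns the activity-weighted mean position, the point toward which such a model would direct gaze.

    import numpy as np

    def weighted_center_of_gravity(activity):
        # Each location is weighted by its activity level; the result is the
        # centroid of the excitation on the map.
        activity = np.asarray(activity, dtype=float)
        total = activity.sum()
        if total == 0.0:
            raise ValueError("no activity on the map")
        rows, cols = np.indices(activity.shape)
        return (np.sum(rows * activity) / total,
                np.sum(cols * activity) / total)

    # A small hypothetical activity map with a blob of excitation centred near (1, 3).
    collicular_map = [
        [0, 0, 0, 1, 0],
        [0, 0, 2, 4, 2],
        [0, 0, 0, 1, 0],
    ]
    print(weighted_center_of_gravity(collicular_map))   # (1.0, 3.0)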


References

E. I. Knudsen, S. du Lac, and S. D. Esterly (1987), "Computational maps in the brain," Annual Review of Neuroscience, W. M. Cowan, E. M. Shooter, C. F. Stevens, and R. F. Thompson (Eds.), 10:41-65.

J. T. McIlwain (1976), "Large receptive fields and spatial transformations in the visual system," International Review of Physiology, Neurophysiology II, R. Porter (Ed.), 10:223-248.

(1947) Walter Pitts and Warren S. McCulloch How we know universals: the perception of auditory and visual forms Bulletin of Mathematical Biophysics 9:127-147

Two neural mechanisms are described which exhibit recognition of forms. Both are independent of small perturbations at synapses of excitation, threshold, and synchrony, and are referred to particular appropriate regions of the nervous system, thus suggesting experimental verification. The first mechanism averages an apparition over a group, and in the treatment of this mechanism it is suggested that scansion plays a significant part. The second mechanism reduces an apparition to a standard selected from among its many legitimate presentations. The former mechanism is exemplified by the recognition of chords regardless of pitch and shapes regardless of size. The latter is exemplified here only in the reflexive mechanism translating apparitions to the fovea. Both are extensions to contemporaneous functions of the knowing of universals heretofore treated by the authors only with respect to sequence in time.

To demonstrate existential consequences of known characters of neurons, any theoretically conceivable net embodying the possibility will serve. It is equally legitimate to have every net accompanied by anatomical directions as to where to record the action of its supposed components, for experiment will serve to eliminate those which do not fit the facts. But it is wise to construct even these nets so that their principal function is little perturbed by small perturbations in excitation, threshold, or detail of connection within the same neighborhood. Genes can only predetermine statistical order, and original chaos must reign over nets that learn, for learning builds new order according to a law of use.

Numerous nets, embodied in special nervous structures, serve to classify information according to useful common characters. In vision they detect the equivalence of apparitions related by similarity and congruence, like those of a single physical thing seen from various places. In audition, they recognize timbre and chord, regardless of pitch. The equivalent apparitions in all cases share a common figure and define a group of transformations that take the equivalents into one another but preserve the figure invariant. So, for example, the group of translations removes a square appearing at one place to other places; but the figure

of a square it leaves invariant. These figures are the geometric objects of Cartan and Weyl, the Gestalten of Wertheimer and Köhler. We seek general methods for designing nervous nets which recognize figures in such a way as to produce the same output for every input belonging to the figure. We endeavor particularly to find those which fit the histology and physiology of the actual structure.

The epicritical modalities map the continuous variables of sense into the neurons of a fine cortical mosaic that strikingly imitates a continuous manifold. The visual half-field is projected continuously to the area striata, and tones are projected by pitch along Heschl's gyrus. We can describe such a manifold, say M, by a set of coordinates (x1, x2, ⋯, xn) constituting the point-vector x, and denote the distributions of excitation received in M by the functions φ(x, t), having the value unity if there is a neuron at the point x which has fired within one synaptic delay prior to the time t, and otherwise the value zero. For simplicity, we shall measure time in mean synaptic delays, supposed equal, constant, and about a millisecond long. Indications of time will often not be given.

Let G be the group of transformations which carry the functions φ(x, t) describing apparitions into their equivalents of the same figure. The group G may always be taken finite, as is seen from the atomicity of the manifold; let it have N members. We shall distinguish four problems of ascending complexity:

1) The transformation T of G can be generated by transformations t of the underlying manifold M, so that Tφ(x) =