The Hard Problem of Content is Neither


Review of Philosophy and Psychology https://doi.org/10.1007/s13164-023-00714-9

William Max Ramsey1

Accepted: 10 November 2023
© The Author(s), under exclusive licence to Springer Nature B.V. 2023

Abstract

For the past 40 years, philosophers have generally assumed that a key to understanding mental representation is to develop a naturalistic theory of representational content. This has led to an outlook where the importance of content has been heavily inflated, while the significance of the representational vehicles has been somewhat downplayed. However, the success of this enterprise has been thwarted by a number of mysterious and allegedly non-naturalizable, irreducible dimensions of representational content. The challenge of addressing these difficulties has come to be known as the "hard problem of content" (Hutto and Myin 2012), and many think it makes an account of representation in the brain impossible. In this essay, I argue that much of this is misguided and based upon the wrong set of priorities. If we focus on the functionality of representational vehicles (as recommended by teleosemanticists) and remind ourselves of the quirks associated with many functional entities, we can see that the allegedly mysterious and intractable aspects of content are really just mundane features associated with many everyday functional kinds. We can also see they have little to do with content and more to do with representation function. Moreover, we can begin to see that our explanatory priorities are backwards: instead of expecting a theory of content to be the key to understanding how a brain state can function as a representation, we should instead expect a theory of neural representation function to serve as the key to understanding how content occurs naturally.

Keywords: Content · Intentionality · Teleosemantics · Representation vehicle · Normativity · Camouflage · Emulation

* William Max Ramsey
[email protected]

1 Department of Philosophy, University of Nevada, Las Vegas, 4505 Maryland Pkwy, Box 455028, Las Vegas, NV 89154‑5028, USA


1 Introduction

Sometimes when doing philosophy, it makes sense to step back and reconsider the investigative path one is on. It is, after all, possible to start down a path that initially looks promising, but then, after trudging along for a while, recognize that the path doesn't fully make sense. In this paper I want to do this sort of "stepping back" with regard to popular theorizing about the nature of cognitive representation in the brain. In particular, I want to rethink the path philosophers have gone down for the past 40 years of focusing upon the content of mental representations. Content is the popular term philosophers use to refer to the unique relation between a representation and that which is represented.1 It is, as folks are fond of saying, a representation's "aboutness" (Dennett and Haugeland 1987) or its "intentionality" or "semantics"; it is what the representation is, in some sense, "saying". Moreover, explaining how mental representations can have content is widely regarded as both the key to understanding representation and the primary stumbling block to comprehending how low-level representation2 can occur in the brain. Just as phenomenal "what-it's-likeness" is the central mystery of conscious experiences, intentional content is viewed as the central mystery of mental representations. Indeed, alongside the well-known "hard problem" of consciousness, based upon the weird nature of subjective experience, people now talk about the "hard problem of content", based upon the allegedly unexplainable, un-naturalizable nature of low-level representational content (Hutto and Myin 2012).

In this paper I want to explore a suspicion that this focus and deep puzzlement over representational content is largely misguided and counter-productive. Once we remind ourselves that representations are, at their core, functional entities, then many of the problems associated with content start to look far less serious and unique. Moreover, we can begin to see that the challenges are less about content, as such, and really more about the functional role of representing. In other words, I'll argue that the so-called hard problem of content is less hard than assumed, and less about content than assumed. Furthermore, I'll suggest that content is not something that makes a state or structure into a functioning representation.

1 There is some need for clarification in the literature regarding how we should understand the term 'content' and what a theory of content is about. The term 'content' is commonly used to refer to the intentional object of a representation – the thing (or property, abstract entity, etc.) represented. With this usage, the content of a thought about Paris is Paris (or perhaps some proposition associated with Paris). However, this implies that a theory of content is thereby a theory about the things represented, like Paris (or propositions). But theories of content are not about these things. Theories of content are really theories about how mental representations can come to have content – theories about what the having of content amounts to. They are really theories of the intentionality relation between representations and the represented.

2 Just as cognitive systems possess different levels of sophistication (with the basic minds of animals having fewer capacities than those of intelligent humans), so too, it is not unreasonable to assume that cognitive mechanisms like representations also come with different capabilities. Of course, for something to qualify as a functioning cognitive representation, it will need to do a great deal of what we ordinarily associate with mental representations. But as we'll see below, some of the features we associate with more advanced, personal-level, conscious reflection should not be expected to apply equally to all subpersonal, low-level representational states and structures.


structure’s functioning as a representation is what makes one of the state’s natural relations into one that qualifies as contentful. By showing how our focus on content and neglect of representation function is misguided, I hope to also reveal a better orientation for moving forward and for understanding representation in the brain. In effect, I want to solve the hard problem of content by dissolving it – by showing that content is not nearly as problematical as it has been made out to be. To show this, in the next section I’ll briefly discuss some possible sources of our overemphasis upon representational content and neglect of representational entities. Then, in Section III, I’ll look at some of the ways in which representational content has traditionally been regarded as deeply and incurably inexplicable. In Section IV, I’ll present my reasons for altering our perspective, and for diminishing our focus on content and increasing our focus on the functionality of representation. By building on ideas put forth by teleosemanticists, and by comparing the intentionality relation to more mundane functional relations – particular those associated with camouflage – I hope to convince you that content is less problematic than often assumed, and that the truly hard problems are less about content. In Section V, I will offer some further qualifications (and confessions) about my outlook. In the final section VI, I will explore what the shift in focus I am proposing might look like moving forward.

2 Representational Entity Deflationism and Content Inflationism

Let's begin by considering what I believe is a common though seldom appreciated dimension of our approach to understanding representation in cognitive systems. The dimension involves two sides of the same coin: a tendency to discount or deflate the explanatory relevance of representational entities as such while, at the same time, exaggerating or over-inflating the explanatory importance of the content relation. In a great deal of philosophical work, especially on sub-personal, low-level representations, there is a tendency to be somewhat cavalier or handwavy about the specific nature of the representational entities, while at the same time treating the nature of the representational relation to whatever is being represented as far more critical. For example, various accounts develop the idea that representational structures in the brain stand in some sort of structural similarity relation to their target, thereby operating as maps and models (e.g., Cummins 1989; Ramsey 2007; Gladziejewski and Milkowski 2017). And yet these philosophical accounts focus almost exclusively upon issues associated with the possession of content, and rarely attempt to answer questions about just how, exactly, neurological states and structures actually are structurally similar to some environmental target. Indeed, insofar as the activity of the representational entity is emphasized, it is often emphasized in the service of developing a theory of content – as simply a means towards achieving the primary goal of developing such a theory.

A tacit assumption in the literature appears to be that what makes a neural or computational state/structure a representational state/structure is this special intentional relation. Thus, explaining a representation's content is seen as the key to explaining the status of any thing or state as a representation.


representations” should be “the issue in mental representation for the foreseeable future” (1990, p. 25). Following up on this idea, Stich and Warfield note that, “... the project of providing a semantic theory for mental representation (or a “theory of content” as it is often called) has been center stage in the philosophy of mind... Many writers now view it as the central problem in the philosophy of mind” (1994, p. 4). Much more recently, Nicholas Shea has discussed this as the “content question”: In short, what determines the content of a mental representation? That is the ‘content question’. Surprisingly, there is no agreed answer. . .The content question is widely recognized as one of the deepest and most significant problems in the philosophy of mind, a central question about the mind’s place in nature” (Shea 2018, pp. 6, 8.). Thus, explaining content has been for some time the dominant goal in our efforts to make sense of cognitive representation. But how did content come to take on such a preeminent role, more so than explaining representation functionality? To answer this question, it will help to first consider the odd philosophical nomenclature associated with representation. Why do we call representational entities ‘vehicles’ and their relation to the things they represent as having ‘content’? This implies that the representational entities are serving as something like mere containers that, as such, have the job of holding or carrying their more significant cargo. How did this strange terminology come about, this language suggesting that the intentional relation is somehow encased in the representational entity? Part of the answer might be found by looking back at how mental states have been regarded historically. The relevant perspective is nicely captured in Brentano’s now famous quote about the medieval view of mental states including mental representation: "Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional…Every mental phenomenon includes something as object within itself, although they do not all do so in the same way… We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves." (Brentano 1924, pp. 88f.) In other words, for Brentano (and earlier, for Scholastics like Scotus), mental states like beliefs or thoughts are seen as containing “within themselves” the critical intentional object. A thought about the Eifel Tower is something that, somehow, non-spatially, contains within itself some manifestation of the Eifel Tower. While the details of this picture needn’t concern us, it is worth reflecting on this conception of mental representation and what it implies. It is a conception that treats the representational vehicles as vessels for conveying freight. It regards representational contents as important and mysterious somethings, whereas the representational entities themselves, qua-packaging, are regarded as having diminished significance. Today, of course, no one actually thinks that intentional content is somehow contained within the representational vehicle. Most of us recognize that possessing content involves some sort of relation between


Most of us recognize that possessing content involves some sort of relation between representation and represented. But this language stemming from our earlier conception of mental representations, as packaging holding vital goods, has arguably contributed to a lingering tendency to devalue representational vehicles and over-emphasize content. At the very least, it invites a flawed interpretation of the functional role of representational entities. It implies their job is to serve as a carrier for something else that is truly critical – to function as a mere holder, not as something that is the locus of what really matters.3

Representational entity depreciation along with content hyping has been further encouraged by work on the nature of public language, and by various efforts to understand mental representations as language-like structures. With public language words, there is nothing intrinsically special or interesting about their nature that gives rise to their status as representations. Their representational status is entirely a function of linguistic convention and our ability to assign meaning to randomly chosen markings or noises. Philosophers construct theories of reference to explain how these markings and noises come to have the semantic properties they do, and these accounts, like the causal theory of reference, properly treat the inherent features of linguistic symbols as irrelevant to semantics. Within the context of natural language, it is a symbol's content that makes it into a representational entity – the reference/content relation is the representation-maker. Consequently, if language serves as your inspiration for understanding the nature of mental representation, as it clearly has for many (see, for example, Fodor 1975; Field 1978), then you are likely to think that having content is the thing that converts a neurological state into a representation. Even for those who no longer regard representations as language-like structures, this might still explain a lingering tendency to regard the construction of a workable theory of content as the key to constructing a workable theory of mental representation.4

This neglect of representational entities and prioritizing of content has also been encouraged by the classical computational theory of the mind. On the classical computational account, data structures are manipulated entirely by virtue of their non-semantic, syntactic properties. From the standpoint of the mechanical computational system, computational symbols are purely syntactic tokens. This observation has motivated some philosophers like Searle (1980) and Stich (1983) to outright deny that these computational data structures should be regarded as representations at all.

3 For more on the 'vehicle-content' terminology and how it came about, see the interesting discussions here: https://philosophyofbrains.com/2010/03/16/first-mention-of-contentvehicle-distinction.aspx; also here: https://philpapers.org/bbs/thread.pl?tId=190. I'm grateful to an anonymous reviewer for pointing out this discussion.

4 This language-oriented perspective does provide one noteworthy exception to vehicular neglect, at least concerning complex representations of propositions. According to the Language of Thought hypothesis, the vehicles representing full-blown propositions must have a combinatorial structure, such that the content of the molecular representation stems from the content of its atomic parts and their syntactic "arrangement" (Fodor 1975). Still, even on this view the nature of the atomic representations themselves is largely ignored.


But for computationalists like Fodor, unwilling to abandon representationalism,5 it became clear that computational symbols could acquire genuine representational status only if something was added; namely, an independent psycho-semantics (Fodor 1980, 1987). Thus, a theory of content came to be seen as something that was not exactly part of the computational theory of mind. Instead, it was regarded as a separate, independent add-on, needed to explain how computational symbols could qualify as genuine representations. Here again, content is seen as the thing that converts non-intentional and purely syntactic entities into real representations. Computational symbols, as such, are purely non-representational structures and states, and any theory about how they serve as representations must be a theory about how they acquire content.

These are just a few of the possible ways in which representational content inflationism has taken root in our theorizing about mental representation. Whatever the reasons, representational vehicles have come to be regarded as non-intentional structures that require an additional and independent intentional link to the world to achieve genuine representational status. With this outlook, a theory of content is thereby seen as a theory about what it is that makes those entities or states into representations.

But what if the mental representations that exist in brains don't work in the manner suggested by this picture? What if, in contrast to the situation with linguistic and computational symbols, things work in exactly the opposite manner? Suppose that a neural state's functioning as a representation is what makes one of its relations to the world an intentional relation. With that perspective, possessing content would not be the representation-maker; instead, representational function would be the content-maker. If that were the case, then our theorizing about the nature of mental representation will have been somewhat misguided. Instead of developing a theory of content as an avenue to understanding mental representation, we should have been trying to understand representation function as the proper pathway to understanding naturalistic content. In the remainder of this paper, I'll suggest reasons for thinking that this alternative perspective is the correct one.

5 It should be noted that in a well-known paper, these considerations did encourage Fodor to promote a sort of "methodological solipsism" in our investigation of computational cognition (Fodor 1980).

3 The Problematic Nature of Content

The sort of representation content inflationism/vehicle deflationism discussed in the last section has encouraged two popular perspectives among philosophers working on mental representation. The first is simply an extension of the view just discussed: that the possession of content is what makes something into a representation. This has led to the view that a successful naturalistic theory of content is the key to understanding representation in cognitive systems. By "naturalistic", it is understood that the explanation needs to invoke only the sort of entities, relations and properties that are found in a naturalistic conception of reality.

6 This project became popular in the 1980s, but there were several precursor accounts closely related to the project. These included Sellars' (1957) version of intentional role semantics and Stampe's (1977) causal theory of meaning.


So, starting in earnest in the 1980s6 and continuing today, many philosophers have engaged in the "naturalizing content project", an enterprise involving theories designed to reduce representational content to some other combination of natural relations, properties and occurrences.7 For example, a popular approach initially developed by people like Stampe (1977), Dretske (1981, 1988), Millikan (1984), and Fodor (1987) attempted to explain the content of cognitive representations as arising from some sort of causal, covariation or nomic dependency relations, such that a neural state A stands in an intentional relation to B if A, in some way, reliably responds to B. A different approach, suggested by writers like Cummins (1989) and Swoyer (1991), has maintained that content can be explained through some form of structural similarity between representational systems and their representational targets. With this approach, a neural state A stands in a content relation to B if A either is, or participates in, something that is suitably structurally similar to B.

The second popular perspective is that providing such a successful, naturalistic theory of content is extremely difficult to do, if not impossible, and thus a central obstacle to understanding how representation happens in the brain (see, for example, van Gelder 1995; Ramsey 2007; Chemero 2009; Hutto and Myin 2012). The difficulties stem from various concerns, but the perspective is mostly driven by the belief that real representational content involves a number of deeply problematic features that thwart any naturalization project. For example, in introducing the hard problem of content, Hutto and Myin insist that real content is something more than the mere informational or co-variation relations that many naturalists try to use as a reductive base. As they put it, "…positing informational content is incompatible with explanatory naturalism. The root trouble is that Covariance doesn't Constitute Content" (Hutto and Myin 2012, xv). Today, there is growing opposition to the representational theory of mind, going all the way to various forms of representational eliminativism. In large measure, these are motivated by the allegedly intractable dimensions of content.

What are these problematic aspects of representational content? While a complete list of all the ways in which content is alleged to be problematic is not possible here, what follows are five commonly discussed features claimed to impede a successful naturalistic theory:

1. Asymmetric directionality: Representational content only works in one direction. If A represents B, that does not entail that B represents A (and in fact, it almost never does). And yet, as Goodman (1968) and others have pointed out, many of the natural relations thought to ground representational content are symmetric (bidirectional). If A co-varies with B, then B co-varies with A; if A is structurally similar to B, then B is structurally similar to A.

7 Here is how Fodor puts it: "Well, what would it be like to have a serious theory of representation? Here too, there is a consensus to work from. The worry about representation is above all that the semantic (and/or the intentional) will prove permanently recalcitrant to integration in the natural order; for example, that the semantic/intentional properties of things will fail to supervene upon their physical properties. What is required to relieve the worry is therefore, at a minimum, the framing of naturalistic conditions for representation" (1990, p. 32). Fodor goes on to suggest that this project can largely ignore questions about the sort of things that serve as representational vehicles.


Consequently, the one-directional pointing of representational content is regarded as a stumbling block to efforts at naturalization.

2. Normativity and the capacity for error: Representational content is normative, such that it is possible for a representation to depict the world as being a way that it isn't. A theory of content must allow for misrepresentation and falsehood. However, it is unclear how this can be captured by natural relations. If we try to reduce content to, say, causal relations, given that there is no such thing as miscausation, it is unclear how falsehood is possible. A famous manifestation of this challenge is Fodor's "disjunction problem" (Fodor 1987). We want to say a fly representation can misrepresent a BB as a fly when it is causally activated by a BB. But on a crude causal account of content, it seems we can only say that the representation has the representational content of "fly or BB", and thus is not misrepresenting the BB as a fly. Consequently, any naturalization project has the challenge of explaining how normativity and false representation can occur, and it is hard to see how this is possible.

3. Non-existent relata: Representational content can sometimes involve a relation between the mental representation and things that have never existed. We can have thoughts about unicorns, Sherlock Holmes and demonic possessions. But it is not at all clear how this is possible with natural relations. In the case of natural relations, if one of the relata does not exist, then there is simply no actual natural relation that is instantiated (Chisholm 1957).

4. Content determinacy and intensionality: Mental representations can have very specific content because of the agent's internal conception. I can think specifically of a carrot, and not simply of an orange vegetable that grows in the ground, because it is the former that I consciously conceptualize. But it is hard to see how this sort of content precision can ever be attained with purely natural processes and relations. If neurons in the frog's brain are activated by flies buzzing about, and are thereby considered to be representations, do they have the representational content of "flies" or "moving black dot" or "food" or something else? Naturalistic accounts of representational content often suggest that exactly what is represented can be indeterminate (Neander 2017).

A somewhat related issue concerns the intensionality of mental representations and the potential for what appears to be content variance even for representations that have the same truth conditions. This concern is often highlighted by what are commonly known as Frege puzzles. John's belief that the Morning Star is visible has the same truth conditions as his belief that the Evening Star is visible, and yet these beliefs play very different roles in John's cognitive economy (since he does not know that both are actually Venus). Once again, it is far from clear how a naturalistic theory of content can account for this puzzling feature of mental representation.

5. Non-supervenience with causal/explanatory relevance: The content of our mental representations is widely regarded as critical for explaining why we do things.


the fridge” and not the proposition “There are unpleasant Brussels sprouts in the fridge”. We also generally recognize that our behavior is caused by properties that supervene on the intrinsic physical nature of our brain and nervous system. And yet the content of our thoughts cannot be captured by the physical nature of the brain alone. Putnam has made this clear with his now-famous Twin Earth cases: On Earth you have a particular brain state representing “that’s water”, whereas on Twin Earth your twin’s physiological identical brain state representing “that’s water” is actually about something different (XYZ, not H20) (Putnam 1975). Thus, the content of mental representations fails to supervene on the intrinsic neurological features of the brain, and thus cannot be reduced to what is happening in the brain. But now given that only the intrinsic properties of physical states contribute to the state’s causal powers, and because content can vary while the intrinsic properties of representational neural structures remain the same, it seems content is irrelevant to any causal role the neurological entities perform. But if content is causally inert to what happens in the brain, then it seems content would have to be explanatorily irrelevant to cognitive processes and behavior. It is thereby unclear how, on a naturalistic construal, content is explanatorily relevant in the way that it (purportedly) should be (Dretske, 1988).

4 Deflating Content8 and Inflating Vehicles

In this section I am going to make a case for reorienting our approach to theorizing about the nature of cognitive representation. Instead of treating a theory of content as the avenue for understanding how brain states could function as representations, we should instead regard a theory of representation function as the avenue for making sense of content. To help promote this reorientation, I want to illustrate how the allegedly idiosyncratic and challenging features of content are, in fact, not so idiosyncratic or challenging after all. Once we remind ourselves that representations are a functional sort of thing, with a particular job to perform, we can begin to see that these purportedly problematic features of intentionality are actually somewhat mundane features of many functional kinds and should not be treated as insurmountable hurdles that a theory of representation cannot overcome.

8 It should be noted that the sort of content deflationism I am recommending is very different from another sort. A much more conventional sort of content deflationism concerns the objective reality of representational content, where representational content is treated as something like a mere interpretive stance that we adopt, or an explanatory heuristic. For example, Egan has argued that in many computational theories of cognitive processes, such as Marr's account of early visual processing, neural representations have a sort of cognitive content that is merely, in her words, an "intentional gloss" – a content interpretation that is driven by our own unique explanatory purposes and agendas (Egan 2014; see also Dennett 1978). The sort of content deflationism I am endorsing is not this sort. The deflating I am recommending concerns the level of explanatory energy we should devote to content. Rather than suggesting that content is in some way unreal or merely a gloss, I'm suggesting that however objectively real it is, our efforts to understand representation would be better served if we turned away from content and instead focused upon representational entity function.


Much of what I have to say has been inspired by points made by teleosemanticists like Millikan (1984), Neander (2017) and Shea (2018), and somewhat similar themes have been suggested by Anderson and Rosenberg (2008), Lee (2021), Milkowski (2015) and Piccinini (2022).

If we are to emphasize the functional nature of mental representations, we need to consider what that involves. Functional kinds are defined by their functional roles, and functional roles are defined by a combination of relations and relational properties (rather than intrinsic compositional properties) that constitute functionally defined categories. An understanding of a functional role typically carries with it an understanding of the various role-defining relations and how those come to be instantiated. Many functional kinds are artefacts, like door-stops or hammers. But, of course, there are also plenty that are fully natural, biological functional entities, like hearts, lungs and teeth. Most functional roles are causal in nature, but there are arguably also functional kinds that are defined by other properties, such as computational, linguistic or logical relations. Because there has been a good deal of important work on the different ways in which functions can arise, we now have a fairly robust understanding of teleology in nature (Wright 1976; Allen et al. 1998; Neander 2017; Allen and Neal 2020).

For our purposes, it is important to also recognize that there can be curiosities and quirks associated with many of the function-defining relations for functional kinds. The relations can invite questions that are not always easy to answer or that encourage further analysis. However, by and large, no one thinks that these puzzles serve as a fundamental barrier to a scientifically respectable account of these functional entities. For example, nearly everyone acknowledges that the heart functions to pump blood so that oxygen can be carried to cells and carbon dioxide and wastes can be removed. This is relatively uncontroversial. Moreover, this understanding is not diminished by further dimensions of the functionality associated with hearts. Our naturalistic understanding of the heart is not challenged by the observation that the heart's function involves normativity, such that the heart can occasionally pump something it should not, like blood filled with nitrogen bubbles. Nor is it seriously undermined by the counterfactual realization that, say, there could be a Twin Earth where the oxygen-rich liquid pumped by a (mostly) biologically identical twin's heart is not, technically, human blood (it is some other similar substance, say XYB). It is not undermined by a little indeterminacy about how to describe the thing the heart has the function of pumping (is it actual blood, or simply a more generally defined but biologically workable oxygen-carrying fluid?). Moreover, no one thinks that the best way to understand circulation would be to treat the functional nature of the heart itself as secondary, and instead to focus primarily upon the special relation between the heart and blood. Everyone recognizes that understanding the heart's role requires focusing on the mechanistic, functional operation of the heart, the comprehension of which can then be used to better grasp things like the relation that exists between hearts and the blood that is circulated.

Consider another type of biological functional mechanism that many organisms use; namely, various forms of camouflage. Camouflage is a fairly well-understood phenomenon in ethology. It functions to conceal both prey from predators and predators from prey. While most of us think of camouflage in terms of visual deception, it comes in other forms as well, such as acoustic and olfactory camouflage.


More generally, we can think of successful camouflage as consisting of something like a four-place relation between (1) the organism that benefits from it, (2) the actual masking mechanism employed, (3) the environmental condition that the masking mechanism is exploiting, and (4) the thing the biological entity is hiding from, such as a predator or prey. For instance, (1) the Peacock Flounder is an organism that benefits from camouflage, (2) its skin's ability to visually mimic the sea floor is a mechanism whereby camouflage is achieved, (3) the sea floor is the environmental condition exploited for hiding, and (4) a predator like a nearby shark is what the flounder is hiding from.
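Put schematically (the regimentation below is merely illustrative, and the predicate name is stipulated here rather than established notation), successful camouflage instantiates a relation of the form

$$\mathrm{Camouflage}(o,\ m,\ e,\ t)$$

which the flounder case instantiates as $\mathrm{Camouflage}(\text{flounder},\ \text{skin pigmentation},\ \text{seafloor},\ \text{shark})$.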
Let's focus on the relation between (2) and (3), the camouflaging mechanism and the relevant environmental condition. For visual camouflage, we can think of the relation that matters here as one of imitation or mimicry. In the case of the flounder, its skin's pigmentation adopts a color pattern that replicates, visually, the color and patterning of the sea floor, allowing it to blend in with that environment. The ability of the camouflaging mechanism to emulate something else is crucial to its proper functioning. Emulation is the special relation that comes about when the camouflaging mechanism functions the way it should.9

For biologists and ethologists who study animals that use camouflage, there is nothing deeply mysterious or philosophically problematic about any of this. Some organisms adapt by evolving the capacity to hide; they do this by, in some way, emulating their surrounding environment. We have a pretty good understanding of how all this works (Forbes 2009). Moreover, it is an understanding that focuses upon the functional role resulting from the camouflaging mechanism doing something. Whether it is a color pattern, or a specific acoustic signal, or some expression of odors, our understanding of camouflage is almost entirely based upon making sense of something doing a particular sort of job. While the relation between the camouflaging system and the relevant part of the environment is obviously important, it is not the primary focus of investigation. In fact, this relation, as such, is so explanatorily peripheral that I had to introduce the term 'emulation' just to isolate and designate this relation for our purposes here.10

Now, let's consider how theorizing about camouflage would have gone if philosophers had been involved and had pursued an investigative strategy comparable to the one we have pursued for cognitive representation. With this approach there likely would have been a heavy emphasis upon the emulation relation itself, and perhaps less focus upon the functionality of the camouflaging mechanism, as such. There would have been discussions about constructing a naturalistic theory of emulation in order to understand camouflage, and then perhaps various expressions of concern that emulation is deeply problematical and hard to properly naturalize.

9 It should be noted that Grush (2004) also invokes a (somewhat different) notion of emulation as the basis for his account of representation. Since my aim is to highlight parallels between emulation with regard to camouflage and representational content, it is perhaps unsurprising that such a relation is associated with certain accounts of the latter.

10 In ethology, biologists do regularly refer to mimicry. However, mimicry (as I understand it) is somewhat different, as it involves cases where a relatively harmless organism imitates a more poisonous or malevolent organism.


This might have involved the claim that mere similarity between specific environmental properties and some camouflaging mechanism would be insufficient for real emulation, since real emulation has satisfaction conditions, or normative constraints, that go beyond a mere correlation of patterns or features. This might have led to discussions about the "hard problem of emulation", concern over the non-supervenience of successful emulation, and worries about whether emulation could be truly explanatorily relevant.

Consider once again the puzzling features of content, only now applied to camouflage and emulation:

1. Asymmetric directionality: The emulation relation works in only one direction. If A emulates B, that does not entail that B emulates A. And yet the natural relation that serves to ground emulation – namely, some form of similarity – is bidirectional. If the flounder's skin is visually similar to the seafloor, then the seafloor is also visually similar to the flounder's skin. Why isn't this a major problem for the study of camouflage?

It isn't a major problem because the asymmetric directionality has nothing to do with the emulation relation itself, and everything to do with the functionality of only one of the relata (the emulating mechanism). The flounder's skin emulates the seafloor while the seafloor does not emulate the flounder's skin because, of course, the flounder's skin mechanism has the function of appearing similar to the seafloor while the seafloor has no such function. This is a non-issue for camouflage theorists because they properly focus upon the functional role of the camouflaging mechanism.

But the exact same point applies to representation. If a representation exploits a co-variation or similarity relation, the bi-directionality of these relations does not lead to bi-directional representation, because one of the relata (the representational entity) has the function of representing while the other relatum (the representational target) does not. This is clear once we focus our attention on the functionality of the representational vehicle. In fact, the tendency I am trying to expose – a tendency to dwell on the content relation and neglect vehicle functionality – helps explain why this was ever thought to be an issue. The one-directionality of representational content can seem mysterious only if we ignore the functional role of the representational vehicle (see also Kiefer and Hohwy 2018).

2. Normativity and the capacity for error: Camouflage and emulation can be normative, such that it is possible for a camouflaging system to malfunction and create an error in the emulation relation. Suppose that a flounder's skin camouflaging mechanism malfunctions such that it displays prominent black stripes where no such stripes appear on the seafloor. This would be a breakdown in the mechanism that could be described as a case of "mis-emulation" – an instance where the black stripes wrongly imitate a background environment that is not present. Of course, someone could claim that the flounder's skin is properly emulating some other area of the sea floor – an area where there really are such black stripes. This person might claim there is a "disjunction problem of emulation": the black-striped flounder doesn't mis-emulate a stripe-less seafloor; instead it properly emulates a stripe-less or striped seafloor.

Ethologists do not lose sleep over these issues because they correctly focus on the functionality of the camouflaging mechanism.


The proper outlook is fairly clear: the camouflaging mechanism has the function of emulating the current, proximal seafloor environment (and not some other seafloor location) because that is what allows the animal to hide. When it fails to do so, that is a case of error. If someone were to insist that the striped pattern is actually successfully emulating some striped seafloor somewhere else in the world, this would require ignoring the functionality of the camouflaging mechanism in this application. The normativity does not arise from the emulation relation itself, but instead arises from the proper functional role of the camouflaging mechanism and what it should be emulating in this context.

Here again, the same point applies to representation. As teleosemanticists have convincingly shown, normativity and misrepresentation are possible because representational entities have the proper function of representing specific things in specific contexts, and misrepresentation occurs when there is some form of malfunction. Perhaps a representational entity is triggered by an inappropriate stimulus, or perhaps a representational model depicts the world as being a way that it isn't, given its functional role. When we focus upon the proper functionality of the representational entity or state, we can see there is no deep puzzle or mystery in explaining the normative nature of representation or how misrepresentation can occur. As I will emphasize in greater detail below, teleology should not be treated as an ingredient in a theory of content; it should be treated instead as a core feature of representation that explains many of its aspects. As Millikan has emphasized, "[w]hat teleological theories have in common is not any view about the nature of representational content; that is, about what makes a mental representation represent something. What they have in common is only a view about how falseness in representations is possible" (2009, p. 394).

3. Non-existent relata: At least in theory, we can imagine scenarios in which a camouflaging mechanism has the function of emulating something that does not exist. Lel Jones (in conversation) has provided a nice suggestion of how this could happen. Suppose sharks in a given region regularly eat a substance that causes them to hallucinate and see purple floating blobs that are not actually there (and which they learn to ignore). In such a situation, it is at least conceivable that a local fish would acquire purple coloring to emulate such non-existent purple blobs so that the sharks pay it no mind. Prima facie, this would be a case of a camouflaging mechanism emulating something non-existent.11

The more fundamental point is that it is through the functionality of a given mechanism that physical things can have a role associated with non-existent things. At present, one can purchase Ouija boards with the function of promoting communication with spirits, or magic crystals that are supposed to ward off negative energy. Of course, as a reviewer has noted, these are artefacts, the functional role of which is assigned by human designers. Nonetheless, they still demonstrate the basic point that relations to non-existent fictional entities arise from the functional role of the mechanism.

11 It is easy to imagine auditory or olfactory versions of something similar. For example, if a certain predator experiences a hallucinatory tone (similar to ringing in the ears), we can imagine potential prey signaling danger by using a similar frequency, hiding the signal by emulating a non-existent environmental sound.


Insofar as there are natural functions, and insofar as there are relations to fictional entities that come about naturally, those relations come about via the functionality of the natural mechanism. Functional roles can extend beyond reality, and thus functionally defined entities can acquire connections to non-existent entities.

This point applies equally to cognitive representations. In truth, it is far from clear that low-level, sub-personal representations really ever do represent non-existent entities. Our ability to think about unicorns probably has more to do with the mysteries of consciousness and our imaginative faculties – a kind of super-sophisticated conscious representational capacity. But even if low-grade, sub-personal representations could represent non-existent entities, this would have less to do with the special nature of intentionality and more to do with the functionality of representation vehicles. Included among the artefacts that have relations to fictional entities are representational devices, such as "ghost detectors", which have the function of representing the presence of ghosts. If there are low-level representations that represent fictional entities, it will be because of the functionality of the representational entity, and not because of a mysterious dimension of intentionality.

4. Emulation indeterminacy and intensionality: With the emulation relation, there can be indeterminacy in how we describe the target of emulation. Is the flounder's skin emulating the specific, current patch of seafloor it is lying upon, or should we say it is successfully emulating a broader local area of seafloor that has that particular color patterning? How large is the emulated environment? Five feet? Ten feet? The answer is not immediately obvious; indeed, it is far from clear that there actually is a precise answer.

No one is terribly bothered by this. Not all questions in science have highly specific answers, and questions about the exact target of camouflage emulation may be among them. The same point applies with equal force to representational content. Insofar as mental representations can have the sort of content specificity associated with the conscious conceptualization of something as something in particular, this facet applies primarily to highly sophisticated conscious thoughts. For low-level representational neural states, some degree of indeterminacy is to be expected and should be tolerated. Moreover, the way to reduce content indeterminacy is to work on specifying representational entity function. Teleosemanticists have shown that the level of content indeterminacy can be minimized by focusing upon precisely what a representational entity was selected (or recruited) to represent (Neander 2017; Lee 2021; Piccinini 2022). But this functional analysis can come with varying degrees of imprecision and lack of specificity regarding the representation's target. Some degree of content indeterminacy may be an endemic feature of representation, and not something that should be treated as a serious barrier to constructing a naturalistic theory of representation.

With regard to the intensionality of thought, in a recent paper Mann and Pain (2022) remind us that the explanation of certain Frege puzzles has little to do with content, and everything to do with representational entity function.


Frege puzzles arise because representations with the same truth conditions can be instantiated in vehicle states with different functionality – states that play different conceptual roles. Intensionality is less problematic for a theory of mental representation once we properly focus upon the mechanics of representational entities; it is exactly the sort of phenomenon that is easily handled by turning away from content and instead attending to the functionality of representational vehicles.

5. Non-supervenience and causal relevance: The property of being successful camouflage does not supervene on the intrinsic biological properties of an organism or the specific camouflaging mechanism. Obviously, successful emulation depends upon the nature of the background environment and, presumably, the sensory system of the predator or prey. If we insist that an organism's fitness is explained by the organism's inherent physical make-up, and recognize that successful camouflage does not supervene on that inherent physical make-up, then it seems successful camouflage does not explain an organism's reproductive fitness. But if camouflage doesn't contribute to an organism's reproductive fitness, how could it come about through evolution or be explanatorily relevant?

In ethology, this is a non-issue for investigators because they properly reject the idea that all explanatorily relevant features must supervene on an organism's intrinsic properties. Although successful emulation does not supervene on the physical organism, it certainly causally explains how animals can survive and enjoy reproductive fitness. The relational fact that the flounder's skin successfully blends in with the seafloor literally causes the shark to not notice it. The fact that its skin successfully emulates the seafloor causally explains how it avoids being eaten, and thereby explains its adaptive success.

All of this applies, mutatis mutandis, to cognitive representations in the brain. The relational properties of a representational vehicle that ground content do not directly contribute to the causal powers of that vehicle. But the fact that a representation accurately depicts some feature of the environment that the organism must navigate does cause the behavior to be successful, and thus promotes reproductive fitness (see also Lee 2021 and Piccinini 2022). Proper functioning of the representational system causes the organism's behavior to properly fit with the environment, and proper fitting promotes adaptiveness. Indeed, as argued by Gladziejewski and Milkowski (2017), for internal models, maps, and other sorts of so-called S-representations, the structural similarity of the representational vehicle to the distal target is explanatorily salient in a manner that reflects the way a camouflage's emulation relation is explanatorily salient. In both cases, degrees of similarity to a target give rise to degrees of reproductive fitness. If camouflage investigators are not distressed over these matters, then neither should cognitive scientists exploring theories of mental representation.12

12 In his recent book, Matej Kohar (2023) argues that a localist form of neural mechanistic explanation of cognition cannot invoke representational content because intentional content extends beyond neural elements and processes. Insofar as his arguments are sound, they would seem to apply equally well to a mechanistic explanation of the survival value of camouflage, or any other adaptation that involves organism-world relational properties. This suggests, unsurprisingly, that purely localist mechanistic explanation is insufficient for a complete accounting of behavior and adaptability.


What about the claim that commonsense psychology is committed to content, as such, being a direct cause of behavior? Are we not committed to saying I went to the fridge because the content "There is beer in the fridge" caused me to do so? In truth, there is no reason to think that commonsense psychology is committed to this. Ascertaining the actual commitments of folk psychology is largely an empirical matter; however, prima facie, there is no clear justification for claiming that people believe that the content of their mental representations causes those mental representations to do whatever they do. Instead, people think that states like beliefs generate behavior suitable to whatever is believed. We assume that believing that there is beer in the fridge causes a person to go to the fridge in the same way we assume that putting quarters in the vending machine causes it to dispense a candy bar (Neander 2017). The relevant relational properties – i.e., representing the proposition "there are beers in the fridge" or having been created in the U.S. Mint in Philadelphia – are not treated as causally efficacious by most people.

4.1 Summary

The point of this section has been to suggest that, in the case of cognitive representation, there is good reason to think we are unnecessarily making trouble for ourselves by focusing upon the wrong sorts of questions and issues. The sorts of issues that are treated as hurdles to understanding representational content are actually just curiosities associated with other, non-representational functional entities and the relations they involve. We have a relatively uncomplicated and straightforward understanding of biological camouflage, which involves a relation – emulation – that helps animals avoid detection. No one thinks there are deep problems here. And yet, as we have seen, in a number of significant ways camouflage is similar to representation: both share certain dimensions that result from the functional roles involved. Of course, my prescription is not that we come to see natural camouflage as deeply problematic. It is the opposite recommendation: we should stop thinking about representational content as something that is overly baffling or vexing. In particular, we should stop over-emphasizing the quirks associated with representational content and redirect our focus upon representational vehicles, addressing the question of how brain states can actually function as representations. Insofar as there are truly hard problems, those problems pertain to the functionality of the representational states and structures, and to providing an account of how representational function can be instantiated in brains. Indeed, with a proper understanding of how brain states function as representations in hand, we will almost certainly get a suitable account of content more or less for free.

5 Caveats and Confessions

There are a few important qualifications I need to make with regard to my vehicle/content priority inversion.


as easy to understand as something like camouflage. I concede that there are certain aspects of representation that pose unique explanatory challenges. For instance, given that our everyday exemplars of non-mental representation involve a full-blown interpreting mind as a representation user, it is unclear how representation can be an internal component part of a mind. It is a challenge to explain how something can function as a representation without there being an internal homuncular mind treating the representational entity as something that is a source of information. The functional role of informing or signaling is not a straightforward causal role, and insofar as representations are “consumed” by downstream sub-systems, it is not obvious just what, exactly, this consumption must involve. So explaining how a brain state can function as a representational entity is of course a major challenge. Nevertheless, it is a challenge that, when properly understood, should shift our explanatory priorities away from content, and toward the genuine explanatory difficulties associated with illuminating representational brain function. Fortunately, teleosemanticists have started addressing many of these more legitimate questions about representation function (see also Piccinini 2022 and Lee 2021). A second matter involves the connection of my message to the paradigm of teleosemantics. A proponent of teleosemantics might be annoyed with the degree to which I am borrowing from their central message, and insist that I’m just saying what they’ve been saying all along. I certainly want to acknowledge the substantial importance of teleosemantic theorizing in helping us understand representation. However, even for many teleosematicists, there is still an overemphasis upon content that has distorted our understanding of how teleology itself is relevant. Despite the way teleosemantics is generally described (see, for example, Stich and Warfield 1994; von Eckardt 2012), teleology is not just another potential natural reduction base for content, vying alongside other natural relations like covariation or structural similarity. Instead, teleosemantics should be viewed as our taking advantage of the simple fact that representational vehicles are, essentially, functional kinds. It is appreciating that, as with all functional kinds, teleology provides a kind of normativity and allows us to explain various key dimensions. It allows us to say which covariation or which structural similarity relations to which relata are functionally appropriate in specific circumstances, and which ones are not. Properly framed, there is really no such thing as a teleosemantic theory of representational content because any suitable representational theory needs to be a theory about something that, by definition, is teleological in nature. It makes no more sense to talk about teleological theories of representation than it does to talk about teleological theories of valve-lifters or hearts. Teleosemanticists have revealed the ways in which the essential functionality of representational entities explains various aspects of representation; however, this work should not be viewed as the development or promotion of a theory of content grounding, as such. An illustration of how teleosemantics can be misconstrued as a theory of content is provided by telesosemanticists Mann and Pain (2022). 
As noted above, these authors point out that Frege puzzles can be handled by recognizing that functionally different representational entities can nevertheless have the same truth conditions (e.g., the vehicle instantiating the belief that Samuel Clemens has died can play a different cognitive role than the vehicle instantiating the belief that Mark Twain has died).


This is a classic example of how the functionality of representational entities and states can explain a critical feature of mental representation often associated with content. And yet Mann and Pain oddly insist that, because this explanation focuses upon representation vehicle functionality and not directly upon content, Frege puzzles do not fall within the domain or explanatory purview of teleosemantics. They say this, presumably, because (like most writers) they regard teleosemantics as a theory about content, or content grounding. But given the central argument of this essay, we can see that this assessment is misguided. Mann and Pain should instead treat this solution as one of the virtues of a teleologically-minded and entity-focused account of mental representation.

Finally, I should acknowledge my own culpability in contributing to the problem I am trying to expose here. For example, in my 2016 paper, "Untangling Two Questions About Mental Representation", I argued that we should treat questions about how a neural or computational state functions as a representation as distinct from questions about how any state functioning in that manner comes to have the intentional content that it does. And while I properly argued that one should not expect a theory of content to provide a full-blown theory of representation, I now believe I did not go far enough. Not only can a theory of content grounding not serve as a theory of representation; in fact, a theory of representation function may go a long way toward providing a theory of content grounding. Instead of treating a theory of representational function as merely equal in importance to a theory of representational content, I should have recommended the stronger view that a theory of function is more primary and central, and that it is where our investigative priorities should lie.

6 Looking Ahead

I've argued that in trying to make sense of how the brain represents information, philosophers have wandered down the wrong path. To summarize, I've suggested that many are wrong to believe: that standard approaches to content cannot work because content is so bizarre and inexplicable; that the way to explain representation is to construct, first and foremost, a theory of intentionality, and then use that to make sense of what a functioning representation is (instead of doing the opposite); that the problematic features of content are unique and don't arise in other cases of natural relations in biology; that the main challenges to explaining low-level representation are due to intentionality, as such, and not due to making sense of how something can function as a representation; and that teleosemantics should be treated as a theory of content, alongside covariation or structural similarity theories, as opposed to taking seriously the basic fact that representations are functional entities. The shift in perspective I am suggesting would move away from these views. Of course, many philosophers have already been making this shift. But even for philosophers of cognitive science who do not lose sleep over the hard problem of content, there is nevertheless a strong emphasis upon explaining content and a de-emphasis upon questions about just what, exactly, neural states do that constitutes their status as representations. So I'm suggesting that a reorientation of our explanatory priorities is needed, towards representational entities and away from content.


With this sort of reorientation, how would things be different? With the shift in investigative priorities that I am recommending, more philosophers would attend to the legitimate and real challenges that confront a proper understanding of representation in the brain, and would thereby come to see content-grounding in a new light. The aim of this paper has not been to attack conventional accounts of content grounding; indeed, it has been to support them. For example, it is often claimed that representational content cannot be reduced to some sort of covariation relation because covariation is inadequate for real content. But there is nothing deeply wrong with covariation serving as a suitable ground for content. The real problems pertain to how we specify a representational function for a vehicle that is linked to its intentional object via covariation. The problem is that on many theories that invoke covariation, the vehicle's role is best described as that of a functioning causal mediator or relay circuit (Ramsey 2007). With that proper function, the covariation relation is not serving to provide information and is not any type of content. When something goes wrong, the connection can only be described as inappropriate or unsuitable, but not false. In the language preferred by some teleosemanticists, it is not being consumed as a representation. What we need is a specification of the functional role of the vehicle on which it is properly described as functioning as an informer or representer for its consumer. With that sort of role, we can then properly characterize break-downs as cases of misrepresenting: as depicting things as being a way that they are not. So with the shift being proposed, effort would be devoted to specifying how the neurological vehicle functions in a manner such that the covariation relation works as it does, say, with a blaring smoke alarm reacting to the presence of smoke, and not as it does with the engagement of brake pads reacting to the pressure of my foot on the brake pedal. Explaining the correct functionality of the vehicle will resolve many of the issues about content that are worth caring about.

A similar point applies to how we think about accounts of representation based upon some sort of structural similarity, as with cognitive maps or models. Again, there is nothing wrong with some sort of homomorphism or structural similarity serving as a ground for content. But, as noted in Section 2, our energy needs to be directed towards articulating a theory of neuronal state function that explains how neuronal states actually instantiate something operating like a model or map. No one thinks that neurons line up spatially to implement an iconographic map that can be read off the surface of the cortex. So in what sense do brain structures implement and exploit a structural isomorphism? While there are various promising proposals out there (see, for example, Burgess and O'Keefe 2002), here again further work is needed.13

13 An illustration of this sort of approach can be found in Keifer and Hohwy (2018), who emphasize a functionalist approach to understanding representation and content in the predictive error minimization framework.

Besides steering philosophical work in a more productive direction, the shift proposed here would bring philosophical work much more into line with work in the empirical sciences, especially cognitive neuroscience. Cognitive science researchers are much more focused upon trying to understand how neurological states can actually play a representational role, and much less upon more esoteric questions about content. Researchers recognize that philosophers have important contributions to make in this regard, but when philosophers join the conversation emphasizing Twin Earth, "fleebees", non-supervenience and unicorns, empirical investigators are often left scratching their heads. A more collaborative approach would be the one I am suggesting, in which philosophical attention would be focused upon explaining representational mechanisms in the brain, and the kinds of neurological, computational and functional roles that make such mechanisms possible.14 There are several philosophical questions that need to be addressed in understanding this role, and we should focus more of our energy upon answering them. But regardless of the prospects of cross-disciplinary fit, philosophers should avoid paths of investigation that go off in counter-productive directions. As I've argued, in their efforts to understand mental representation, philosophers have pursued a flawed avenue. We need to redirect our explanatory and investigative attention more in the direction of understanding representation function. It is a course correction that is overdue.

14 For those committed to embodied and/or embedded cognition, it should be noted that, as Piccinini (2022) points out, a deeper analysis of the functionality of representations reveals that such an agenda is not only compatible with a representational theory of mind, but that in many ways the two are mutually supportive.

Acknowledgments Earlier versions of this paper were presented at the University of Nevada, Las Vegas Philosophy Colloquium, March 2022, the University of California, Davis Philosophy Colloquium, May 2022, and the Workshop on the Borders of Cognition, Bergamo, Italy, June 2022. Feedback from these audiences was extremely helpful. I am also grateful to Lel Jones and two anonymous reviewers for their helpful comments and suggestions.

Funding There is no noteworthy funding.

Declarations

Competing Interests There are no conflicts of interest.

References

Allen, C., M. Bekoff, and G.V. Lauder, eds. 1998. Nature's Purposes: Analyses of Function and Design in Biology. Cambridge, MA: The MIT Press.
Allen, C., and J. Neal. 2020. Teleological Notions in Biology. The Stanford Encyclopedia of Philosophy (Spring 2020 Edition), ed. E.N. Zalta. https://plato.stanford.edu/archives/spr2020/entries/teleology-biology/
Anderson, M.L., and G. Rosenberg. 2008. Content and Action: The Guidance Theory of Representation. The Journal of Mind and Behavior 29 (1 & 2): 55–86.
Brentano, F. 1924. Psychologie vom empirischen Standpunkt, ed. O. Kraus. Leipzig: Meiner Verlag. (English translation: Psychology from an Empirical Standpoint, ed. L.L. McAlister, trans. A.C. Rancurello, D.B. Terrell, and L.L. McAlister. London: Routledge & Kegan Paul, 1973.)
Burgess, N., and J. O'Keefe. 2002. Spatial Models of the Hippocampus. In The Handbook of Brain Theory and Neural Networks, 2nd ed., ed. M.A. Arbib. Cambridge, MA: MIT Press.



Chemero, A. 2009. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Chisholm, R. 1957. Perception: A Philosophical Study. Ithaca, NY: Cornell University Press.
Cummins, R. 1989. Meaning and Mental Representation. Cambridge, MA: MIT Press.
Dennett, D. 1978. Brainstorms. Cambridge, MA: MIT Press.
Dennett, D., and J. Haugeland. 1987. Intentionality. In The Oxford Companion to the Mind, ed. R. Gregory. Oxford: Oxford University Press.
Dretske, F. 1988. Explaining Behavior. Cambridge, MA: MIT Press.
Egan, F. 2014. How to Think About Mental Content. Philosophical Studies 170: 115–135.
Field, H. 1978. Mental Representation. Erkenntnis 13: 9–61.
Fodor, J. 1975. The Language of Thought. New York, NY: Thomas Y. Crowell.
Fodor, J. 1980. Methodological Solipsism Considered as a Research Strategy in Cognitive Science. Behavioral and Brain Sciences 3 (1): 63–109.
Fodor, J. 1987. Psychosemantics. Cambridge, MA: MIT Press.
Forbes, P. 2009. Dazzled and Deceived: Mimicry and Camouflage. New Haven, CT: Yale University Press.
Gladziejewski, P., and M. Milkowski. 2017. Structural Representations: Causally Relevant and Different From Detectors. Biology and Philosophy 32 (3): 337–355.
Goodman, N. 1968. Languages of Art: An Approach to a Theory of Symbols. Indianapolis, IN: Bobbs-Merrill.
Grush, R. 2004. The Emulation Theory of Representation: Motor Control, Imagery, and Perception. Behavioral and Brain Sciences 27 (3): 377–396.
Hutto, D.D., and E. Myin. 2012. Radicalizing Enactivism. Cambridge, MA: MIT Press.
Keifer, A., and J. Hohwy. 2018. Content and Misrepresentation in Hierarchical Generative Models. Synthese 195: 2387–2415.
Kohar, M. 2023. Neural Machines: A Defense of Non-Representationalism in Cognitive Neuroscience. Cham, Switzerland: Springer.
Lee, J. 2021. Rise of the Swamp Creatures: Reflections on a Mechanistic Approach to Content. Philosophical Psychology 34 (6): 805–828.
Mann, S.F., and R. Pain. 2022. Teleosemantics and the Hard Problem of Content. Philosophical Psychology 35 (1): 22–46.
Milkowski, M. 2015. The Hard Problem of Content: Solved (Long Ago). Studies in Logic, Grammar and Rhetoric 41 (54): 73–88.
Millikan, R. 1984. Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press.
Millikan, R. 2009. Biosemantics. In The Oxford Handbook of Philosophy of Mind, ed. B. McLaughlin, A. Beckermann, and S. Walter, 394–406. Oxford: Oxford University Press.
Neander, K. 2017. The Mark of the Mental. Cambridge, MA: MIT Press.
Piccinini, G. 2022. Situated Neural Representation: Solving the Problems of Content. Frontiers in Neurorobotics 16: 1–13.
Putnam, H. 1975. The Meaning of 'Meaning.' In Language, Mind and Knowledge, ed. K. Gunderson, 131–193. Minneapolis: University of Minnesota Press.
Ramsey, W. 2007. Representation Reconsidered. Cambridge: Cambridge University Press.
Ramsey, W. 2016. Untangling Two Questions About Mental Representation. New Ideas in Psychology 40: 3–12.
Searle, J. 1980. Minds, Brains and Programs. Behavioral and Brain Sciences 3: 417–424.
Sellars, W. 1957. Intentionality and the Mental: A Symposium by Correspondence with Roderick Chisholm. In Minnesota Studies in the Philosophy of Science, vol. II, ed. H. Feigl, M. Scriven, and G. Maxwell, 507–539. Minneapolis: University of Minnesota Press.
Shea, N. 2018. Representation in Cognitive Science. Oxford: Oxford University Press.
Stampe, D. 1977. Towards a Causal Theory of Linguistic Representation. Midwest Studies in Philosophy 2 (1): 42–63.
Stich, S. 1983. From Folk Psychology to Cognitive Science: The Case Against Belief. Cambridge, MA: MIT Press.
Stich, S., and T. Warfield. 1994. Mental Representation: A Reader. Oxford: Basil Blackwell.
Swoyer, C. 1991. Structural Representation and Surrogative Reasoning. Synthese 87: 449–508.
van Gelder, T. 1995. What Might Cognition Be, If Not Computation? The Journal of Philosophy 92: 345–381.


von Eckardt, B. 2012. The Representational Theory of Mind. In The Cambridge Handbook of Cognitive Science, ed. K. Frankish and W. Ramsey, 29–49. Cambridge: Cambridge University Press.
Wright, L. 1976. Teleological Explanation. Berkeley, CA: University of California Press.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
