Mental Representation

Humans are surrounded by representations. Some of these occur naturally (for example, tree rings and footprints), while others are artificial (for example, words and photographs). Whatever their origins, all share a fundamental feature: they stand for something. Tree rings stand for the age of the tree; footprints stand for the entity that made them; a photograph stands for that which it pictures; and a proper name stands for the person so named.

There is considerable evidence that representations are not confined to the external world; they figure in people's mental lives as well. Perceptual illusions provide intuitive motivation: a straight oar in water appears bent, amputees may experience "phantom" tactile sensations in a missing limb, and some series of tones sound as if they are continually rising in pitch. In each of these cases, it is natural to explain the phenomenon by positing mental representations (albeit incorrect ones).

History

The Western intellectual tradition has long recognized the importance of representation in thought. A dramatic example is provided by the philosopher René Descartes (1596–1650), who hypothesized the existence of an "evil demon" that deceived him in every way, causing all of his mental representations to be false. Descartes's emphasis on the role of mental representations in thought was influential, and subsequent thinkers, John Locke (1632–1704), David Hume (1711–1776), George Berkeley (1685–1753), and John Stuart Mill (1806–1873), to name a handful, expanded the discussion.

Contemporary work on mental representation owes as much to the theory of computation as it does to modern philosophy. In the mid-twentieth century, psychology was dominated by behaviorism, according to which mental entities, even if they exist, have no theoretical role in explanations of behavior. Instead, behavior was to be explained exclusively in terms of stimulus-response patterns. However, in the 1950s, opponents argued that this goal could not be achieved; one must posit internal representations. Moreover, these researchers came armed with a powerful new tool: the methods and concepts of the theory of computation.

The abstract principles of information processing upon which devices such as personal computers are based constitute the theory of computation. According to this theory, computation consists in the rule-governed manipulation of symbols. For example, consider doing long division using pencil and paper. A problem is written on the paper as a set of numerals, and solving it involves repeatedly applying a series of basic rules: division, multiplication, subtraction, writing additional symbols on the paper, and so forth. Analogously, an influential predecessor of today's computers, proposed in 1936 by the mathematician Alan Turing (1912–1954), consists of a tape for storing symbols, a means for writing and reading symbols to and from the tape, and a controller encapsulating the rules for performing a calculation using those symbols.
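To make the idea concrete, the following sketch simulates such a machine in Python. The rule table, which flips the bits of a binary string and halts at the first blank cell, is purely illustrative and not one of Turing's own examples.

```python
# A minimal sketch of a Turing-style machine: a tape of symbols,
# a read/write head, and a controller given as a rule table.
# The rule table here (illustrative only) flips every bit on the
# tape and halts upon reading a blank cell.

def run_turing_machine(tape, rules, state="start", head=0):
    tape = dict(enumerate(tape))          # tape cells indexed by position
    while state != "halt":
        symbol = tape.get(head, "_")      # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # write a symbol to the tape
        head += {"L": -1, "R": 1}[move]   # move the head one cell
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("start", "0"): ("1", "R", "start"),  # flip 0 to 1, move right
    ("start", "1"): ("0", "R", "start"),  # flip 1 to 0, move right
    ("start", "_"): ("_", "R", "halt"),   # blank cell: halt
}

print(run_turing_machine("1011", rules))  # prints "0100_"
```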

Turing's machines are machines in name only; in actuality, they are abstract definitions, introduced to address a problem of mathematical logic. Regardless, he and others realized that variations on these machines could be physically realized. It was only a small step to infer that human cognitive processes are computational, involving the manipulation of mental representations according to "rules of thought."

Wielding the conceptual tools of computation, opponents of behaviorism initiated a flurry of research on mental representation and created entirely new areas of research, such as artificial intelligence and computational linguistics.

Three questions receiving considerable attention in contemporary research are: How do mental representations come to have their contents? What conditions must an entity satisfy in order to be a mental representation? And, what is the format of mental representation?

Mental Content

Concerning mental content, one hypothesis is that the content of a representation is determined by the role it plays in a person's overall system of mental representations. As an analogy, consider chess pieces: the difference between a knight and a bishop consists in the fact that they have distinctive ways of moving; they play different roles in the game. Similarly, mental representations with distinct contents differ in the types of inferences they invite: believing that Tom is tall leads to different conclusions than does believing he is handsome, and each of these beliefs will be the result of different sets of prior beliefs. Perhaps, then, these differences in roles are constitutive of the content of those beliefs.

One difficulty such theories face is that, because it is unlikely that any two people will share precisely the same network of representations, no two people possess beliefs with the same contents, a counterintuitive result. Additionally, the relationship between playing a certain role and "standing for" something is unclear. How do representations come to relate to things in the external world?

An alternative theory makes more of the representation-world relationship, proposing that the content of a representation is determined by its resemblance to things in the world. Just as a painting of Napoléon Bonaparte represents Napoléon (and not Abraham Lincoln) because it looks like Napoléon (and not Lincoln), so a mental representation has the content it does because it resembles that which it represents. This proposal also faces challenges. For example, if for a representation to resemble something is to picture it, how can abstract entities (e.g., "justice") be represented?

A descendant of the resemblance theory hypothesizes that the causal relations a representation has with the environment fix the content of that mental representation. According to a simple version of this theory, content is determined by the things that normally cause that representation to occur. For example, the "eagle" concept has the content it has because eagles (as opposed to other types of animals) cause one to think about eagles, that is, to instantiate that representation.

Despite obvious flaws in this simple theory (e.g., if a duck causes someone to think about eagles, then ducks are part of the content of the "eagle" representation), more elaborate causal theories of content remain popular.

The Nature of Mental Representation

All representations have content; but do they typically possess other features as well? One proposal is that, for an entity to be a representation, it must also be capable of standing for its object in the absence of that object. On this view, for example, the level of mercury in a thermometer would not represent the temperature of a room, because the mercury cannot stand for that temperature in its absence: if the temperature were different, the level would change.

Even when this constraint is satisfied, different types of mental representation are distinguishable. For example, throughout the day, the sunflower rotates to face the sun. Moreover, this rotation continues even when the sun is occluded. Consequently, it stands to reason that somewhere there is a physical process that represents the location of the sun and that guides the flower's rotation during those instances when the sun is not present. However, there is very little a sunflower can do with this representation besides guide its rotation. Humans, in contrast, are not subject to this limitation. For instance, when seeing a cat, a person may represent its presence at that moment. Furthermore, the person may think about the cat in its absence. But, unlike the sunflower, that person can also think arbitrary thoughts about the cat: "That cat was fat"; "That cat belonged to Napoléon," etc. It seems as if the "cat" representation can be used in the formation of larger, and completely novel, aggregate representations. So, while possessing content is a necessary feature of mental representation, there may be additional features as well.

Representational Format

Some representations play certain roles better than others. For example, a French sentence conveys information to a native French speaker more effectively than does the same sentence in Swahili, despite the two sentences having the same meaning. In this case, the two sentences have the same content yet differ in the way in which they represent it; that is, they utilize different representational formats. The problem of determining the correct representational format (or formats) for mental representation is a topic of ongoing interdisciplinary research in the cognitive sciences.

One hypothesis was mentioned above: human cognition consists in the manipulation of mental symbols according to "rules of thought." According to Jerry Fodor's influential version of this theory, cognition requires a "language of thought" operating according to computational principles. Individual concepts are the "words" of the language, and rules govern how concepts are assembled into complex thoughts, the "sentences" of the language. For example, to think that the cat is on the mat is to take the required concepts (i.e., the "cat" concept, the "mat" concept, the relational concept of one thing being on top of another, etc.) and assemble them into a mental sentence expressing that thought.

The theory enjoys a number of benefits, not least that it can explain the human capacity to think arbitrary thoughts. Just as the grammar for English allows the construction of an infinite number of sentences from a finite set of words, so, given the right set of rules and a sufficient number of basic concepts, any number of complex thoughts can be assembled.
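A minimal sketch of this combinatorial picture appears below. The symbols and the composition rule are illustrative stand-ins, not Fodor's own notation; the point is only that a single rule-governed operation yields arbitrarily many novel complex representations.

```python
# A minimal sketch of a "language of thought" style combinatorial
# system. Concepts are atomic symbols; a composition rule assembles
# them into structured "mental sentences" (here, nested tuples).
# All names are illustrative.

CAT, MAT, ON = "cat", "mat", "on"

def compose(relation, subject, obj):
    """Assemble concepts into a complex thought, e.g. ON(CAT, MAT)."""
    return (relation, subject, obj)

thought = compose(ON, CAT, MAT)   # "the cat is on the mat"
print(thought)                    # ('on', 'cat', 'mat')

# Because composition is rule-governed, novel thoughts come for free:
print(compose(ON, MAT, CAT))      # "the mat is on the cat"
```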

However, artificial neural networks may provide an alternative. Inspired by the structure and functioning of biological neural networks, artificial neural networks consist of networks of interconnected nodes, where each node in a network receives inputs from and sends outputs to other nodes in the network. Networks process information by propagating activation from one set of nodes (the input nodes) through intervening nodes (the hidden nodes) to a set of output nodes.

In the mid-1980s, important theoretical advances in neural network research heralded their emergence as an alternative to the language of thought. Where the latter theory holds that thinking a thought involves assembling some mental sentence from constituent concepts, the neural network account conceives of mental representations as patterns of activity across nodes in a network. Since a set of nodes can be considered as an ordered n-tuple, activity patterns can be understood as points in n-dimensional space. For example, if a network contained two nodes, then at any given moment their activations could be plotted on a two-dimensional plane. Thinking, then, consists in transitions between points in this space.
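The following sketch illustrates this picture with a toy feedforward network. The weights are arbitrary illustrative values; the point is that each layer's activity pattern is simply a point in an n-dimensional space.

```python
import numpy as np

# A minimal sketch of a feedforward network. Activation propagates
# from two input nodes, through three hidden nodes, to two output
# nodes; each layer's activity pattern is a point in n-dimensional
# space. The weights below are arbitrary illustrative values.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W_hidden = np.array([[0.5, -0.3], [0.8, 0.1], [-0.6, 0.9]])  # 3x2
W_output = np.array([[1.0, -1.0, 0.5], [0.2, 0.7, -0.4]])    # 2x3

inputs = np.array([1.0, 0.0])        # a point in 2-D input space
hidden = sigmoid(W_hidden @ inputs)  # a point in 3-D hidden space
output = sigmoid(W_output @ hidden)  # a point in 2-D output space

print(hidden)  # the network's distributed "representation" of the input
print(output)
```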

Artificial neural networks exhibit a number of features that agree with aspects of human cognition. For example, they are architecturally similar to biological networks, are capable of learning, can generalize to novel inputs, and are resistant to noise and damage. Neural network accounts of mental representation have been defended by thinkers in a variety of disciplines, including David Rumelhart (a psychologist) and Patricia Churchland (a philosopher). However, proponents of the language of thought continue to wield a powerful set of arguments against the viability of neural network accounts of cognition. One of these has already been encountered above: humans can think arbitrary thoughts. Detractors charge that networks are unable to account for this phenomenon, unless, of course, they realize a representational system that facilitates the construction of complex representations from primitive components, that is, unless they implement a language of thought.

Regardless, research on artificial neural networks continues, and it is possible that these objections will be met. Moreover, there exist other candidates.

One such hypothesis, extensively investigated by the psychologist Stephen Kosslyn, is that mental representations are imagistic, a kind of "mental picture." For example, when asked how many windows are in their homes, people typically report that they answer by imagining walking through their home. Likewise, in one experiment, subjects are shown a map with objects scattered across it. The map is removed, and they are asked to decide, given two objects, which is closest to a third. The time it takes to decide varies with the distance between the objects.

A natural explanation is that people make use of mental imagery. In the first case, they form an image of their home and mentally explore it; in the second, a mental image of the map is inspected, and the subject "travels" at a fixed speed from one object to another.
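The following sketch shows how the scanning account delivers this result: if the map is stored image-like, as coordinates traversed at a fixed speed, then distance-proportional response times fall out immediately. The object names, positions, and speed constant are all illustrative.

```python
import math

# A minimal sketch of the scanning account of the map experiment.
# The mental "image" is modeled as object coordinates, and attention
# "travels" across it at a fixed speed, so decision time is
# proportional to distance. All values are illustrative.

objects = {"tree": (0, 0), "well": (3, 4), "hut": (6, 8)}
SCAN_SPEED = 2.0   # arbitrary units of distance per second

def scan_time(a, b):
    """Time to mentally travel from object a to object b."""
    (x1, y1), (x2, y2) = objects[a], objects[b]
    return math.hypot(x2 - x1, y2 - y1) / SCAN_SPEED

print(scan_time("tree", "well"))  # 2.5 s (distance 5)
print(scan_time("tree", "hut"))   # 5.0 s (distance 10)
```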

The results of the map experiment seem difficult to explain if, for example, the map were represented mentally as a set of sentences in a language of thought, for why would there then be differences in response times? That is, the differences in response times seem to be obtained "for free" from the imagistic format of the mental representations. A recent elaboration on the theory of mental imagery proposes that cognition involves elaborate "scale models" that not only encode spatial relationships between objects but also implement a simulated physics, thereby providing predictions of causal interactions as well.

Despite the potential benefits, opponents argue that a nonimagistic account is available for every phenomenon in which mental imagery is invoked, and furthermore, purported neuroscientific evidence for the existence of images is inconclusive.

Perhaps the most radical proposal is that there is no such thing as mental representation, at least not as traditionally conceived. According to dynamic systems accounts, cognition cannot be successfully analyzed by positing discrete mental representations, such as those described above. Instead, mathematical equations should be used to describe the behavior of cognitive systems, analogous to the way they are used to describe the behavior of liquids, for example. Such descriptions do not posit contentful representations; instead, they track features of a system relevant to explaining and predicting its behavior. In favor of such a theory, some philosophers have argued that traditional analyses of cognition are insufficiently robust to account for the subtleties of cognitive behavior, while dynamical equations are. That is, certain sorts of dynamical systems are computationally more powerful than traditional computational systems: they can do things (compute functions) that traditional systems cannot. Consequently, the question arises whether an adequate analysis of mental representations will require this additional power. At present the issue remains unresolved.
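The following sketch suggests what such a description looks like in practice. The state of the system is a single continuous variable governed by an illustrative differential equation (not drawn from any particular cognitive model), integrated numerically.

```python
# A minimal sketch of a dynamical-systems description: instead of
# discrete symbols, the system's state is a continuous variable x
# governed by a differential equation dx/dt = f(x), integrated here
# with Euler's method. The equation and constants are illustrative.

def f(x):
    return x * (1.0 - x)      # a simple nonlinear rate of change

x, dt = 0.1, 0.01             # initial state and time step
for step in range(1000):      # simulate 10 time units
    x += dt * f(x)            # Euler update: x(t+dt) = x(t) + dt*f(x)

print(x)  # the state settles toward the attractor at x = 1.0
```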

See also Consciousness; Mind; Philosophy of Mind.

Bibliography

Davis, Martin, ed. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions. Hewlett, N.Y.: Raven Press, 1965. This volume collects historically important papers on the logical foundations of computation.

Fodor, Jerry A. The Language of Thought. Cambridge, Mass.: Harvard University Press, 1975.

Fodor, Jerry A. A Theory of Content and Other Essays. Cambridge, Mass.: MIT Press, 1990.

Haugeland, John, ed. Mind Design II. Cambridge, Mass.: MIT Press, 1997. A well-rounded collection of important philosophic essays on mental representation, ranging from classic papers in artificial intelligence to more recent developments such as artificial neural networks and dynamical systems.

Hodges, Andrew. Alan Turing: The Enigma. New York: Simon and Schuster, 1983. This engrossing biography of Alan Turing offers, among many other insights, well-written informal accounts of Turing's contributions to logic and computation, including his theories on the role of mental representation in cognition.

Kosslyn, Stephen M. Image and Brain: The Resolution of the Imagery Debate. Cambridge, Mass.: MIT Press, 1994.

McCartney, Scott. ENIAC: The Triumphs and Tragedies of the World's First Computer. New York: Walker, 1999.

McClelland, James L., David E. Rumelhart, and the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, Mass.: MIT Press, 1986. Published in two volumes, these works played a major role in reintroducing artificial neural networks to cognitive science.

Watson, John B. "Psychology as the Behaviorist Views It." Psychological Review 101, no. 2 (1913/1994): 248–253. A classic statement of the behaviorist program in psychology by a pioneer of the field.

Watson, Richard A. Representational Ideas: From Plato to Patricia Churchland. Dordrecht, Netherlands, and Boston: Kluwer, 1995.

Whit Schonbein
