Psychologists assume there must be "memory traces". Are they right to do so?

The theory of memory traces is one that dates back to Plato and Aristotle. They spoke of memories as imprints on wax, bodily changes that accompanied experience. The modern conception of a memory trace is substantially different from this idea, but the central notion of 'a trace' still remains. Trace theorists assert that past experiences leave physical effects in the brain, and that these effects are in fact required for something to count as a memory. The theory's purpose is not just to explain the nature of memories, but also to say what qualifies as a memory, to provide a test that differentiates memories from other mental events, such as imaginings. Modern trace theories are mostly connectionist, an approach that evolved out of associationism. Part of the reason for this is perhaps the advent of computers, since the connectionists make heavy use of computer analogies. This does not in itself tell us anything about the theory; it is simply an example of how we try to understand, through analogy, things that we cannot fully comprehend. However, it is often in doing so that problems arise. People take the analogy too far, and start treating it as a case of "mind is a computer" rather than "mind is like a computer". Both sides of the debate are guilty of this, and it is something that must be avoided.

To begin with we need to define what a memory trace is, and this is harder than it may seem. Not only are there many different conceptions of what qualifies as a valid memory trace, but most of them are somewhat vague about the actual nature of a trace. However, there seem to be several elements that most trace theories call upon. First amongst these is the idea of a causal element to memory: the past experience must somehow cause the trace, which must then cause the memory to occur. There may be other causes involved at each step, but these are the necessary, if not sufficient, conditions. The next essential element of a trace theory is that the traces store something, that they have an informational content. This seems reasonable enough, since traces that stored nothing would struggle to provide memories. Another element that most trace theories contain is that the traces are structurally isomorphic with the memory they contain. This concept seems somewhat strange to me, and the accusation often levelled at it is that it is assumed only because of the metaphors used to describe traces, such as "imprinted", a word that suggests some sort of direct copy. I personally do not think that structural isomorphism is a particularly coherent idea, but neither do I think that it is a necessary part of a trace theory. The final part of a trace theory is that the trace must exist as an entity. It is usually asserted to be in the brain or the central nervous system, since most trace theorists are physicalists, but the idea is not incompatible with some forms of dualism.

Having defined what a memory trace is, we must now move on to assessing whether or not it is a valid notion. There are three main lines of attack on it. The first is to take issue with the idea that a causal link is necessary in memory. The argument for requiring such a link is that action at a distance is incoherent, so there must be a present-time cause for someone's recall of a memory. If there is a cause now, then it has to be something within the person, and a trace that was caused by the past event the person is recalling is a logical choice. However, there are two possible ways of describing the link between the creation of the trace and what is usually termed its 'activation'. Most theories hold that there is a continuous causal chain between the initial event and the memory recall, but Bertrand Russell postulated a role for mnemic causation, that is, causation that has a temporal gap. He thought that part of the cause of remembering something was the past event itself. This idea seems to combine direct and indirect realism, and I am not sure what it achieves by doing so: it inherits the weaknesses of both and solves only a few of the indirect realist's problems. The obvious argument against the causal chain idea is to attack the causal principle it is based on, namely that action at a distance is incoherent. However, it is a firmly held belief of most people that the past cannot directly affect the present, and that only through some indirect medium can this be achieved. The principal group who would reject this are the direct realists, who claim that memory is the perception of the past event, that there is no intermediary and no causal chain, simply a cause. However, the problems inherent in this idea have led to its decline in support: how can we perceive the past, why are we so often wrong, and a host of other questions. It seems that until a reasonable mechanism for a non-causative theory is developed, the idea of a causal chain is the best one. Whilst some might see it as an assumption, it is based upon the central idea of causation that no cause can be distant from its effect.

The next area to take issue with is that of structural isomorphism, a concept that seems to me to be both problematic and unnecessary. What this idea entails is that a memory trace must have the same structure as the memory, or at least a significantly similar one. I find the concept itself somewhat bizarre, as though we had actual sensoria in our brains which, when accessed, recreate the scene for us. A better analogy is that of a record, where every change in the sound is accompanied by a change in the groove. Even this seems strange to me, not least because the mind is unlikely to be as linear as a record. To employ the computer analogy, the information stored on a hard disk has no structural isomorphism with what it is interpreted as. The image of a ball, when stored, does not have a round shape; it probably doesn't have any coherent shape at all. The only similarity between what we see and what is actually stored is perhaps that the more information there is in the one, the more there must be in the other. However, that is not really a concept of structure, since structure is to do with the shape or format of something. If isomorphism is not necessary in this case, why must it be present in memories?
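To make the hard-disk point concrete, here is a minimal sketch; the array, its size, and the compression step are all invented for illustration, not a claim about how any particular disk or format works. A small image that we interpret as a ball ends up on disk as a flat run of bytes in which nothing remotely round can be found.

```python
import zlib
import numpy as np

# A tiny "image" of a ball: a filled disc on an 8x8 grid of 0s and 1s.
y, x = np.ogrid[:8, :8]
ball = ((x - 3.5) ** 2 + (y - 3.5) ** 2 <= 9).astype(np.uint8)

# What actually ends up on disk is a flat sequence of bytes; after
# compression, even the row-by-row layout visible in the array is gone.
stored = zlib.compress(ball.tobytes())

print(ball)               # the picture we interpret as round
print(list(stored[:16]))  # the opening bytes of what is stored: no round structure
```

The stored bytes carry the same information as the picture, but share none of its shape, which is all the analogy is meant to show.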

The next area where trace theory can be attacked is in how our recall functions. Sutton terms this problem the "four-pronged fork". The essential question is how we recall our memories, these traces, accurately. The first straw-man solution is that there is some sort of internal homunculus who retrieves the memory, a notion that seems plainly absurd. The second is that there is some internal mechanism that tells the brain where to find the memories, but that would require an infinite regress of mechanisms, each telling the one below it how to function. It seems to me that the first solution is in fact part of this category, as the idea of a homunculus is simply a way of describing a mechanism. The third straw-man is that you could use the past event itself to check that the memory is the correct one, but that obviously requires some sort of direct realism. Finally, it is argued, if you refute all of these and have no mechanism for ensuring the validity of your memories, then you are left open to scepticism and solipsism. However, it seems that this problem, as well as others, can be solved by employing distributed traces.

Distributed traces are an evolution of trace theory that does not use singular localised memories, but rather an integrated memory storage and processing system of which memories are a functioning part. This is often called the parallel distributed processing (PDP) model, a name showing its strong links with computers. The idea seems to stand up to a lot of the criticisms of localised traces, and to describe a system that functions much like we see the human brain function. The essential idea is that memories do not exist independently of each other; they are superimposed, one on the other, and remembering involves reconstructing them rather than simply recalling them. The idea also incorporates inference, since we don't always have complete memories, and the ideas of interference and decay, which allow for memory inaccuracy and loss; it can also explain how we recall memories properly. Under this theory, memory works by association, in that one piece of a memory may have neural links to another, and by following these links we can reconstruct a memory, not necessarily a complete one, but that seems to be our experience of memories anyway. The PDP model seems to function like field theory in some ways, in that a present event will have an effect on the field as a whole, effectively creating a new field, which then goes on to affect all future fields, or it may become an insignificant part of the field and fade away. This very functionalist view of mental state and input perhaps benefits from the internal mechanisms postulated by the PDP model, since it is not quite so much of a black box.
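As a rough illustration of what "superimposed, distributed storage with reconstruction from partial cues" can mean in practice, here is a minimal sketch of a simple Hopfield-style associative network. The number of units, the number of stored patterns, and the update rule are choices made purely for illustration, not a claim about how the brain, or any particular PDP model, actually works.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_memories = 64, 3

# Three arbitrary "memories": patterns of +1/-1 over the same 64 units.
memories = rng.choice([-1, 1], size=(n_memories, n_units))

# Storage: every memory is superimposed onto one shared weight matrix,
# so no single location holds any one memory on its own.
weights = np.zeros((n_units, n_units))
for m in memories:
    weights += np.outer(m, m)
np.fill_diagonal(weights, 0)

def reconstruct(cue, steps=5):
    """Recall by reconstruction: each unit repeatedly follows its
    associative links to the others, settling towards a stored pattern."""
    state = cue.astype(float).copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1.0
    return state.astype(int)

# Degrade one memory, as if the cue were partial or noisy.
cue = memories[0].copy()
cue[:8] *= -1  # flip 8 of the 64 units

print("cue identical to stored memory:      ", np.array_equal(cue, memories[0]))
print("reconstruction matches stored memory:", np.array_equal(reconstruct(cue), memories[0]))
```

The point of the sketch is only that one shared set of weights stores all of the patterns at once, and that recall is an active settling process rather than a lookup of a single stored item, which is the feature of distributed traces that answers the retrieval worries raised above.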

One part of the question that may be something of a side issue, but which I think bears noting, is that it is directed at psychologists' assumption that there are memory traces. I think that, whatever arguments can be brought against memory traces, psychologists are indeed right to assume their existence, for the simple reason that if they do not, memory becomes something of a black box. If they are to attempt to quantify and explain memory, some sort of physical phenomenon has to be found, and memory traces are the best current candidate. So, aside from the philosophical arguments over the validity of trace theories, there are scientific arguments for assuming them, at least until an appropriate paradigm shift occurs.

It seems then as though the division lies between causal and non-causal theories of memory. If you wish to avoid the concept of action at a distance, then a causal theory seems to involve some sort of trace, and the best variety available at the moment is the distributed type, although I'm sure that more objections will arise to that in time. However, the non-causal direct theories still hold some sway, although they do require some serious modifications to the concept of causation. As to whether psychologists need to assume the existence of memory traces, I think that they do if they wish to continue empirical research into memory with any sort of idea of where they are going. Without the concept of traces, they are left with a black box which they cannot describe except in functional ways. For philosophers, the PDP model seems to be the best descendant of Plato's wax tablet, updated for the computer age.

Bibliography

N. Malcolm, Memory and Mind (Ithaca: Cornell University Press, 1977)
J. Sutton, Philosophy and Memory Traces (Cambridge: Cambridge University Press, 1998)



