Information processing (IP) is a metaphor that cognitive science uses to describe mental processes. It is derived from computer science and views the world as information to be input into the mind and encoded for long-term memory. Because computers have inputs, outputs, processors, and memory, so must humans. Right? Unfortunately for cognitive science, there is no more rationale for using IP to explain the mind than there is for the fallacy expressed in the previous statement. Nevertheless, cognitivism and its right-hand man, IP, have been very influential in describing the inner workings of the mind and deserve scrutiny.
The IP metaphor is made up of three principal components: sensory memory, short-term memory, and long-term memory. Sensory memory is formed from stimulus information input to the mind. Selective attention, a sort of homunculus mechanism, filters the stimuli into repositories for a very brief time. This temporary information is compared with long-term memory and is either encoded into short-term memory or it disappears.
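The flow the metaphor describes can be rendered, perhaps tellingly, as a toy program. This is a minimal sketch of the three-store pipeline, not a cognitive model; the buffer limit, the stored items, and every function name here are hypothetical illustrations.

```python
# Toy sketch of the IP metaphor's three-store flow (all values hypothetical).
SENSORY_BUFFER_LIMIT = 4                 # sensory memory holds only a few stimuli, briefly
LONG_TERM = {"coffee", "anniversary"}    # information already encoded long-term

def selective_attention(stimuli):
    """The 'homunculus' filter: only a few stimuli enter the sensory buffer."""
    return stimuli[:SENSORY_BUFFER_LIMIT]

def to_short_term(buffered, long_term):
    """Buffered stimuli that match long-term memory are retained; the rest decay."""
    return [s for s in buffered if s in long_term]

stimuli = ["coffee", "traffic", "anniversary", "billboard", "radio"]
short_term = to_short_term(selective_attention(stimuli), LONG_TERM)
print(short_term)  # ['coffee', 'anniversary'] -- unmatched stimuli simply disappear
```

That such a faithful rendering takes a dozen lines of code is, of course, part of the essay's later complaint.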
Short-term memory, the original conception, is now considered a subset of what is known as working memory. Working memory is responsible for three things: manipulation, maintenance, and semantic encoding. Manipulation is using mental skills to assess, evaluate, and synthesize information. Maintenance is the storage and retrieval of memory. And semantic encoding is attaching meaning to information in a way that makes it retrievable long-term. Short-term memory is thought to have a limited capacity for manipulating and retaining information. For example, the rule that short-term memory can retain about seven chunks of information, plus or minus two, is commonly associated with this capacity, and chunking information into groups is a strategy for working within that limit.
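The chunking idea can be illustrated with a short sketch. The phone-number example and the chunk size are my own hypothetical choices; the point is only that grouping reduces the item count to within the 7 ± 2 range.

```python
# Toy illustration of chunking: ten digits exceed the ~7 +/- 2 item limit,
# but grouped into chunks the same information fits comfortably.
def chunk(digits, size):
    """Group a string of digits into consecutive chunks of the given size."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

number = "8005551234"
print(len(number))        # 10 individual items: over the short-term limit
print(chunk(number, 3))   # ['800', '555', '123', '4']: 4 chunks, well within 7 +/- 2
```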
Long-term memory is the storage area for encoded information. Long-term encoding can happen intentionally and unintentionally. For example, my spouse may bring up a particularly good meal that we both had on a date a few years ago. I did not commit that experience to memory intentionally, but I remember it nonetheless. Conversely, I made sure to commit my anniversary to memory so that I would never forget it. I remember it, but I intentionally put effort into ensuring that I would be able to recall the date as it approached each year. Intentional encoding to long-term memory requires more work than incidental encoding.
Encoding is thought to be done in different ways. Information can be encoded visually, verbally, aurally, tactilely, emotionally, and in other ways. It has been found that information encoded in more than one way is often more easily recalled. For example, I remember the articles for this class that I listened to with a screen reader while reading along (a strategy for staying awake while studying late) better than the ones I simply read and took notes on, despite being tired in the former case.
Another important point about long-term memory encoding is that it does not preserve full detail. I get a laugh when reading Harry Potter and he comes across the Pensieve, a bowl that holds memories. In each memory in the bowl, he is pulled in to see the memory in full detail, as if a movie were playing, whether the memory was his own or someone else's. Long-term memory, on the other hand, is often lacking in detail but more tied to meaning. For example, you can probably remember whether you liked or disliked the last inaugural address, but you would be hard pressed to recite its details.
So what does IP imply for human learning? In the strictest sense, learning is the process of encoding information for long-term retrieval. If this metaphor is to be followed, then learning environments and instruction should be optimized in order to facilitate the most effective encoding. A famous attempt to do this is Gagné's nine events of instruction (Gagné, 1985), a sequence of instructional events mapped one-to-one to IP metaphorical rationales. In shorthand, the events are Attention, Objective, Prior Knowledge, Stimulus, Guidance, Performance, Feedback, Assessment, and Transfer. Each is a direct outgrowth of some relation to sensory memory, short-term memory, selective attention, or long-term memory. Funnily, though, this type of instruction sounds less like human learning than like the logical workflow a programmer creates to map out the data flow for his or her software design. What about play? What about observation? What about interest? What about creativity? What about exploration? How do these parts of learning fit into such a deterministic set of instructional steps? I do not believe that they fit very well, and this incongruence with important aspects of learning is a direct result of the inability of the IP metaphor to account for the complex ways in which humans act, think, feel, and learn.
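The comparison to a programmer's workflow can be made literal. The event names below come from Gagné's sequence; the pipeline function itself is purely a hypothetical illustration of how deterministic the scheme looks when written down.

```python
# The nine events rendered as exactly the kind of fixed, linear data flow a
# programmer might draw -- which is the essay's complaint: play, observation,
# interest, creativity, and exploration have no stage in this pipeline.
NINE_EVENTS = [
    "Attention", "Objective", "Prior Knowledge", "Stimulus", "Guidance",
    "Performance", "Feedback", "Assessment", "Transfer",
]

def run_instruction(learner_state):
    """Apply each event in fixed order, like stages transforming a record."""
    for event in NINE_EVENTS:
        learner_state = learner_state + [event]  # each stage updates the learner
    return learner_state

print(run_instruction([]))  # the learner passes through all nine stages, in order
```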
Gagné, R. M. (1985). Chapter 12: A Theory of Instruction. In The conditions of learning and theory of instruction. Holt, Rinehart and Winston.