Learning Behavior
Learning is perhaps the most important human capacity. Learning allows us to create effective
lives by being able to respond to changes. We learn to avoid touching hot stoves, to find our way
home from school, and to remember which people have helped us in the past and which people
have been unkind. Without the ability to learn from our experiences, our lives would be
remarkably dangerous and inefficient. The principles of learning can also be used to explain a
wide variety of social interactions, including social dilemmas in which people make important,
and often selfish, decisions about how to behave by calculating the costs and benefits of
different outcomes.
The study of learning is closely associated with the behaviorist school of psychology, in which it
was seen as a scientific alternative to the unreliable method of introspection. For behaviorists,
the fundamental aspect of learning is the process of conditioning—the ability to connect stimuli
(the changes that occur in the environment) with responses (behaviors or other actions).
But conditioning is just one type of learning. Other types include learning through insight, as
well as observational learning (also known as modeling).
Learning by Association: Classical Conditioning
Pavlov Demonstrates Conditioning in Dogs
In the early part of the 20th century, Russian physiologist Ivan Pavlov (1849–1936) was
studying the digestive system of dogs when he noticed an interesting behavioral phenomenon:
The dogs began to salivate when the lab technicians who normally fed them entered the room,
even though the dogs had not yet received any food. Pavlov realized that the dogs were
salivating because they knew that they were about to be fed; the dogs had begun to associate the
arrival of the technicians with the food that soon followed their appearance in the room.
With his team of researchers, Pavlov began studying this process in more detail. He conducted a
series of experiments in which, over a number of trials, dogs were exposed to a sound
immediately before receiving food. He systematically controlled the onset of the sound and the
timing of the delivery of the food, and recorded the amount of the dogs’ salivation. Initially the
dogs salivated only when they saw or smelled the food, but after several pairings of the sound
and the food, the dogs began to salivate as soon as they heard the sound. The animals had
learned to associate the sound with the food that followed.
As you can see in Figure "4-Panel Image of Whistle and Dog", psychologists use specific terms
to identify the stimuli and the responses in classical conditioning.
The unconditioned stimulus (US) is something (such as food) that triggers a naturally occurring
response, and the unconditioned response (UR) is the naturally occurring response (such as
salivation) that follows the unconditioned stimulus. The conditioned stimulus (CS) is a neutral
stimulus that, after being repeatedly presented prior to the unconditioned stimulus, evokes a
similar response as the unconditioned stimulus. In Pavlov’s experiment, the sound of the tone
served as the conditioned stimulus that, after learning, produced the conditioned response (CR),
which is the acquired response to the formerly neutral stimulus. Note that the UR and the CR are
the same behavior—in this case salivation—but they are given different names because they are
produced by different stimuli (the US and the CS, respectively).
Top left: Before conditioning, the unconditioned stimulus (US) naturally produces the unconditioned response (UR).
Top right: Before conditioning, the neutral stimulus (the whistle) does not produce the salivation response. Bottom
left: The unconditioned stimulus (US), in this case the food, is repeatedly presented immediately after the neutral
stimulus. Bottom right: After learning, the neutral stimulus (now known as the conditioned stimulus or CS), is
sufficient to produce the conditioned response (CR).
After he had demonstrated that learning could occur through association, Pavlov moved on
to study the variables that influenced the strength and the persistence of conditioning. In
some studies, after the conditioning had taken place, Pavlov presented the sound repeatedly but
without presenting the food afterward. Figure "Acquisition, Extinction, and Spontaneous
Recovery" shows what happened. As you can see, after the initial acquisition (learning) phase in
which the conditioning occurred, when the CS was then presented alone, the behavior rapidly
decreased—the dogs salivated less and less to the sound, and eventually the sound did not elicit
salivation at all. Extinction refers to the reduction in responding that occurs when the conditioned
stimulus is presented repeatedly without the unconditioned stimulus.
Acquisition: The CS and the US are repeatedly paired together and behavior increases. Extinction: The CS is
repeatedly presented alone, and the behavior slowly decreases. Spontaneous recovery: After a pause, when the CS
is again presented alone, the behavior may again occur and then again show extinction.
Although at the end of the first extinction period the CS was no longer producing salivation, the
effects of conditioning had not entirely disappeared. Pavlov found that, after a pause, sounding
the tone again elicited salivation, although to a lesser extent than before extinction took place.
The increase in responding to the CS following a pause after extinction is known as spontaneous
recovery. When Pavlov again presented the CS alone, the behavior again showed extinction until
it disappeared again.
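The rise and fall of responding just described can be sketched formally. The short Python simulation below uses the Rescorla-Wagner learning rule, a standard mathematical model of classical conditioning that is not covered in this chapter; the learning rate and trial counts are illustrative assumptions, not Pavlov's actual data.

```python
# A minimal sketch of acquisition and extinction using the
# Rescorla-Wagner learning rule (an assumed model, not Pavlov's data).
# v is the associative strength of the CS; lam is the maximum strength
# the US can support (1.0 when food follows the sound, 0.0 when it
# does not); alpha is an illustrative learning-rate parameter.

def update(v, lam, alpha=0.3):
    """One conditioning trial: v changes in proportion to the
    prediction error (lam - v)."""
    return v + alpha * (lam - v)

v = 0.0
acquisition = []
for _ in range(10):              # CS repeatedly paired with food
    v = update(v, lam=1.0)
    acquisition.append(round(v, 3))

extinction = []
for _ in range(10):              # CS presented alone, no food
    v = update(v, lam=0.0)
    extinction.append(round(v, 3))

print(acquisition)  # rises toward 1: the association is acquired
print(extinction)   # falls toward 0: responding extinguishes
```

Each trial moves associative strength a fixed fraction of the way toward what the US supports, which is why both the acquisition and the extinction curves in the figure change quickly at first and then level off.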
In some cases, an existing conditioned stimulus can serve as an unconditioned stimulus for a
pairing with a new conditioned stimulus—a process known as second-order conditioning. In
one of Pavlov’s studies, for instance, he first conditioned the dogs to salivate to a sound, and
then repeatedly paired a new CS, a black square, with the sound. Eventually he found that the
dogs would salivate at the sight of the black square alone, even though it had never been
directly associated with the food. Secondary conditioners in everyday life include our
attractions to things that stand for or remind us of something else, such as when we feel good on
a Friday because it has become associated with the paycheck that we receive on that day, which
itself is a conditioned stimulus for the pleasures that the paycheck buys us.
Psychologists have also discovered that people do not develop phobias to just anything.
Although people may in some cases develop a driving phobia, they are more likely to develop
phobias toward objects (such as snakes, spiders, heights, and open spaces) that have been
dangerous to people in the past. In modern life, it is rare for humans to be bitten by spiders or
snakes, to fall from trees or buildings, or to be attacked by a predator in an open area. Being
injured while riding in a car or being cut by a knife is much more likely.
In classical conditioning the organism learns to associate new stimuli with natural, biological
responses such as salivation or fear. The organism does not learn something new but rather
begins to perform an existing behavior in the presence of a new signal. Operant conditioning,
on the other hand, is learning that occurs based on the consequences of behavior and can involve
the learning of new actions. Operant conditioning occurs when a dog rolls over on command
because it has been praised for doing so in the past, when a schoolroom bully threatens his
classmates because doing so allows him to get his way, and when a child gets good grades
because her parents threaten to punish her if she doesn’t. In operant conditioning the organism
learns from the consequences of its own actions.
The most basic of Skinner’s experiments was quite similar to Thorndike’s research with cats. A
rat placed in the chamber (an apparatus now known as a Skinner box) reacted as one might
expect, scurrying about the box and sniffing and
clawing at the floor and walls. Eventually the rat chanced upon a lever, which it pressed to
release pellets of food. The next time around, the rat took a little less time to press the lever, and
on successive trials, the time it took to press the lever became shorter and shorter. Soon the rat
was pressing the lever as fast as it could eat the food that appeared. As predicted by the law of
effect, the rat had learned to repeat the action that brought about the food and cease the actions
that did not.
Skinner studied, in detail, how animals changed their behavior through reinforcement and
punishment, and he developed terms that explained the processes of operant learning (Table:
"How Positive and Negative Reinforcement and Punishment Influence Behavior"). Skinner used
the term reinforcer to refer to any event that strengthens or increases the likelihood of a
behavior and the term punisher to refer to any event that weakens or decreases the likelihood of
a behavior. And he used the terms positive and negative to refer to whether a reinforcement was
presented or removed, respectively. Thus positive reinforcement strengthens a response by
presenting something pleasant after the response and negative reinforcement strengthens a
response by reducing or removing something unpleasant. For example, giving a child praise for
completing his homework represents positive reinforcement, whereas taking aspirin to reduce
the pain of a headache represents negative reinforcement. In both cases, the reinforcement
makes it more likely that behavior will occur again in the future.
Table: How Positive and Negative Reinforcement and Punishment Influence Behavior

Operant conditioning term | Description | Outcome | Example
Positive reinforcement | Present or add a pleasant stimulus | Behavior is strengthened | Giving a child praise after he completes his homework.
Negative reinforcement | Reduce or remove an unpleasant stimulus | Behavior is strengthened | Taking aspirin that relieves the pain of a headache.
Positive punishment | Present or add an unpleasant stimulus | Behavior is weakened | Scolding a student for texting during class.
Negative punishment | Reduce or remove a pleasant stimulus | Behavior is weakened | Taking away a teen's computer after he fails an exam.
It is also important to note that reinforcement and punishment are not simply opposites. The use
of positive reinforcement in changing behavior is almost always more effective than using
punishment. This is because positive reinforcement makes the person or animal feel better,
helping create a positive relationship with the person providing the reinforcement. Types of
positive reinforcement that are effective in everyday life include verbal praise or approval, the
awarding of status or prestige, and direct financial payment. Punishment, on the other hand, is
more likely to create only temporary changes in behavior because it is based on coercion and
typically creates a negative and adversarial relationship with the person providing the
reinforcement. When the person who provides the punishment leaves the situation, the
unwanted behavior is likely to return.
One way to expand the use of operant learning is to modify the schedule on which the
reinforcement is applied. To this point we have only discussed a continuous reinforcement
schedule, in which the desired response is reinforced every time it occurs; whenever the dog
rolls over, for instance, it gets a biscuit. Continuous reinforcement results in relatively fast
learning but also rapid extinction of the desired behavior once the reinforcer disappears. The
problem is that because the organism is used to receiving the reinforcement after every
behavior, the responder may give up quickly when it doesn’t appear.
Most real-world reinforcers are not continuous; they occur on a partial (or intermittent)
reinforcement schedule—a schedule in which the responses are sometimes reinforced, and
sometimes not. In comparison to continuous reinforcement, partial reinforcement schedules lead to
slower initial learning, but they also lead to greater resistance to extinction. Because the
reinforcement does not appear after every behavior, it takes longer for the learner to determine that
the reward is no longer coming, and thus extinction is slower. The four types of partial
reinforcement schedules are summarized in Table "Reinforcement Schedules".

Table: Reinforcement Schedules

Reinforcement schedule | Explanation | Real-world example
Fixed-ratio | Behavior is reinforced after a specific number of responses | Factory workers who are paid according to the number of products they produce
Variable-ratio | Behavior is reinforced after an average, but unpredictable, number of responses | Payoffs from slot machines and other games of chance
Fixed-interval | Behavior is reinforced for the first response after a specific amount of time has passed | People who earn a monthly salary
Variable-interval | Behavior is reinforced for the first response after an average, but unpredictable, amount of time has passed | Person who checks voice mail for messages
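The contrast between a fixed and a variable schedule can be illustrated with a small simulation. The sketch below is a toy model under stated assumptions: it approximates a variable-ratio schedule by reinforcing each response with probability 1/n, and the function names and parameters are hypothetical, not from this chapter.

```python
import random

# Toy sketch of two partial reinforcement schedules
# (illustrative assumptions only).

def fixed_ratio(n):
    """Deliver a reinforcer after every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True    # reinforcer delivered
        return False       # no reinforcer
    return respond

def variable_ratio(n, rng=None):
    """Deliver a reinforcer after an unpredictable number of responses
    averaging n (approximated here as probability 1/n per response)."""
    rng = rng or random.Random(0)
    def respond():
        return rng.random() < 1.0 / n
    return respond

fr5 = fixed_ratio(5)
fr_results = [fr5() for _ in range(10)]
# On FR-5, exactly the 5th and 10th responses are reinforced.

vr5 = variable_ratio(5)
vr_results = [vr5() for _ in range(100)]
# On VR-5, roughly one response in five is reinforced, but the learner
# cannot predict which one.
```

Because the learner on the variable schedule cannot tell a normal unreinforced stretch from the start of extinction, responding persists longer once reinforcement stops, which is the resistance to extinction described above.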
Behaviors can also be trained through the use of secondary reinforcers. Whereas a primary
reinforcer includes stimuli that are naturally preferred or enjoyed by the organism, such as food,
water, and relief from pain, a secondary reinforcer (sometimes called conditioned reinforcer) is a
neutral event that has become associated with a primary reinforcer through classical conditioning.
An example of a secondary reinforcer would be the whistle given by an animal trainer, which has
been associated over time with the primary reinforcer, food. An example of an everyday secondary
reinforcer is money. We enjoy having money, not so much for the stimulus itself, but rather for the
primary reinforcers (the things that money can buy) with which it is associated.
John B. Watson and B. F. Skinner were behaviorists who believed that all learning could be
explained by the processes of conditioning—that is, that associations, and associations alone,
influence learning. But some kinds of learning are very difficult to explain using only
conditioning. Thus, although classical and operant conditioning play a key role in learning, they
constitute only a part of the total picture.
One type of learning that is not determined only by conditioning occurs when we suddenly find
the solution to a problem, as if the idea just popped into our head. This type of learning is known
as insight, the sudden understanding of a solution to a problem. The German psychologist
Wolfgang Köhler (1925) carefully observed what happened when he presented chimpanzees with
a problem that was not easy for them to solve, such as placing food in an area that was too high in
the cage to be reached. He found that the chimps first engaged in trial-and-error attempts at
solving the problem, but when these failed they seemed to stop and contemplate for a while. Then,
after this period of contemplation, they would suddenly seem to know how to solve the problem,
for instance by using a stick to knock the food down or by standing on a chair to reach it. Köhler
argued that it was this flash of insight, not the gradual trial-and-error learning emphasized by
conditioning theories, that allowed the animals to solve the problem.
Edward Tolman (Tolman & Honzik, 1930) studied the behavior of three groups of rats that were
learning to navigate through mazes. The first group always received a reward of food at the end of
the maze. The second group never received any reward, and the third group received a reward, but
only beginning on the 11th day of the experimental period. As you might expect when considering
the principles of conditioning, the rats in the first group quickly learned to negotiate the maze,
while the rats of the second group seemed to wander aimlessly through it.
The rats in the third group, however, although they wandered aimlessly for the first 10 days,
quickly learned to navigate to the end of the maze as soon as they received food on day 11. By
the next day, the rats in the third group had caught up in their learning to the rats that had been
rewarded from the beginning.
It was clear to Tolman that the rats that had been allowed to experience the maze, even without
any reinforcement, had nevertheless learned something, and Tolman called this latent learning.
Latent learning refers to learning that is not reinforced and not demonstrated until there is
motivation to do so. Tolman argued that the rats had formed a “cognitive map” of the maze but
did not demonstrate this knowledge until they received reinforcement.
The idea of latent learning suggests that animals, and people, may learn simply by experiencing
or watching. Observational learning (modeling) is learning by observing the behavior of others.
To demonstrate the importance of observational learning in children, Bandura, Ross, and Ross
(1963) showed children either a live adult model (a man or a woman) interacting with a Bobo doll, a
filmed version of the same events, or a cartoon version of the events. The Bobo doll is an
inflatable doll with a weight in the bottom that makes it bob back up when you knock it
down. In all three conditions, the model violently punched the clown, kicked the doll, sat on it,
and hit it with a hammer.
The researchers first let the children view one of the three types of modeling, and then let them
play in a room in which there were some really fun toys. To create some frustration in the
children, Bandura let the children play with the fun toys for only a couple of minutes before
taking them away. Then Bandura gave the children a chance to play with the Bobo doll.
If you guessed that most of the children imitated the model, you would be correct. Regardless of
which type of modeling the children had seen, and regardless of the sex of the model or the
child, the children who had seen the model behaved aggressively—just as the model had done.
They also punched, kicked, sat on the doll, and hit it with the hammer. Bandura and his
colleagues had demonstrated that these children had learned new behaviors, simply by observing
and imitating others.
Observational learning is useful for animals and for people because it allows us to learn
without having to actually engage in what might be a risky behavior. Monkeys that see other
monkeys respond with fear to the sight of a snake learn to fear the snake themselves, even if
they have been raised in a laboratory and have never actually seen a snake (Cook & Mineka,
1990). As Bandura put it,
The prospects for [human] survival would be slim indeed if one could learn only by suffering
the consequences of trial and error. For this reason, one does not teach children to swim,
adolescents to drive automobiles, and novice medical students to perform surgery by having
them discover the appropriate behavior through the consequences of their successes and
failures. The more costly and hazardous the possible mistakes, the heavier is the reliance
on observational learning from competent learners. (Bandura, 1977, p. 212)
Although modeling is normally adaptive, it can be problematic for children who grow up in
violent families. These children are not only the victims of aggression, but they also see it
happening to their parents and siblings. Because children learn how to be parents in large part by
modeling the actions of their own parents, it is no surprise that there is a strong correlation
between family violence in childhood and violence as an adult. Children who witness their
parents being violent or who are themselves abused are more likely as adults to inflict abuse on
intimate partners or their children, and to be victims of intimate violence. In turn, their children
are more likely to interact violently with each other and to aggress against their parents.