FACILITATING LEARNING
PROF.ED 306
MODULE 6
BEHAVIORISM: PAVLOV, THORNDIKE, SKINNER
FATIMA R. ASI
Instructress
I. INTRODUCTION
The theory of Behaviorism focuses on the study of observable and measurable
behavior. It emphasizes that behavior is mostly learned through conditioning and
reinforcement (reward and punishment). It does not give much attention to the mind and
the possibility of thought processes occurring in the mind. Contributions in the
development of the behaviorist theory largely came from Pavlov, Watson, Thorndike and
Skinner.
II. OBJECTIVES
ACTIVITY
1. Think of a teacher who is most unforgettable to you from your elementary or high
school days.
2. Are there things that make you "go back to the past" and recall this teacher? What are these things?
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
3. What kind of rewards and punishments did she/he apply in your class? For what student
behaviors were the rewards and punishments given?
ANALYSIS
BEHAVIORISM
Ivan Pavlov
Ivan Pavlov, a Russian physiologist, is well known for his work in
classical conditioning or stimulus substitution. Pavlov's most renowned
experiment involved meat, a dog, and a bell. Initially, Pavlov was
measuring the dog's salivation in order to study digestion. This was when he
stumbled upon classical conditioning.
Pavlov’s experiment
Before conditioning, ringing the bell (neutral stimulus) caused no
response from the dog. Placing food (unconditioned stimulus) in front of the
dog initiated salivation (unconditioned response). During conditioning, the
bell was rung a few seconds before the dog was presented with food. After
conditioning, the ringing of the bell (conditioned stimulus) alone produced
salivation (conditioned response). This is classical conditioning.
[Figure: Pavlov's classical conditioning experiment, before, during, and after conditioning]
Somehow you were conditioned to associate particular objects with your teacher. So, at present,
when you encounter the objects, you are also reminded of your teacher. This is an example of classical
conditioning.
Pavlov also had the following findings:
Stimulus generalization. Once the dog has learned to salivate at the sound of the bell, it will salivate at
other similar sounds.
Extinction. If you stop pairing the bell with the food, salivation will eventually cease in response
to the bell.
Spontaneous recovery. Extinguished responses can be recovered after a period of time has elapsed, but will
soon extinguish again if the dog is not presented with food.
Discrimination. The dog could learn to discriminate between similar bells (stimuli) and discern
which bell would result in the presentation of food and which would not.
Higher-order conditioning. Once the dog has been conditioned to associate the bell with food, another
neutral stimulus, such as a light, may be flashed at the same time that the bell is rung. Eventually,
the dog will salivate at the flash of the light even without the sound of the bell.
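To make the before-during-after sequence concrete, here is a minimal Python sketch of the pairing and extinction process described above. The association threshold and the amounts by which the bond is strengthened or weakened are arbitrary assumptions chosen only to make the pattern visible; they are not figures from Pavlov's experiment.

# Toy simulation of classical conditioning (illustrative assumptions only).
# The bond between bell and food grows when they are paired, and decays
# (extinction) when the bell is presented alone.

ASSOCIATION_THRESHOLD = 0.5   # assumed level at which the bell alone triggers salivation

def run_trials(trials):
    """trials: a list of (bell_present, food_present) pairs, in order."""
    strength = 0.0
    for bell, food in trials:
        salivates = food or (bell and strength >= ASSOCIATION_THRESHOLD)
        if bell and food:
            strength = min(1.0, strength + 0.2)   # conditioning strengthens the bond
        elif bell and not food:
            strength = max(0.0, strength - 0.1)   # unpaired bell weakens it (extinction)
        print(f"bell={bell}, food={food}, bond={strength:.1f}, salivates={salivates}")

# Before conditioning: bell alone, no salivation.
# During conditioning: repeated bell + food pairings build the bond.
# After conditioning: bell alone still produces salivation, until extinction sets in.
run_trials([(True, False)] + [(True, True)] * 4 + [(True, False)] * 8)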
Edward Thorndike
Edward Thorndike's connectionism theory gave us the original S-R
framework of behavioral psychology. More than a hundred years ago he
wrote a textbook entitled Educational Psychology, and he was the first to
use this term. He explained that learning is the result of associations forming
between stimuli (S) and responses (R). Such associations or "habits" become
strengthened or weakened by the nature and frequency of the S-R pairings.
The model for S-R theory was trial-and-error learning, in which certain
responses come to be repeated more than others because of rewards. The main
principle of connectionism (like that of behavioral theory in general) was that learning
could be adequately explained without considering any unobservable internal
states.
Thorndike's theory of connectionism states that learning has taken place when a strong
connection or bond between stimulus and response is formed. He came up with three primary
laws:
Law of effect. The law of effect states that a connection between a stimulus and a response is
strengthened when the consequence is positive (reward) and weakened when the consequence is
negative. Thorndike later revised this "law" when he found that negative consequences
(punishment) do not necessarily weaken bonds, and that some seemingly pleasurable
consequences do not necessarily motivate performance.
Law of exercise. This tells us that the more an S-R (stimulus-response) bond is practiced, the
stronger it will become. "Practice makes perfect" seems to be associated with this. However, like
the law of effect, the law of exercise also had to be revised when Thorndike found that practice
without feedback does not necessarily enhance performance.
Law of readiness. This states that the more readiness the learner has to respond to the stimulus, the
stronger the bond between them will be. When a person is ready to respond to a stimulus and is
not made to respond, it becomes annoying to the person. For example, if the teacher says, "Okay,
we will now watch the movie (stimulus) you've been waiting for," and suddenly the power goes
off, the students will feel frustrated because they were ready to respond to the stimulus but were
prevented from doing so. Likewise, if a person is not at all ready to respond to a stimulus and is
asked to respond, that also becomes annoying. For instance, the teacher calls on a student to stand
up and recite, then asks the question and expects the student to respond right away while he is
still not ready. This will be annoying to the student. That is why teachers should remember to ask
the question first and wait a few seconds before calling on anyone to answer.
Principles derived from Thorndike’s Connectionism:
1. Learning requires both practice and rewards (law of effect/exercise)
2. A series of S-R connections can be chained together if they belong to the same action
sequence (law of readiness).
3. Transfer of learning occurs because of previously encountered situations.
4. Intelligence is a function of the number of connections learned.
John B. Watson
Watson was the first American psychologist to work with Pavlov's ideas.
He, too, was initially involved in animal studies, and later became involved
in human behavior research.
He considered that humans are born with a few reflexes and the emotional
reactions of love and rage; all other behavior is learned through stimulus-
response associations formed through conditioning. He believed in the power of
conditioning so much that he claimed that, given a dozen healthy
infants, he could make them into anything one wanted them to be, basically
by forming stimulus-response connections through conditioning.
B. F. Skinner
Skinner's operant conditioning is based upon the notion that learning is a result of change in
overt behavior. Changes in behavior are the result of an individual's response to events (stimuli)
that occur in the environment. A response produces a consequence, such as defining a word,
hitting a ball, or solving a math problem. When a particular stimulus-response (S-R) pattern is
reinforced (rewarded), the individual is conditioned to respond.
Reinforcement is the key element in Skinner's S-R theory. A reinforcer is anything that
strengthens the desired response. There are positive reinforcers and negative reinforcers.
A positive reinforcer is any stimulus that is given or added to increase the response. An
example of positive reinforcement is when a teacher promises extra time in the play area to
children who behave well during the lesson. Another is a mother who promises a new cellphone
for her son who gets good grades. Still other examples include verbal praise, star stamps, and
stickers.
A negative reinforcer is any stimulus that results in the increased frequency of a
response when it is withdrawn or removed. A negative reinforcer is not a punishment; in fact, it is
a reward. For instance, a teacher announces that a student who gets an average grade of 1.5 for
two grading periods will no longer take the final examination. The negative reinforcer is
"removing" the final exam, which we realize is a form of reward for working hard and getting an
average grade of 1.5.
A negative reinforcer is different from a punishment because a punishment is a consequence
intended to result in reduced responses. An example would be a student who always comes late
not being allowed to join group work that has already begun (punishment) and, therefore, losing
points for that activity. The punishment is done to reduce the response of repeatedly coming to
class late.
Skinner also looked into extinction or non-reinforcement: responses that are not
reinforced are not likely to be repeated. For example, ignoring a student’s misbehavior may
extinguish that behavior.
Shaping of behavior. An animal in a cage may take a very long time to figure out that
pressing a lever will produce food. To bring about such behavior, successive approximations of
the behavior are rewarded until the animal learns the association between the lever and the food
reward. To begin shaping, the animal may be rewarded for simply turning in the direction of the
lever, then for moving toward the lever and touching it, and finally for pressing the lever.
Behavioral chaining. This comes about when a series of steps needs to be learned. The
animal masters each step in sequence until the entire sequence is learned. This can be applied
to a child being taught to tie a shoelace: the child is given reinforcement (rewards) at each step until the
entire process of tying the shoelace is learned.
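For readers who find a step-by-step outline helpful, shaping and chaining can be sketched in a few lines of Python. The list of successive approximations and the rule that a step is mastered after three rewarded trials are illustrative assumptions, not details from Skinner's experiments.

# Illustrative sketch of shaping: reward successive approximations of the
# target behavior (pressing the lever) until each step is mastered.
# Behavioral chaining works similarly, except that each mastered step is
# linked to the next until a whole sequence (e.g., tying a shoelace) is learned.

APPROXIMATIONS = [            # assumed sequence of closer and closer behaviors
    "turns toward the lever",
    "moves toward the lever",
    "touches the lever",
    "presses the lever",
]

MASTERY_TRIALS = 3            # assumption: a step counts as mastered after 3 rewarded trials

def shape_behavior():
    for step in APPROXIMATIONS:
        for trial in range(1, MASTERY_TRIALS + 1):
            print(f"Animal {step} (trial {trial}): reward given")
        print(f"Step mastered: {step}")
    print("Target behavior shaped: the animal now presses the lever for food.")

shape_behavior()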
Reinforcement schedules. Once the desired behavioral response is accomplished,
reinforcement does not have to be 100%; in fact, it can be maintained more successfully through
what Skinner referred to as partial reinforcement schedules. Partial reinforcement schedules
include interval schedules and ratio schedules.
Fixed interval schedules. The target response is reinforced after a fixed amount of time
has passed since the last reinforcement. For example, the bird in a cage is given food (reinforcer)
every 10 minutes, regardless of how many times it presses the bar.
Variable interval schedules. This is similar to a fixed interval schedule, but the amount of
time that must pass between reinforcements varies. For example, the bird may receive food
(reinforcer) at different intervals, not every ten minutes.
Fixed ratio schedules. A fixed number of correct responses must occur before
reinforcement is given. For example, the bird is given food (reinforcer) every time it presses
the bar 5 times.
Variable ratio schedules. The number of correct responses required before
reinforcement varies. For example, the bird is given food (reinforcer) after it presses the bar 3 times,
then after 10 times, then after 4 times, so the bird cannot predict how many times it
needs to press the bar before it gets food again.
Variable interval and, especially, variable ratio schedules produce steadier and more
persistent rates of response, because the learners cannot predict when the reinforcement will
come, although they know that they will eventually succeed. This is why people
continue to buy lotto tickets even though an almost negligible percentage of buyers actually win.
Big winners are very rare, but once in a while somebody hits the jackpot
(reinforcement). People cannot predict how many tickets they need to buy before the jackpot
comes (variable ratio), so they continue to buy tickets (repetition of response).
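To see the contrast side by side, here is a minimal Python sketch of the two ratio schedules using the numbers from the bird examples above (reinforcement every 5 presses for the fixed ratio, and an unpredictable 3 to 10 presses for the variable ratio). The interval schedules work the same way except that they count elapsed time instead of responses. The function names and ranges are assumptions made only for illustration.

import random

# Toy sketch of partial reinforcement schedules (illustrative only).
# Each function returns the press numbers at which food is delivered.

def fixed_ratio_feeder(presses):
    """Reinforce after every 5th bar press: fully predictable."""
    return [p for p in range(1, presses + 1) if p % 5 == 0]

def variable_ratio_feeder(presses):
    """Reinforce after an unpredictable number of presses (assumed 3 to 10)."""
    fed_at, count, target = [], 0, random.randint(3, 10)
    for p in range(1, presses + 1):
        count += 1
        if count >= target:
            fed_at.append(p)
            count, target = 0, random.randint(3, 10)
    return fed_at

# Fixed ratio: the bird can predict exactly which press brings food.
print(fixed_ratio_feeder(20))     # [5, 10, 15, 20]
# Variable ratio: the bird cannot predict which press brings food,
# which is why responding stays steady and persistent (like buying lotto tickets).
print(variable_ratio_feeder(20))  # e.g. [4, 11, 17]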
APPLICATION
1. Observation
a. Choose a place where you can observe adult-child interactions, such as in a mall, in church, or at
the playground. Spend one hour observing these adult-child interactions.
Focus your attention on the stimulus-response-consequence patterns you observe.
_____________________________________________________________________________
_____________________________________________________________________________
_____________________________________________________________________________
b. Describe the consequences you observe. (It is best to write or scribble the details on the spot
or as soon as you finish your observation.)
_____________________________________________________________________________
_____________________________________________________________________________
_____________________________________________________________________________
c. What kinds of behaviors on the part of the children elicit reinforcement and punishment
consequences from the adults?
_______________________________________________________________________
_______________________________________________________________________
d. Given this experience, what are your thoughts about operant conditioning? Do you think
children reinforce and punish adults just as adults reinforce and punish them? How might the
two be interdependent?
_______________________________________________________________________
_______________________________________________________________________
_______________________________________________________________________
2. Thorndike's Connectionism
a. Choose a topic you want to teach.
b. Think of ways you can apply the three primary laws while you teach the topic.
Law of Effect
Law of Exercise
Law of Readiness
ASSESSMENT TASK/S
General Instructions: Write your answers on long bond paper using the standard format, and submit
them through our official Google Classroom. Read with comprehension the activities you
need to comply with. Plagiarism or copying your classmates' answers is PROHIBITED; 20 points
will be deducted immediately from the student's score for every module. You have one week to finish the
given tasks.
Materials
Online platforms
Book
References: Lucas and Corpuz (2014). Facilitating Learning: A Metacognitive Process, pp. 79-90.
Prepared by:
FATIMA R. ASI
Instructress
Checked by:
Approved by: