
1 Psychological Science ©2017 W. W. Norton & Company, Inc.
Michael S. Gazzaniga Psychological Science SIXTH EDITION Chapter 6 Learning ©2017 W. W. Norton & Company, Inc.

2 How Do We Learn? Learning Objectives Define learning.
Identify three types of learning processes. Describe the nonassociative learning processes: habituation and sensitization. Explain the significance of each. 2

3 6.1 Learning Results from Experience
Learning: a relatively enduring change in behavior, resulting from experience. Associations develop through conditioning, a process in which environmental stimuli and behavioral responses become connected.

4 Nonassociative Learning
Learning theory arose in the early twentieth century in response to Freudian and introspective approaches. John B. Watson argued that only observable behavior was a valid indicator of psychological activity, and that the infant mind was a tabula rasa, or blank slate. He stated that the environment and its effects were the sole determinants of learning. Behaviorism was the dominant paradigm into the 1960s, and it had a huge influence on every area of psychology.

5 Nonassociative Learning
Nonassociative learning: responding after repeated exposure to a single stimulus, or event 5

6 Associative Learning Associative learning: linking two stimuli, or events, that occur together 6

7 Observational Learning
Observational learning: acquiring or changing a behavior after exposure to another individual performing that behavior 7

8 FIGURE 6.3 Types of Learning
8

9 6.2 Habituation and Sensitization Are Models of Nonassociative Learning
Habituation: a decrease in behavioral response after repeated exposure to a stimulus. This is especially true if the stimulus is neither harmful nor rewarding. Dishabituation: an increase in a response because of a change in something familiar. 9

10 FIGURE 6.4 Types of Nonassociative Learning 10

11 FIGURE 6.5 Habituation
Suppose you live or work in a noisy environment. You learn to ignore the constant noise because you do not need to respond to it. 11

12 6.2 Habituation and Sensitization Are Models of Nonassociative Learning
Sensitization: an increase in behavioral response after exposure to a stimulus. Stimuli that most often lead to sensitization are those that are threatening or painful. 12

13 How Do We Learn Predictive Associations?
Learning Objectives Define classical conditioning. Differentiate between the UR, US, CS, and CR. Describe acquisition, second-order conditioning, generalization and discrimination, extinction, and spontaneous recovery. Describe the Rescorla-Wagner model of classical conditioning, including the role of prediction error and dopamine in the strength of associations. 13

14 6.3 Behavioral Responses Are Conditioned
We learn predictive associations through conditioning, the process that connects environmental stimuli to behavior. Psychologists study two types of associative learning: classical conditioning and operant conditioning. 14

15 FIGURE 6.8 Two Types of Associative Learning 15

16 6.3 Behavioral Responses Are Conditioned
Watson was influenced by Ivan Pavlov’s research on the salivary reflex, an automatic response when a food stimulus is presented to a hungry animal. Pavlov won a Nobel Prize in 1904 for his research on the digestive system. Pavlov noticed the dogs salivated as soon as they saw the bowls that usually contained food, suggesting a learned response.

17 FIGURE 6.9 Pavlov’s Apparatus and Classical Conditioning
(a) Ivan Pavlov, pictured here with his colleagues and one of his canine subjects, conducted groundbreaking work on classical conditioning. (b) Pavlov’s apparatus collected and measured a dog’s saliva. 17

18 6.3 Behavioral Responses Are Conditioned
Classical (Pavlovian) conditioning: a neutral object comes to elicit a response when it is associated with a stimulus that already produces that response

19 6.3 Behavioral Responses Are Conditioned
A typical Pavlovian experiment involves: Conditioning trials: a neutral stimulus and an unconditioned stimulus are paired to produce a reflex (e.g., salivation). Neutral stimulus: anything the animal can see or hear as long as it is not associated with the reflex being tested (e.g., a ringing bell). Unconditioned stimulus (US): a stimulus that elicits a response, such as a reflex, without any prior learning (e.g., food)

20 6.3 Behavioral Responses Are Conditioned
Test trials: the neutral stimulus alone is tested, and the effect on the reflex is measured. Pavlov presented the metronome sound alone and measured the salivary reflex. 20

21 6.3 Behavioral Responses Are Conditioned
Unconditioned response (UR): a response that does not have to be learned, such as a reflex. Unconditioned stimulus (US): a stimulus that elicits a response, such as a reflex, without any prior learning

22 6.3 Behavioral Responses Are Conditioned
Conditioned stimulus (CS): a stimulus that elicits a response only after learning has taken place. Conditioned response (CR): a response to a conditioned stimulus; a response that has been learned. 22

23 The Methods of Psychology Pavlov’s Classical Conditioning
23

24 6.4 Learning Is Acquired and Persists Until Extinction
Pavlov was influenced by Charles Darwin and believed that conditioning is the basis of adaptive behaviors. Acquisition: the gradual formation of an association between the conditioned and unconditioned stimuli. The critical element in the acquisition of a learned association is time, or contiguity.

25 6.4 Learning Is Acquired and Persists Until Extinction
The CR is stronger when there is a very brief delay between the CS and the US. For example, scary music begins to play right before a frightening scene in a movie, not during or after. 25

26 FIGURE 6.11 Acquisition, Extinction, and Spontaneous Recovery 26

27 Second-Order Conditioning
Second-order conditioning: A CS becomes associated with other stimuli that are themselves associated with the US. This phenomenon helps account for the complexity of learned associations, especially in people.

28 Generalization and Discrimination
Stimulus generalization: learning that occurs when stimuli that are similar, but not identical, to the conditioned stimulus produce the conditioned response. Stimulus discrimination: a differentiation between two similar stimuli when only one of them is consistently associated with the unconditioned stimulus. 28

29 FIGURE 6.12 Stimulus Generalization and Stimulus Discrimination 29

30 Extinction Extinction: a process in which the conditioned response is weakened when the conditioned stimulus is repeated without the unconditioned stimulus. Extinction inhibits the associative bond but does not eliminate it; animals must learn when associations are no longer adaptive. 30

31 Extinction Spontaneous recovery: a process in which a previously extinguished conditioned response reemerges after the presentation of the conditioned stimulus. The recovery will fade unless the CS is again paired with the US. 31

32 6.5 Learning Is Based on Evolutionary Significance
Pavlov’s original explanation for classical conditioning was that any two events presented in contiguity would produce a learned association. Pavlov and his followers believed that the association’s strength was determined by factors such as the intensity of the conditioned and unconditioned stimuli. 32

33 6.5 Learning Is Based on Evolutionary Significance
However, in the mid-1960s, a number of challenges to Pavlov’s theory suggested that some conditioned stimuli were more likely than others to produce learning. In other words, contiguity was not sufficient to create CS-US associations. 33

34 Conditioned Taste Aversions
Psychologist John Garcia and colleagues showed that certain pairings of stimuli are more likely to become associated than others. Conditioned taste aversion: the association between eating a food and getting sick. An aversion can form even if the illness was actually caused by a virus or some other condition. This is especially likely to occur if the food was not part of the person’s usual diet. A food aversion can be formed in one trial. 34

35 Conditioned Taste Aversions
Animals that associate a certain flavor with illness, and therefore avoid that flavor, are more likely to survive and pass along their genes. Learned adaptive responses may reflect the survival value that different auditory and visual stimuli have based on potential dangers associated with the stimuli. 35

36 Biological Preparedness
Biological preparedness: Psychologist Martin Seligman argued that animals are genetically programmed to fear specific objects. People are predisposed to wariness of outgroup members. 36

37 FIGURE 6.15 Biological Preparedness
Animals have evolved to be able to detect threats. Thus, (a) we will quickly see the snake in this group of images, and (b) we will have a harder time detecting the flowers in this group. In both cases, the snakes grab our attention (Hayakawa, Kawai, & Masataka, 2011). 37

38 6.6 Learning Involves Expectancies and Prediction
Classical conditioning is a way that animals come to predict the occurrence of events. This predictive quality prompted psychologists to try to understand the mental processes that underlie conditioning. Robert Rescorla argued that for learning to take place, the conditioned stimulus must accurately predict the unconditioned stimulus. 38

39 6.6 Learning Involves Expectancies and Prediction
Rescorla-Wagner model: a cognitive model of classical conditioning; it holds that the strength of the CS-US association is determined by the extent to which the unconditioned stimulus is unexpected 39

40 Prediction Errors Prediction error: the difference between the expected and actual outcomes. A positive prediction error strengthens the association between the CS and the US. A negative prediction error weakens the CS-US relationship. 40
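To make the Rescorla-Wagner idea concrete, here is a minimal illustrative sketch (not from the slides; the function name, variable names, and the learning rate of 0.3 are arbitrary assumptions). On each trial the associative strength V moves toward the actual outcome in proportion to the prediction error, the difference between what happened and what was expected:

```python
# Minimal sketch of a Rescorla-Wagner-style update (illustrative; parameters are arbitrary).
# v       : current CS-US associative strength, i.e., how strongly the US is expected
# outcome : 1.0 when the US is delivered, 0.0 when it is omitted

def rescorla_wagner_update(v, outcome, learning_rate=0.3):
    prediction_error = outcome - v           # positive when the US is better than expected
    return v + learning_rate * prediction_error

v = 0.0
print("Acquisition (CS paired with US):")
for trial in range(1, 11):
    v = rescorla_wagner_update(v, outcome=1.0)   # positive error strengthens the association
    print(f"  trial {trial:2d}: V = {v:.3f}")

print("Extinction (CS presented without US):")
for trial in range(1, 11):
    v = rescorla_wagner_update(v, outcome=0.0)   # negative error weakens the association
    print(f"  trial {trial:2d}: V = {v:.3f}")
```

As V approaches the outcome, the prediction error shrinks and learning levels off, consistent with the gradually leveling acquisition and extinction curves shown in Figure 6.11.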

41 FIGURE 6.16 Rescorla-Wagner Model
The Rescorla-Wagner model of learning emphasizes prediction error. (a) Here a dog associates the sound of an electric can opener with the arrival of food. (b) With the substitution of a manual can opener for the electric one, the dog is initially surprised. What happened to the reliable predictor of the dog’s food? (c) This prediction error causes the dog to check the environment for a new stimulus. When the dog comes to associate the manual can opener with the arrival of food, the new stimulus has become the better predictor of the expected event: time to eat! 41

42 Role of Dopamine Dopamine and Prediction Error
What biological mechanisms are in effect during such learning? Researchers examined how dopamine neurons respond during conditioning. Prediction error signals alert us to important events in the environment. Researchers have recently found support for the prediction error model using optogenetics. By using optogenetics to activate dopamine neurons, researchers actually overcame the blocking effect. 42

43 FIGURE 6.17 Prediction Error and Dopamine Activity
Dopamine activity in the brain signals the receipt of a reward. (a) The blue line clearly shows a spike in dopamine activity. This activity resulted from a positive prediction error after the unexpected arrival of the US. (b) Once the US was associated with the CS, the spike in dopamine activity occurred after the arrival of the CS but not after the arrival of the expected US. (c) Dopamine activity continued after the arrival of the CS. However, once the US no longer appeared, negative prediction error resulted in decreased dopamine activity. 43

44 6.7 Phobias and Addictions Have Learned Components
Classical conditioning helps explain many behavioral phenomena. Among the examples are phobias and addictions.

45 Phobias Phobia: an acquired fear out of proportion to the real threat of an object or of a situation. Fear conditioning: the process of classically conditioning animals to fear neutral objects; the responses include specific physiological and behavioral reactions. Freezing: may be a hardwired response to fear that helps animals deal with predators

46 Phobias Fear conditioning: a type of classical conditioning that turns neutral stimuli into feared stimuli

47 Phobias: Case Study of “Little Albert”
In 1919, J. B. Watson became one of the first researchers to demonstrate the role of classical conditioning in the development of phobias by devising the “Little Albert” case study. At the time, the prominent theory of phobias was based on Freudian ideas about unconscious repressed sexual desires. Watson proposed that phobias could be explained by simple learning principles, such as classical conditioning.

48 Phobias: Case Study of “Little Albert”
“Little Albert” (who was 11 months old) was presented with neutral objects (a white rat, rabbit, dog, and costume masks) that provoked a neutral response. During conditioning trials, when Albert reached for the white rat (CS), a loud clanging sound (US) scared him (UR).

49 Phobias: Case Study of “Little Albert”
Results: Eventually, the pairing of the rat (CS) and the clanging sound (US) led to the rat’s producing fear (CR) on its own. The fear response generalized to other stimuli presented with the rat initially, such as the costume masks. Conclusion: Classical conditioning can cause people to fear neutral objects. 49

50 Phobias: Case Study of “Little Albert”
Watson planned to conduct extinction trials to remove the learned phobias, but Albert’s mother removed the child from the study. Is this type of research ethical? Watson’s colleague, Mary Cover Jones, used classical conditioning techniques to develop effective behavioral therapies to treat phobias in 3-year-old Peter. Counterconditioning: exposing a patient to small doses of the feared stimulus while he or she engages in an enjoyable task

51 FIGURE 6.18 Case Study of “Little Albert”
(a) In Watson’s experiment, Little Albert was presented with a neutral object—a white rat—that provoked a neutral response. Albert learned to associate the rat with a loud clanging sound that scared him. Eventually he showed the conditioned fear response when he saw the previously neutral rat. (b) The fear response generalized to other stimuli presented with the rat, such as costume masks. 51

52 FIGURE 6.19 Little Albert as an Adult
The best evidence indicates that Watson’s study participant Little Albert was William Albert Barger, who lived to be 87.

53 Drug Addiction Classical conditioning also plays an important role in drug addiction. Environmental cues associated with drug use can induce conditioned cravings. Unsatisfied cravings may result in withdrawal, an unpleasant state of tension and anxiety, coupled with changes in heart rate and blood pressure. The sight of drug cues leads to activation of the prefrontal cortex and various regions of the limbic system and produces an expectation that the drug high will follow. 53

54 Drug Addiction Psychologist Shepard Siegel believed exposing addicts to drug cues was an important part of treating addiction. He believed exposure helps extinguish responses to the cues and prevents them from triggering cravings.

55 Drug Addiction Siegel and his colleagues conducted research into the relationship between drug tolerance and situation. The body has learned to expect the drug in that location and compensates by altering neurochemistry or physiology to metabolize it. Conversely, if addicts take their usual large doses in novel settings, they are more likely to overdose because their bodies will not respond sufficiently to compensate. 55

56 How Do the Consequences of an Action Shape Behavior?
Learning Objectives Define operant conditioning. Distinguish between positive reinforcement, negative reinforcement, positive punishment, and negative punishment. Distinguish between schedules of reinforcement. Identify biological and cognitive factors that influence operant conditioning.

57 6.8 Operant Conditioning Involves Active Learning
Operant conditioning (Instrumental conditioning): a learning process in which the consequences of an action determine the likelihood that it will be performed in the future. B. F. Skinner chose the term operant to express the idea that animals operate on their environments to produce effects. 57

58 Law of Effect Edward Thorndike performed the first reported carefully controlled experiments in comparative animal psychology using a puzzle box. Law of Effect: any behavior that leads to a “satisfying state of affairs” is likely to occur again, and any behavior that leads to an “annoying state of affairs” is less likely to occur again. 58

59 Reinforcement Increases Behavior
Thirty years after Thorndike, Skinner developed a more formal learning theory, based on the law of effect. He objected to the subjective aspects of Thorndike’s law of effect: States of “satisfaction” are not observable empirically.

60 Reinforcement Increases Behavior
Skinner believed that behavior occurs because it has been reinforced. Reinforcer: a stimulus that follows a response and increases the likelihood that the response will be repeated. 60

61 Reinforcement Increases Behavior
Skinner developed the operant chamber, which allowed repeated conditioning trials without requiring interaction from the experimenter. It contained one lever connected to a food supply and another connected to a water supply.

62 FIGURE 6.21 B. F. Skinner
B. F. Skinner studies an animal’s operations on its laboratory environment. 62

63 FIGURE 6.22 Thorndike’s Puzzle Box
(a) Thorndike used puzzle boxes, such as the one depicted here, (b) to assess learning in animals. 63

64 FIGURE 6.23 Operant Chamber
This diagram shows B. F. Skinner’s operant chamber. 64

65 Shaping Sometimes animals take a long time to perform the precise desired action. What can be done to make them act more quickly? Shaping: an operant-conditioning technique that consists of reinforcing behaviors that are increasingly similar to the desired behavior. Successive approximations start with any behavior that even slightly resembles the desired behavior.

66 6.9 How Do Superstitions Start?
Most superstitions are harmless, but some can interfere with daily living when they get too extreme. As a critical thinker who understands psychological reasoning, you should be aware of the tendency to associate events with other events that occur at the same time. 66

67 FIGURE 6.25 Superstitions According to superstition, bad luck will come your way if (a) a black cat crosses your path or (b) you walk under a ladder. 67

68 6.9 How Do Superstitions Start?
Seeing Relationships That Do Not Exist: How Do Superstitions Start? The list of people’s superstitions is virtually endless. Culture influences specific superstitions: in North America and Europe, the number 13 is considered unlucky; in China, Japan, Korea, and Hawaii, the number 4. Many sports stars, including Michael Jordan and Wade Boggs, engage in superstitious behaviors. 68

69 6.9 How Do Superstitions Start?
The Scientific Study of Superstition B. F. Skinner started the scientific study of superstitious behavior in 1948, using pigeons as subjects. The pigeons developed a number of superstitious behaviors that they normally would not perform. Because these pigeons were performing particular actions when the reinforcers were given, their actions were accidentally reinforced. This type of learning is called autoshaping. 69

70 6.9 How Do Superstitions Start?
Associating Events That Occur Together in Time Both animals and humans have a tendency to associate events that occur together in time. This tendency is extremely strong because the brain is compelled to understand things. Pigeons develop behaviors that look like superstitions and people look for reasons to explain outcomes; the observed association serves that purpose. 70

71 6.9 How Do Superstitions Start?
Associating Events That Occur Together in Time Critical thinking requires us to understand psychological reasoning and be aware of the tendency to associate events with other events that occur at the same time. 71

72 6.10 There Are Many Types of Reinforcement
Primary reinforcers: satisfy biological needs, such as food or water. Secondary reinforcers: events or objects established through classical conditioning that serve as reinforcers but do not satisfy biological needs (e.g., money or compliments). 72

73 Reinforcer Potency David Premack theorized about how a reinforcer’s value could be determined. The key is the amount of time an organism, when free to do anything, engages in a specific behavior associated with the reinforcer. Premack principle: using a more valued activity can reinforce the performance of a less valued activity.

74 Positive and Negative Reinforcement
Reinforcement—positive or negative—increases the likelihood of a behavior. Positive reinforcement: the administration of a stimulus to increase the probability of a behavior being repeated. Negative reinforcement: the removal of a stimulus to increase the probability of a behavior being repeated

75 FIGURE 6.26 Positive Reinforcement and Negative Reinforcement
(a) In positive reinforcement, the response rate increases because responding causes the stimulus to be given. (b) In negative reinforcement, the response rate increases because responding causes the stimulus to be removed. 75

76 6.11 Operant Conditioning Is Influenced by Schedules of Reinforcement
How often should reinforcers be given? Continuous reinforcement: a type of learning in which behavior is reinforced each time it occurs. Partial reinforcement: a type of learning in which behavior is reinforced intermittently. The effect of partial reinforcement on conditioning depends on the reinforcement schedule

77 6.11 Operant Conditioning Is Influenced by Schedules of Reinforcement
Partial reinforcement can be administered according to either the number of behavioral responses or the passage of time. Ratio schedule: Reinforcement is based on the number of times the behavior occurs. Interval schedule: Reinforcement is provided after a specific unit of time. Ratio reinforcement generally leads to greater responding than does interval reinforcement.

78 6.11 Operant Conditioning Is Influenced by Schedules of Reinforcement
Partial reinforcement can also be given on a fixed schedule or a variable schedule. Fixed schedule: Reinforcement is provided after a specific number of occurrences or after a specific amount of time. Variable schedule: Reinforcement is provided at different rates or at different times.

79 6.11 Operant Conditioning Is Influenced by Schedules of Reinforcement
Fixed Interval Schedule (FI): occurs when reinforcement is provided after a certain amount of time has passed. Variable Interval Schedule (VI): occurs when reinforcement is provided after the passage of time, but the time is not regular. 79

80 FIGURE 6.27 Fixed Interval Schedule
Imagine a cat learning to perform “feed me” behaviors right before the two feeding times each day. The reinforcer (slash mark) is the food. 80

81 FIGURE 6.28 Variable Interval Schedule
Imagine yourself checking for texts and emails frequently throughout the day. The reinforcer (slash) is a message from a friend. 81

82 6.11 Operant Conditioning Is Influenced by Schedules of Reinforcement
Fixed Ratio Schedule (FR): occurs when reinforcement is provided after a certain number of responses have been made. Variable Ratio Schedule (VR): occurs when reinforcement is provided after an unpredictable number of responses. 82

83 FIGURE 6.29 Fixed Ratio Schedule
Imagine factory workers who are paid based on making a certain number of objects. The reinforcer (slash mark) is payment. 83

84 FIGURE 6.30 Variable Ratio Schedule
Imagine putting a lot of money into a slot machine in the hope that eventually you will win. The reinforcer (slash mark) is a payoff. 84

85 6.11 Operant Conditioning Is Influenced by Schedules of Reinforcement
Continuous reinforcement is highly effective for teaching a behavior. If the reinforcement is stopped, however, the behavior will extinguish quickly. Partial-reinforcement extinction effect: the greater persistence of behavior under partial reinforcement than under continuous reinforcement. This explains why gambling is so addictive
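As a rough illustration of how the two ratio schedules differ, here is a minimal sketch (not from the slides; the function names, the ratio of 5, and the 100-response session are arbitrary assumptions). Both schedules deliver roughly the same number of reinforcers, but the fixed ratio is perfectly predictable while the variable ratio is not:

```python
import random

def simulate_fixed_ratio(n_responses, ratio=5):
    """FR schedule: deliver a reinforcer after every `ratio`-th response."""
    reinforcers, since_last = 0, 0
    for _ in range(n_responses):
        since_last += 1
        if since_last == ratio:
            reinforcers += 1
            since_last = 0
    return reinforcers

def simulate_variable_ratio(n_responses, mean_ratio=5):
    """VR schedule: each response is reinforced with probability 1/mean_ratio,
    so reinforcement averages one per `mean_ratio` responses but arrives unpredictably."""
    return sum(random.random() < 1 / mean_ratio for _ in range(n_responses))

print("FR-5 reinforcers in 100 responses:", simulate_fixed_ratio(100))     # always 20
print("VR-5 reinforcers in 100 responses:", simulate_variable_ratio(100))  # about 20, varies
```

The unpredictability of the variable ratio schedule is what makes slot-machine payoffs, and partially reinforced behavior in general, so resistant to extinction.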

86 6.12 Punishment Decreases Behavior
Reinforcement and punishment have the opposite effects on behavior. Reinforcement increases a behavior’s probability; punishment decreases its probability. 86

87 Positive and Negative Punishment
Punishment reduces the probability that a behavior will recur. Positive punishment: the administration of a stimulus to decrease the probability of a behavior recurring. Negative punishment: the removal of a stimulus to decrease the probability of a behavior recurring. 87

88 FIGURE 6.31 Negative and Positive Reinforcement, Negative and Positive Punishment Use this chart to help solidify your understanding of these very important terms. 88

89 Effectiveness of Parental Punishment
For punishment to be effective, it must be reasonable, unpleasant, and applied immediately so the relationship between the unwanted behavior and the punishment is clear. Punishment often fails to offset the reinforcing aspects of the undesired behavior. 89

90 Effectiveness of Parental Punishment
Research indicates that physical punishment is often ineffective, compared with grounding and time-outs. Many psychologists believe that positive reinforcement is the most effective way of increasing desired behaviors while encouraging positive parent-child bonding. 90

91 FIGURE 6.32 Legality of Spanking
These maps compare (a) the United States and (b) Europe in terms of the legality of spanking children. 91

92 6.13 How Can Behavior Modification Help You Get in Shape?
Behavior modification: the use of operant-conditioning techniques to eliminate unwanted behaviors and replace them with desirable ones. Token economies operate on the principle of secondary reinforcement. Tokens are earned for completing tasks and lost for bad behavior. Tokens can later be traded for objects or privileges.

93 6.13 How Can Behavior Modification Help You Get in Shape?
Daily exercise steps to consider: Identify a behavior you wish to change. Set goals. Monitor your behavior. Select a positive reinforcer and decide on a reinforcement schedule. Reinforce the desired behavior. Modify your goals, reinforcements, or reinforcement schedules, as needed. 93
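Purely as an illustrative sketch (the 8,000-step goal, the token reinforcer, and the fixed ratio schedule below are hypothetical choices, not from the slides), the steps above amount to a simple loop: monitor the behavior, compare it with the goal, and deliver the chosen reinforcer on the chosen schedule.

```python
# Hypothetical example of the behavior-modification steps applied to daily exercise.
daily_step_counts = [5200, 8400, 9100, 7800, 8600, 10200, 8800]  # monitored behavior
STEP_GOAL = 8000   # the goal
tokens = 0         # secondary reinforcer (token economy)
goals_met = 0

for day, steps in enumerate(daily_step_counts, start=1):
    if steps >= STEP_GOAL:                      # compare behavior with the goal
        goals_met += 1
        if goals_met % 3 == 0:                  # fixed ratio: a token after every 3rd success
            tokens += 1
            print(f"Day {day}: goal met, token earned (total: {tokens})")
        else:
            print(f"Day {day}: goal met")
    else:
        print(f"Day {day}: goal not met, no reinforcer")

# Earned tokens could later be exchanged for a valued privilege,
# as in the token economies described above.
```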

94 6.14 Biology and Cognition Influence Operant Conditioning
Behaviorists such as Skinner believed that all behavior could be explained by straightforward conditioning principles. However, a great deal about behavior remains unexplained. Biology constrains learning, and reinforcement does not always have to be present for learning to take place.

95 Biological Constraints
Animals have a hard time learning behaviors that run counter to their evolutionary adaptation. Marian Breland and Keller Breland used operant-conditioning techniques to train animals but ran into difficulty when they chose tasks that were incompatible with innate adaptive behaviors.

96 Biological Constraints
Conditioning is most effective when the association between the response and the reinforcement is similar to the animal’s built-in predispositions. Psychologist Robert Bolles argued that animals have built-in defense reactions to threatening stimuli. 96

97 Acquisition/Performance Distinction
Tolman’s studies involved rats running through mazes. Cognitive map: a visual/spatial mental representation of an environment. The presence of reinforcement does not adequately explain insight learning, but it helps determine whether the behavior will be subsequently repeated. 97

98 Acquisition/Performance Distinction
Tolman argued that learning can take place without reinforcement. Latent learning takes place in the absence of reinforcement. Insight learning is a solution that suddenly emerges after a period of either inaction or contemplation.

99 FIGURE 6.35 Tolman’s Study of Latent Learning
Rats that were regularly reinforced for correctly running through a maze (Group 2) showed improved performance over time compared with rats that did not receive reinforcement (Group 1). Rats that were not reinforced for the first 10 trials but were reinforced thereafter showed an immediate change in performance (Group 3). Note that between days 11 and 12 Group 3’s average number of errors decreased dramatically. 99

100 6.15 Dopamine Activity Underlies Reinforcement
People often use the term reward as a synonym for positive reinforcement. Skinner and other traditional behaviorists defined reinforcement strictly in terms of whether it increased behavior. The neurotransmitter dopamine is involved in addictive behavior and plays an important role in reinforcement. 100

101 6.15 Dopamine Activity Underlies Reinforcement
When hungry rats are given food, they experience an increased dopamine release in the nucleus accumbens, a structure that is part of the limbic system. The greater the hunger, the greater the dopamine release. More dopamine is released under conditions of deprivation than under conditions of no deprivation. 101

102 6.15 Dopamine Activity Underlies Reinforcement
In operant conditioning, dopamine release sets the value of a reinforcer, and blocking dopamine decreases reinforcement. Dopamine blockers can also help people with Tourette’s syndrome regulate their involuntary body movements. 102

103 6.15 Dopamine Activity Underlies Reinforcement
Psychologists Terry Robinson and Kent Berridge introduced an important distinction between the wanting and liking aspects of reward. For example, a smoker may want a cigarette yet may not especially enjoy it. Dopamine appears to be especially important in wanting a reward. 103

104 How Do We Learn from Watching Others?
Learning Objectives Define observational learning. Generate examples of observational learning, modeling, and vicarious learning. Discuss contemporary evidence regarding the role of mirror neurons in learning. 104

105 6.16 Learning Can Occur Through Observation and Imitation
Observational learning: the acquisition or modification of a behavior after exposure to another individual performing that behavior (also known as social learning) Observational learning is a powerful adaptive tool for humans and other animals.

106 Bandura’s Observational Studies
Psychologist Albert Bandura’s studies suggest that exposing children to violence may encourage them to act aggressively.

107 FIGURE 6.38 Bandura’s Bobo Doll Studies
In Bandura’s studies, two groups of preschool children were shown a film of an adult playing with a large inflatable doll called Bobo. One group saw the adult play quietly (not shown here), and the other group saw the adult attack the doll (shown in the top row here). When children were allowed to play with the doll later, those who had seen the aggressive display were more than twice as likely to act aggressively toward the doll. 107

108 Modeling (Demonstration and Imitation)
Modeling: the imitation of observed behavior. Modeling is effective only if the observer is physically capable of imitating the behavior. Imitation is much less common in nonhuman animals than in humans. Adolescents who associate smoking with admirable figures are more likely to start smoking. 108

109 FIGURE 6.40 Movie Smoking and Adolescent Smoking This double-Y-axis graph compares the declining rate of smoking in movies with the declining rate of adolescent smoking. 109

110 Vicarious Learning (Reinforcement and Conditioning)
Vicarious learning: learning the consequences of an action by watching others being rewarded or punished for performing the same action A key distinction in learning is between the acquisition of a behavior and its performance. Learning a behavior does not necessarily lead to performing that behavior. 110

111 FIGURE 6.41 Two Types of Observational Learning
111

112 6.17 Watching Violence in Media May Encourage Aggression
The extent to which media violence impacts aggressive behavior in children is debatable. Some studies demonstrate desensitization to violence after exposure to violent video games. However, it is difficult to draw the line between “playful” and “aggressive” behaviors in children. Most research in the area of TV and aggression shows a relationship between exposure to violence on TV and aggressive behavior.

113 FIGURE 6.42 Media Use by Young Americans
This bar graph shows the results of a study sponsored by the Kaiser Family Foundation, which provides information about health issues. “Total media use” means total hours individuals spent using media. Sometimes these individuals used more than one category of media at once. 113

114 FIGURE 6.43 Media and Violent Behavior
Studies have shown that playing violent video games desensitizes children to violence. 114

115 6.18 Fear Can Be Learned Through Observation
Psychologist Susan Mineka noticed that lab-reared monkeys were not afraid of snakes to the same extent as monkeys in the wild. Her research demonstrated that animals’ fears can be learned through observation. Social forces also play a role in fear learning in humans.

116 6.19 Mirror Neurons Are Activated by Watching Others
Mirror neurons: neurons in the brain that are activated both when one observes another individual engage in an action and when one performs a similar action oneself. They may serve as the basis of imitation learning, but the firing of mirror neurons does not always lead to imitative behavior. They are possibly the neural basis for empathy and may play a role in humans’ ability to communicate through language.

