
Presentation Transcript


  1. Chapter 7 Operant Conditioning: Schedules and Theories of Reinforcement

  2. Now that we have discussed reinforcement . . . . • It is time to discuss just HOW reinforcements can and should be delivered • In other words, there are other things to consider than just WHAT the reinforcer should be!

  3. Think about this! • If you were going to reinforce your puppy for going to the bathroom outside, how would you do it? • Would you give him a Liv-a-Snap every time? Some of the time? • Would you keep doing it the same way or would you change your method as you go along?

  4. What is a schedule of reinforcement? • A schedule of reinforcement is the response requirement that must be met in order to obtain reinforcement. • In other words, it is what you have to do to get the goodies!

  5. Continuous vs. Intermittent Reinforcement • Continuous: a continuous reinforcement schedule (CRF) is one in which each specified response is reinforced • Intermittent: an intermittent reinforcement schedule is one in which only some responses are reinforced

  6. Intermittent Schedules • When you want to reinforce based on a certain number of responses occurring (for example, doing a certain number of math problems correctly), you can use a ratio schedule • When you want to reinforce the first response after a certain amount of time has passed (for example, when a teacher gives a midterm test), you can use an interval schedule

  7. Four Types of Intermittent Schedules • Ratio schedules: Fixed Ratio and Variable Ratio • Interval schedules: Fixed Interval and Variable Interval

  8. Fixed Ratio Schedule • On a fixed ratio schedule, reinforcement is contingent upon a fixed, predictable number of responses • Characteristic pattern: • High rate of response • Short pause following each reinforcer • Reading a chapter then taking a break is an example • A good strategy for “getting started” is to start with an easy task

  9. Fixed Ratio, continued • Higher ratio requirements result in longer post-reinforcement pauses • Example: The longer the chapter you read, the longer the study break! • Ratio strain – a disruption in responding due to an overly demanding response requirement • Movement from a “dense/rich” schedule to a “lean” schedule should be done gradually

  10. Fixed Ratio: FR • Fixed Ratio is abbreviated “FR” and a number showing how many responses must be made to get the reinforcer is added: • Ex. FR 5 (5 responses needed to get a reinforcer)

  11. Variable Ratio Schedule • On a variable ratio schedule, reinforcement is contingent upon a varying, unpredictable number of responses • Characteristic pattern: • High and steady rate of response • Little or no post-reinforcer pausing • Hunting, fishing, golfing, shooting hoops, and telemarketing are examples of behaviors on this type of schedule

  12. Other facts aboutVariable Ratio Schedules • Behaviors on this type of schedule tend to be very persistent • This includes unwanted behaviors like begging, gambling, and being in abusive relationships • “Stretching the ratio” means starting out with a very dense, rich reinforcement schedule and gradually decreasing the amount of reinforcement • The spouse, gambler, or child who is the “victim” must work harder and harder to get the reinforcer

  13. Variable Ratio: VR • Variable Ratio is abbreviated “VR” and a number showing the average number of responses that must be made to get the reinforcer is added: • Ex. VR 50 (an average of 50 responses needed to get a reinforcer – it could be the very next response, or it could take 72!) • Gambling is the classic example!

  14. Fixed Interval Schedules • On a fixed interval schedule, reinforcement is contingent upon the first response after a fixed, predictable period of time • Characteristic pattern: • A “scallop” pattern produced by a post-reinforcement pause followed by a gradually increasing rate of response as the time interval draws to a close • Glancing at your watch during class provides an example! • Student study behavior provides another!

  15. Fixed Interval: FI • Fixed Interval is abbreviated “FI” and a number showing how much time must pass before the reinforcer is available is added: • FI 30-min (reinforcement is available for the first response after 30 minutes have passed) • Ex. Looking down the tracks for the train if it comes every 30 minutes

  16. Variable Interval Schedule • On a variable interval schedule, reinforcement is contingent upon the first response after a varying, unpredictable period of time • Characteristic pattern: • A moderate, steady rate of response with little or no post-reinforcement pause • Looking down the street for the bus if you are waiting and have no idea how often it comes provides an example!

  17. Variable Interval: VI • Variable Interval is abbreviated “VI” and a number showing the average time interval that must pass before the reinforcer is available is added: • VI 30-min (reinforcement is available for the first response after an average of 30 minutes has passed) • Ex. Hilary’s boyfriend, Michael, gets out of school and turns on his phone some time between 3:00 and 3:30 – the “reward” of his answering his phone puts her calling behavior on a VI schedule, so she calls every few minutes until he answers
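
To make the four schedules concrete, here is a minimal Python sketch (not part of the presentation) that simulates each rule over discrete time steps. It assumes a simplified learner that responds on every step; the function names and parameter values are made up for illustration and mirror the FR 5 / VR 50 / FI 30 / VI 30 notation above.

    import random

    def simulate(schedule, steps=200):
        # The subject responds once per time step; schedule(t) returns True
        # whenever that response earns a reinforcer.
        return sum(schedule(t) for t in range(steps))

    def fixed_ratio(n):
        # FR n: every n-th response is reinforced (e.g., FR 5)
        count = 0
        def rule(t):
            nonlocal count
            count += 1
            if count == n:
                count = 0
                return True
            return False
        return rule

    def variable_ratio(mean):
        # VR mean: the required number of responses varies unpredictably,
        # averaging `mean` (it could be the very next response, or many more)
        count, required = 0, random.randint(1, 2 * mean - 1)
        def rule(t):
            nonlocal count, required
            count += 1
            if count >= required:
                count, required = 0, random.randint(1, 2 * mean - 1)
                return True
            return False
        return rule

    def fixed_interval(interval):
        # FI interval: the first response after `interval` time steps is reinforced
        last = 0
        def rule(t):
            nonlocal last
            if t - last >= interval:
                last = t
                return True
            return False
        return rule

    def variable_interval(mean):
        # VI mean: the required wait varies unpredictably, averaging `mean`
        last, wait = 0, random.randint(1, 2 * mean - 1)
        def rule(t):
            nonlocal last, wait
            if t - last >= wait:
                last, wait = t, random.randint(1, 2 * mean - 1)
                return True
            return False
        return rule

    for name, rule in [("FR 5", fixed_ratio(5)), ("VR 5", variable_ratio(5)),
                       ("FI 10", fixed_interval(10)), ("VI 10", variable_interval(10))]:
        print(name, "reinforcers earned:", simulate(rule))

With the numbers used here, the ratio schedules deliver roughly one reinforcer per n responses, while the interval schedules deliver roughly one per n time steps no matter how quickly the learner responds – the same contrast described in slides 8 through 17.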

  18. Noncontingent Reinforcement • What happens when reinforcement occurs randomly, regardless of a person or animal’s behavior? • Weird Stuff! • Like what?

  19. Superstitious Behavior • Examples include: • Rituals of gamblers, baseball players, etc. • Elevator-button-pushing behavior • Noncontingent reinforcement can sometimes be used for GOOD purposes (not just weird or useless behaviors!)

  20. Good, useful examples • Giving noncontingent attention to children • Some bad behaviors like tantrums are used to try to get attention from caregivers • These behaviors can be diminished by giving attention noncontingently • Children need both contingent AND noncontingent attention to grow up healthy and happy!

  21. Theories of Reinforcement • In the effort to answer the question, “What makes reinforcers work?”, theorists have developed some . . . . . THEORIES!!!!!

  22. So here’s the first one: • If you are hungry and go looking for food and eat some, you will feel more comfortable because the hunger has been reduced. • The desire to have the uncomfortable “hunger drive” reduced motivates you to seek out and eat the food

  23. Drive Reduction Theory • So this is one thing that can make reinforcers work: • An event is reinforcing to the extent that it is associated with a reduction in some type of physiological drive • This type of approach may explain some behaviors (like sex) but not others (like playing video games)

  24. Incentive Motivation • Sometimes, we just do things because they are FUN! • When this happens, we can say that motivation is coming from some property of the reinforcer itself rather than from some kind of internal drive • Examples include playing games and sports, putting spices on food, etc.

  25. We can also think about how we use reinforcers. • We can use a behavior we love (high-probability behavior) to reinforce a behavior we don’t like to do very much (low-probability behavior). • This is sometimes called “Grandma’s Principle” (the Premack principle) • Bobby, you can read those comic books once you have mowed the grass! • To use this theory, you have to know the “relative probability” of each behavior

  26. What do you do if you only know the “probability” for one? • You can use the next theory! • Let’s say you know that a person likes to play video games. You can use playing video games as a reinforcer IF you: • Restrict access to playing • Make sure the person is getting to play less frequently than they prefer to

  27. This is the “Response Deprivation Hypothesis” • Any behavior can be used as a reinforcer if you restrict access to it and keep it below the person’s or animal’s preferred level of doing it • Think of some examples!
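
As a rough illustration (the activities and minute counts below are made up, not from the presentation), both ideas can be written as a couple of lines of Python: Grandma’s Principle picks the higher-probability behavior as the reinforcer, and the Response Deprivation Hypothesis checks whether access has been restricted below the baseline level.

    # Hypothetical baseline: minutes per day each activity gets when access is unrestricted
    baseline = {"video games": 120, "mowing the grass": 10}

    def premack_reinforcer(baseline):
        # Grandma's Principle: the higher-probability (more frequent) behavior
        # can be used to reinforce the lower-probability one
        return max(baseline, key=baseline.get)

    def can_serve_as_reinforcer(activity, allowed_minutes, baseline):
        # Response Deprivation Hypothesis: an activity works as a reinforcer
        # only while access is held below its preferred (baseline) level
        return allowed_minutes < baseline[activity]

    print(premack_reinforcer(baseline))                           # video games
    print(can_serve_as_reinforcer("video games", 30, baseline))   # True: access restricted below baseline
    print(can_serve_as_reinforcer("video games", 180, baseline))  # False: no deprivation, weak reinforcer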

  28. Behavioral Bliss Point • The Response Deprivation Hypothesis makes an assumption that there is an optimal or best level of behavior that a person or animal tries to maintain • If you could do ANYTHING at all you wanted to do, how would you distribute your time? • This would tell you your “behavioral bliss point” for each activity or behavior

  29. Behavioral Bliss Point cont’d • An organism that has free access to alternative activities will distribute its behavior in such a way as to maximize overall reinforcement • In other words, if you can do anything you want, you will spend time on each thing you do in a way that will give you the most pleasure

  30. But this is real life! • This means that you can almost never achieve your “behavioral bliss point” • So you have to compromise by coming as close as you can, given your circumstances • No wonder we hate to leave our childhoods behind!
