
Operant Conditioning: Principles, Processes, and Schedules (Chapter 6 Study Notes)


Operant Conditioning

Introduction to Operant Conditioning

Operant conditioning is a fundamental concept in learning theory, describing how behaviors are shaped and maintained by their consequences. This form of learning is also known as instrumental learning, as responses are controlled by the outcomes they produce.

  • Definition: Operant conditioning is a type of learning in which voluntary behaviors are influenced by the consequences that follow them.

  • Key Figure: Edward L. Thorndike (1913) introduced the law of effect:

    • If a response in the presence of a stimulus leads to satisfying effects, the association between the stimulus and the response is strengthened.

  • Example: A cat learns to escape a puzzle box faster over repeated trials, as shown in Thorndike's learning curve.

B.F. Skinner and the Principle of Reinforcement

B.F. Skinner expanded on Thorndike's work, emphasizing the role of reinforcement in shaping behavior. Skinner's research utilized the operant chamber (Skinner box) to systematically study behavioral responses.

  • Principle of Reinforcement: Occurs when an event following a response increases an organism's tendency to make that response.

  • A rewarding consequence strengthens the initial response.

  • Reinforcement is subject to individual differences; what is reinforcing for one organism may not be for another.

  • Example: A rat presses a lever to receive food in a Skinner box.

Types of Reinforcers

Primary vs. Secondary Reinforcers

Reinforcers are stimuli that increase the likelihood of a behavior being repeated. They are classified as primary or secondary based on their origin and association.

  • Primary Reinforcers: Inherently reinforcing because they satisfy basic biological needs.

    • Examples: Food, water, sex.

  • Secondary Reinforcers: Acquire reinforcing qualities through association with primary reinforcers.

    • Examples: Money, praise, flattery.

  • Immediate reinforcement produces the fastest conditioning.

Basic Processes in Operant Conditioning

Acquisition, Shaping, and Extinction

Operant conditioning involves several key processes that determine how behaviors are learned and maintained.

  • Acquisition: The initial stage of learning when a response is first established.

  • Shaping: Reinforcement of closer and closer approximations of a desired response.

  • Extinction: Gradual weakening and disappearance of a response when it is no longer followed by a reinforcer.

  • Renewal Effect: An extinguished response reappears when the organism is placed in a new context.
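The processes above can be illustrated with a toy simulation in which response strength climbs during acquisition and decays during extinction. This is a hypothetical sketch only: the linear update rule, the 0.2 learning rate, and the function names are illustrative assumptions, not a formal learning model from the chapter.

```python
# Toy simulation of acquisition and extinction in operant conditioning.
# Hypothetical sketch: the linear update rule and the 0.2 learning rate
# are illustrative assumptions, not a formal learning model.

def run_phase(strength, trials, reinforced, rate=0.2):
    """Update response strength over a block of trials and record it."""
    history = []
    for _ in range(trials):
        if reinforced:
            # Acquisition: each reinforced response strengthens toward 1.0
            strength += rate * (1.0 - strength)
        else:
            # Extinction: without reinforcement, strength decays toward 0.0
            strength -= rate * strength
        history.append(round(strength, 3))
    return strength, history

s = 0.05                                          # weak initial response
s, acquisition = run_phase(s, 10, reinforced=True)
s, extinction = run_phase(s, 10, reinforced=False)

print("acquisition:", acquisition)  # rises steeply, then levels off
print("extinction: ", extinction)   # decays gradually toward zero
```

Printing the two histories reproduces the familiar curve shapes: steep early gains that level off during acquisition (much like Thorndike's learning curve), then a gradual decline during extinction.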

Stimulus Control: Discrimination and Generalization

  • Stimulus Discrimination: An organism responds to one stimulus but not to another similar one.

  • Stimulus Generalization: A new stimulus is responded to as if it were the original.

Comparison of Classical and Operant Conditioning

The basic processes compare across classical and operant conditioning as follows:

  • Acquisition:

    • Classical: The initial stage of learning; CS and UCS are paired, gradually resulting in CR.

    • Operant: Responding gradually increases because of reinforcement, possibly through shaping.

  • Extinction:

    • Classical: The gradual weakening and disappearance of a conditioned response tendency; CS is presented alone until it no longer elicits CR.

    • Operant: Responding gradually slows and stops after reinforcement is terminated.

  • Stimulus Generalization:

    • Classical: Responding increases to new stimuli that resemble the original CS.

    • Operant: Responding increases in the presence of new stimuli that resemble the original discriminative stimulus.

  • Stimulus Discrimination:

    • Classical: Responding decreases to new stimuli that resemble the original CS but are presented without the UCS.

    • Operant: Responding does not increase in the presence of new stimuli that resemble the original discriminative stimulus.

Schedules of Reinforcement

Types of Reinforcement Schedules

Reinforcement schedules determine how and when a behavior is followed by a reinforcer. These schedules influence the rate and stability of learned behaviors.

  • Continuous Reinforcement: Every instance of a designated response is reinforced.

  • Intermittent (Partial) Reinforcement: Only some responses are reinforced.

Ratio Schedules

  • Fixed-Ratio (FR): Reinforcer is given after a fixed number of nonreinforced responses.

    • Example: Financial bonus after 5 business deals are closed.

  • Variable-Ratio (VR): Reinforcer is given after a variable number of nonreinforced responses.

    • Example: Slot machine player wins, on average, every 6 tries.

Interval Schedules

  • Fixed-Interval (FI): Reinforcer is given for the first response after a fixed time interval has elapsed.

    • Example: Unproductive government employee gets paid biweekly.

  • Variable-Interval (VI): Reinforcer is given for the first response after a variable time interval has elapsed.

    • Example: Surfer gets rewarded after waiting around for a big wave.

Patterns of Response

  • Fixed schedules tend to produce a pause after reinforcement, followed by rapid responding.

  • Variable schedules produce steady, high rates of responding.
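The four intermittent schedules can be summarized as small decision rules, each answering "is this response reinforced?" The sketch below is a hypothetical illustration: the function names and parameter values are assumptions, and the variable-ratio schedule is approximated probabilistically rather than defined as in the chapter.

```python
import random

# Hypothetical decision rules for the four intermittent schedules.
# Function names and parameter values are illustrative assumptions;
# each function answers: "is this response reinforced?"

def fixed_ratio(n_responses, ratio=5):
    # FR: reinforce every `ratio`-th response (e.g., bonus after 5 sales)
    return n_responses % ratio == 0

def variable_ratio(mean_ratio=6):
    # VR approximated probabilistically (a random-ratio rule):
    # on average one reinforcer per `mean_ratio` responses
    return random.random() < 1.0 / mean_ratio

def fixed_interval(elapsed, interval=14.0):
    # FI: reinforce the first response once `interval` time units
    # have passed since the last reinforcer
    return elapsed >= interval

def variable_interval(elapsed, required):
    # VI: like FI, but `required` is redrawn (around some mean)
    # after every reinforcer
    return elapsed >= required

# On an FR-5 schedule, responses 5, 10, and 15 are reinforced:
print([n for n in range(1, 16) if fixed_ratio(n)])  # [5, 10, 15]
```

Note the design difference the code makes concrete: ratio rules count responses, while interval rules only check elapsed time, which is why ratio schedules reward fast responding and interval schedules do not.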

Positive vs. Negative Reinforcement

Increasing a Response

Reinforcement can be classified as positive or negative, depending on whether a stimulus is presented or removed following a behavior.

  • Positive Reinforcement: Response is followed by the presentation of a rewarding stimulus.

    • Example: Giving a child a toy for good behavior.

  • Negative Reinforcement: Response is followed by the removal of an aversive stimulus.

    • Example: A parent gives a whining child a toy; the whining stops, which negatively reinforces the parent's response.

  • Escape Learning: Behavior removes an aversive stimulus (e.g., rat presses lever to turn off shock).

  • Avoidance Learning: Behavior prevents an aversive stimulus before it occurs (e.g., rat moves to the other compartment when a light signals impending shock).

Key Equations and Concepts

  • Law of Effect (Thorndike):

    • If a response in the presence of a stimulus leads to a satisfying effect, then the stimulus–response association is strengthened.

  • Reinforcement Rate:

    • Conceptually, the number of reinforcers delivered per unit of time; actual reinforcement rates depend on the schedule and context.

Summary: Schedules of Reinforcement

  • Fixed-Ratio (FR):

    • Definition: Reinforcer delivered after a fixed number of responses.

    • Example: Bonus after 5 sales.

    • Response pattern: High rate, with a brief pause after each reinforcement.

  • Variable-Ratio (VR):

    • Definition: Reinforcer delivered after a variable number of responses.

    • Example: Slot machine wins.

    • Response pattern: High, steady rate.

  • Fixed-Interval (FI):

    • Definition: Reinforcer delivered for the first response after a fixed time interval.

    • Example: Biweekly paycheck.

    • Response pattern: Scalloped pattern, with a pause after each reinforcement.

  • Variable-Interval (VI):

    • Definition: Reinforcer delivered for the first response after a variable time interval.

    • Example: Surfer rewarded after waiting for a wave.

    • Response pattern: Low, steady rate.

Applications and Examples

  • Operant conditioning principles are widely used in education, animal training, therapy, and behavior modification.

  • Understanding reinforcement schedules helps in designing effective interventions for learning and behavior change.

