Operant conditioning is an approach to human learning based on the premise that human intelligence and will operate on the environment rather than merely responding to the environment's stimuli.
Operant conditioning is an elaboration of classical conditioning. It holds that human learning is more complex than the model developed by Ivan Pavlov (1849-1936): human intelligence and will operate (hence the name) on the environment rather than responding slavishly to stimuli.
The Pavlovian model of classical conditioning was revolutionary in its time but eventually came to be seen as limited in its application to most human behavior, which is far more complex than a series of automatic responses to various stimuli. B.F. Skinner (1904-1990) elaborated on this concept by introducing the idea of consequences into the behaviorist formula of human learning. Pavlov's classical conditioning explained behavior strictly in terms of stimuli, demonstrating a causal relationship between stimuli and behavior. In Pavlov's model, humans responded to stimuli in specific, predictable ways. For Skinner, however, behavior is far more complex, allowing for the introduction of choice and free will. According to operant conditioning, the likelihood that a behavior will be repeated depends to a great degree on the amount of pleasure or pain that behavior has brought about in the past. Skinner also added to the vocabulary of behaviorism the concepts of positive and negative reinforcement and of punishment, which can be summarized as follows:

| |Definition|Positive|Negative|
|Reinforcement|The frequency of a behavior is increased because of its consequences.|When a person receives reinforcement after engaging in some behavior, the person is likely to repeat that behavior.|When a person experiences a negative state and does something to eliminate the undesired state, the person is likely to repeat that behavior.|
|Punishment|The frequency of a behavior is decreased because of its consequences.|When a person engages in a behavior and something negative is applied as a result, that behavior is less likely to be repeated.|When a person engages in a behavior and something positive is taken away, that behavior is less likely to be repeated.|
According to the Skinner model of operant conditioning, humans learn behaviors through a trial-and-error process whereby they remember which behaviors elicited positive, or pleasurable, responses and which elicited negative ones. He derived these theories from observing the behaviors of rats and pigeons isolated in what have come to be known as Skinner boxes. Inside the boxes, rats that had been deprived of food were presented with a lever that, when pressed, would drop a pellet of food into the cage. Of course, the rat wouldn't know this, and so the first time it hit the lever, it was purely accidental, the result of what Skinner called random trial-and-error behavior. Eventually, however, the rat would "learn" that hitting the lever resulted in the appearance of food, and it would continue doing so. Receiving the food, then, in the language of operant conditioning, is considered the reinforcer, while hitting the lever becomes the operant, the way the organism operates on its environment.
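The trial-and-error process described above can be illustrated with a small simulation (a toy sketch, not anything from Skinner's own work; the function name, learning rate, and update rule are illustrative assumptions):

```python
import random

def simulate_operant_learning(trials=1000, seed=0):
    """Toy model of operant learning in a Skinner box.

    The agent starts with only a small chance of pressing the lever
    (accidental, trial-and-error behavior). Each press delivers food
    (the reinforcer), which slightly strengthens the pressing behavior
    (the operant) by raising the probability of pressing again.
    """
    rng = random.Random(seed)
    press_prob = 0.05      # initial rate: accidental presses only
    learning_rate = 0.05   # how strongly each reinforcement strengthens the operant
    presses = 0
    for _ in range(trials):
        if rng.random() < press_prob:
            presses += 1
            # reinforcement: move press probability a step toward 1.0
            press_prob = min(1.0, press_prob + learning_rate * (1.0 - press_prob))
    return press_prob, presses
```

Run over many trials, the press probability climbs toward 1.0 as reinforcement accumulates, mirroring the rat that at first hits the lever by accident and ends up pressing it reliably.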
Skinner's model of operant conditioning broke reinforcement down into four kinds in order to study the effects these various "schedules of reinforcement" would have on behavior. These schedules are: fixed interval, variable interval, fixed ratio, and variable ratio. In a fixed interval schedule experiment, the lever in the rat's box would only provide food at a specific rate, regardless of how often the rat pressed the lever; for example, food would be provided at most once every 60 seconds. Eventually, the rat adapts to this schedule, pressing the lever with greater frequency approximately every 60 seconds. In variable interval experiments, the lever becomes active at random intervals. Rats presented with this problem adapt by pressing the lever less frequently but at more regular intervals. An experiment using a fixed ratio schedule uses a lever that becomes active only after the rat presses it a specific number of times, and in a variable ratio experiment the number of presses between activations is random. The rats' behavior adapts to these conditions and is adjusted to provide the most rewards.
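The four schedules can be sketched as small Python classes (an illustrative sketch, not from the source; the class names and the time-stepped `press(t)` interface, which returns whether a press at time `t` yields food, are my own assumptions):

```python
import random

class FixedRatio:
    """Fixed ratio: reward every nth press (e.g. food after every 5 presses)."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def press(self, t):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """Variable ratio: reward after a randomly varying number of presses."""
    def __init__(self, mean_n, seed=0):
        self.rng = random.Random(seed)
        self.mean_n, self.count = mean_n, 0
        self.target = self.rng.randint(1, 2 * mean_n - 1)
    def press(self, t):
        self.count += 1
        if self.count >= self.target:
            self.count = 0
            self.target = self.rng.randint(1, 2 * self.mean_n - 1)
            return True
        return False

class FixedInterval:
    """Fixed interval: reward the first press after a fixed time has elapsed
    (e.g. food at most once every 60 seconds, however often the lever is pressed)."""
    def __init__(self, interval):
        self.interval, self.last = interval, 0.0
    def press(self, t):
        if t - self.last >= self.interval:
            self.last = t
            return True
        return False

class VariableInterval:
    """Variable interval: reward the first press after a randomly varying delay."""
    def __init__(self, mean_interval, seed=0):
        self.rng = random.Random(seed)
        self.mean = mean_interval
        self.next_at = self.rng.uniform(0, 2 * mean_interval)
    def press(self, t):
        if t >= self.next_at:
            self.next_at = t + self.rng.uniform(0, 2 * self.mean)
            return True
        return False
```

For instance, a rat pressing a `FixedRatio(5)` lever 20 times collects exactly 4 pellets, whereas pressing a `FixedInterval(60)` lever once per second for 200 seconds yields only 3, no matter how furiously it presses in between.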
The real-world ramifications of operant conditioning experiments are easy to imagine, and many of the experiments described would probably sound very familiar to parents who use such systems of rewards and punishments on a daily basis with their children regardless of whether they have ever heard of B.F. Skinner. His model has been used by learning theorists of various sorts to describe all kinds of human behaviors. Since the 1960s, however, behaviorism has taken a back seat to cognitive theories of learning, although few dispute the elementary tenets of operant conditioning and their use in the acquisition of rudimentary adaptive behaviors.