The Management Theory Toolbox

Episode 15: Shaping Behavior—The Unique Power of Each Reinforcement Schedule with Dr. Hank Schlinger, Jr.

Season 2 Episode 15

Ever wonder why your team behaves the way they do? In this episode of The Management Theory Toolbox, we pull back the curtain on the psychology of reinforcement schedules—techniques that could be the secret sauce to boosting your team’s motivation and performance. Whether you're looking to fine-tune your leadership style or curious about the science behind behavior, this episode is a must-listen!

Episode Summary:
Welcome back to The Management Theory Toolbox! I'm your host, Travis Mallett, and today, we're wrapping up our series on operant conditioning with a deep dive into the powerful concept of reinforcement schedules. These are the invisible forces that shape behavior in the workplace—and beyond.

Joined by Dr. Hank Schlinger, a renowned expert in behavior analysis from California State University, we explore different types of reinforcement schedules, including fixed and variable intervals and ratios, and their profound impact on employee behavior. We’ll discuss real-world applications, like how slot machines and even smartphones use these principles to keep us hooked, and how you can ethically apply them in your organization to boost morale and productivity.

We also address a critical question: Can manipulating reinforcement schedules turn employees into mere machines? Dr. Schlinger sheds light on this ethical dilemma, arguing that when done right, these techniques can actually make your team feel more valued and motivated.

Guest Resources:

  • Dr. Schlinger's books, including his latest parenting book, How to Build Good Behavior and Self-Esteem in Children, are available on Amazon.
  • Book website: www.buildgoodbehavior.com
  • Find Dr. Schlinger's research on Google Scholar via California State University, Los Angeles.

Key Takeaways:

  • Variable Ratio Schedules: These schedules can lead to high levels of persistence in behavior, making them powerful tools in both motivation and habit formation.
  • Ethical Considerations: Reinforcement schedules aren’t just about getting what you want; they can also be used to make employees feel acknowledged and valued.
  • Real-World Applications: From casinos to classrooms, understanding the science behind reinforcement can help you design more effective and ethical management strategies.

Call to Action:
This week, take a moment to reflect on the reinforcements you’re offering in your workplace. Are you consistently rewarding the behaviors you want to see, or are you leaving it up to chance? Try experimenting with a variable ratio schedule and see if it leads to better results. And remember, it’s not just about productivity—it's about creating a work environment where your team feels genuinely valued.

Thank you for joining us on this episode of The Management Theory Toolbox. Don’t forget to subscribe, leave a review, and share this episode with fellow managers and leaders who are keen to unlock the full potential of their teams. Stay tuned for our next episode, where we’ll explore the impact of social interactions on employee learning. Until then, keep learning, keep growing, and keep adding to your management theory toolbox!

Hank Schlinger:

And that's free. It doesn't cost the manager anything. In that sense, you're consciously and intentionally using reinforcement, but I don't think it reduces your employee to anything. I think it makes them feel more valued and more worthy, etc.

Travis Mallett:

Welcome back to the Management Theory Toolbox. I'm your host, Travis Mallett, and I'm thrilled to have you join me on this journey of continuous learning and growth as we navigate the dynamic world of management. Now, this isn't your typical management podcast. Yes, there are plenty of resources out there that will give you the ABCs of how to run a meeting, hire someone, or even how to fake a sick day without getting caught, but here we like to talk about the behind-the-scenes topics, those concepts and ideas which transcend specific management practices rather than simply restate them. We aren't going to give you specific tips and tricks for becoming an effective manager.

Travis Mallett:

Here at the Management Theory Toolbox, we're interested in the why behind it all: the discoveries of behavioral science, psychology, business and economics that will open our eyes to what's happening behind the scenes. In episodes 11 through 14, we've been learning about operant conditioning, which is one of the ways people learn or have their behavior shaped. One of the reasons we've been studying this is because the ability to learn is vital to organizational survival and maintaining competitive advantage in an ever-evolving business landscape. Today, we're going to wrap up our study of operant conditioning by talking about schedules of reinforcement. To help understand this topic, let's listen to Dwight Schrute's scheme of positive reinforcement from The Office. Listen up, come to the center of the room please. This is a Schrute buck. When you have done something good, you will receive one Schrute buck.

Hank Schlinger:

1,000 Schrute bucks equals an extra five minutes for lunch.

Travis Mallett:

What is the cash value of a Schrute buck? Excellent question, Pam.

Hank Schlinger:

One one-hundredth of a cent, so 10,000 of your dollars is worth one real dollar.

Travis Mallett:

Just zip your lid. Now let us discuss precipitation, Stanley. When rainfall occurs, does it usually fall in a liquid, solid or gaseous state? Liquid. Very good, you have earned one Schrute buck. I don't want it. Then you have been deducted 50 Schrute bucks. Make it 100.

Hank Schlinger:

Don't you want to earn Schrute bucks? No, in fact, I'll give you a billion Stanley nickels if you never talk to me again. What's the ratio of Stanley nickels to Schrute bucks? The same as the ratio of unicorns to leprechauns. Okay, that's it. Blast, cancel. Everybody out. No wait, what are you doing?

Travis Mallett:

I'm punishing them. Aside from the fact that Dwight's Schrute bucks hold essentially no real value, and that the scheme doesn't take into account whether a Schrute buck is really a positive reinforcer for everyone, this scheme is interesting because there's no indication of how many good behaviors it takes to get rewarded with a Schrute buck. Maybe Dwight intends to reward every single instance of good behavior with a Schrute buck, which would be called a continuous schedule of reinforcement. Or, more likely, it's up to the whims of Dwight and the Schrute bucks will be handed out in an unpredictable and random fashion. If we think about it, there are all sorts of ways we could design how often we give people reinforcements to encourage or shape their behavior. What if we provided the reinforcement every three times they do a desired behavior? Or what if the reinforcement is provided randomly but happens on average every three times? Each of these would be a different schedule of reinforcement and could elicit a different response from the receiving person, and this makes me wonder if there's a best schedule of reinforcement, one which gives us the highest probability of desired behavior.

Travis Mallett:

I actually stumbled across some of these differences while playing with my four-year-old son. See, he loves giving high fives over and over, but he especially loves it if he gives a big high five and gets some dramatic response: you got me, you got me! Okay. I've noticed that if I give him that crazy overreacting response every single time he gives me a solid high five, only rewarding those high fives which are good, solid ones, he has a particular pattern of response. This is called a continuous reinforcement schedule: every successful attempt is rewarded.

Travis Mallett:

As you might expect from the tenets of operant conditioning, his response is indeed to focus his attempts on giving good, solid high fives rather than weak side swipes, since it's the solid ones which earn him a reaction that he finds hilarious. Interestingly, he does this with a very specific pattern of response in terms of frequency and intensity of the attempts. But I've also tried using a variable schedule of reinforcement. Even if he gives a good, solid high five, I don't always react, only reacting maybe every third time on average, sometimes waiting until after five solid high fives, other times giving him the response after only two.

Travis Mallett:

Sure enough, this also resulted in him focusing his high fives on trying to achieve good, solid ones, since those are the ones that yield a funny reaction. But the pattern of his attempts in terms of frequency and intensity is vastly different from the continuous schedule of reinforcement. You'll find out what the difference in his response is after our guest interview. Speaking of which, there are a lot of different schedules of reinforcement and they all elicit various patterns of responding. So we're going to need some help sorting through all this, and we're fortunate enough to have with us an expert on this topic, Dr. Hank Schlinger. Hi, Hank, and welcome to the show.

Hank Schlinger:

Thank you for having me on, I appreciate it.

Travis Mallett:

Great, I'm really excited to talk to you today. There are lots of different types of schedules of reinforcement and, honestly, it's a bit difficult to keep track of them all, so I'm really looking forward to your help in sorting through it. But before we get started, go ahead and introduce yourself and tell us a bit about your background and work.

Hank Schlinger:

My name is Hank Schlinger. I'm a professor of psychology at California State University. I previously directed the master's program in applied behavior analysis. Currently I'm director and coordinator of the undergraduate BCaBA program, so it's a certificate program for undergraduates in behavior analysis. Historically I have conducted research in basic learning processes, including schedules of reinforcement, and a lot of theoretical work in various aspects of psychology, including intelligence, consciousness, etc., all from a behavior analytic or scientific perspective. And I've published four books. Two are introductory psych books, one is a book on child development from a behavioral perspective, and my most recent book is a parenting book titled How to Build Good Behavior and Self-Esteem in Children.

Travis Mallett:

Excellent, and thanks again for joining us. As I mentioned, we're talking about schedules of reinforcement. For our listeners who might be new to the topic, could you provide a brief overview of what schedules of reinforcement are?

Hank Schlinger:

Sure. I think to begin with it's probably important to talk about what reinforcement is, because it's misunderstood, which may be the fault of people like myself in terms of disseminating it. But reinforcement is a basic law of behavior, and what reinforcement states is that the consequences of behavior determine the future probability of that behavior. And by consequences I don't mean bad things, because when I was growing up my parents would say, you misbehave, you're going to get the consequences. I just mean any result of behavior. The probabilities can either remain the same or they can increase or decrease depending upon the consequences. So, in general, a reinforcer is a consequence, or a stimulus that follows a response, and increases the probability of similar responses, responses that are similar to that response under similar circumstances. On the one hand, it's very simple and self-evident. On the other hand, a thorough understanding is not that simple. Schedules of reinforcement are simply rules by which reinforcers are delivered, so that's maybe a little simpler.

Hank Schlinger:

The idea came, like all great scientific discoveries, by accident. I think the story goes like this: B.F. Skinner, who was really the father of behavior analysis, who originally discovered the basic principles that other people built upon, was working in the lab with rats, and the weekend came and he realized that he didn't have enough food pellets for the rats. The stores were closed and he couldn't buy any more. So he had to make do with the pellets that he had, and he had to stretch them out, and by doing so he discovered that when he gave food pellets not for every single response, which we call a continuous reinforcement schedule, but for every certain number of responses, which I'll talk about as a fixed ratio schedule, something interesting happened. He did that initially just to save pellets, so that he didn't run out over the weekend, and by doing so he discovered some interesting phenomena, interesting patterns of behavior in his rats, and, like all great scientists, that led him in a different direction, which was basically investigating schedules of reinforcement.

Travis Mallett:

Excellent. That's an interesting origin story for the topic, though I'm not surprised to hear B.F. Skinner brought into this, since I think you're now the fifth guest in a row to mention him. But let's go through this systematically, one at a time, and start with fixed ratios. What is a fixed ratio, and how does that differ from interval schedules?

Hank Schlinger:

So Skinner discovered that you can deliver a reinforcer after a certain number of responses, and that would be the ratio of responses to reinforcers. So one reinforcer to one response, that would be an FR1, fixed ratio one, or continuous reinforcement. Or you could have one reinforcer for every 10 responses, which would be a fixed ratio 10. So a fixed ratio, then, is the ratio schedule in which reinforcement occurs after a set or fixed number of responses. So FR10, every 10th response produces a reinforcer. Now all schedules of reinforcement produce certain patterns and rates of responding. The rates can vary from fairly low to extremely high, and by rate I simply mean number of responses over time. And the patterns are patterns that Skinner discovered because he used a device called a cumulative recorder, and basically what it did was record the animal's responses cumulatively, and that enabled him to see, moment to moment, how reinforcement affected the animal's behavior. So it was really like a microscope onto the behavior of his lab animals. So a fixed ratio schedule produces a very interesting pattern of response, and it produces this in all organisms in which it has been used, with one possible exception. Once the animal or human starts responding, they respond extremely quickly. When the reinforcer is delivered, they pause for a period of time. The pause is called a post-reinforcement pause, and all that means is that after reinforcement the animal will stop responding for a period of time, and that period of time during which the animal's not responding varies directly with the size of the upcoming ratio. So if it's a very short ratio the animal has to do, then the animal will only pause briefly. If there's a lot of work, a large ratio the animal has to complete, then the animal pauses longer. So the pause is simply a period of time with no responding. Now, the length of the pause is directly related to the size of the ratio, and there are other factors involved too, which I don't need to get into. For example, it differs depending on the species you use, how food deprived the animal is, how big the reinforcer is. There are a lot of other variables that determine the length of the pause, but the primary one for our purposes would be the size of the ratio. For example, a very small ratio like an FR1 or an FR5 or an FR10, depending on the animal, produces a very short pause. The pause is not related to fatigue and it's not related to food deprivation. It's related only to the size of the ratio. So, for example, let's talk about rats. For a rat, an FR10 produces a very short pause. An FR50, on the other hand, produces a much longer pause. The way to look at it is that it's a lot of work. So when the rat finishes the 50 responses and gets the reinforcer, you could look at it this way: the rat thinks to himself or herself, what do I have to do now to get to the next reinforcer? Oh, I've got to do 50 responses. I liked it much better when it was 10 responses. Obviously, the rat's not talking to him or herself like that, but that's what it looks like. And then they will eventually start responding, and once they do, they respond very quickly through their 50 responses for the next reinforcer.

Hank Schlinger:

And the relevance of fixed ratio schedules with the pause is what we humans would call procrastination. So when we have only a little bit of work ahead of us, we're less likely to procrastinate. When we have a lot of work ahead of us, interestingly, we're more likely to procrastinate. I see this in my students. When they only have a short quiz to study for the next day, they go home and study pretty quickly. When they have a big test the next day, other things become very reinforcing: washing the dishes, vacuuming the floor. But once they sit down to start studying, they study. So that's the fixed ratio schedule. I can't think of too many real-world examples with humans where fixed ratio schedules are used, except possibly in some industries where a worker has to complete a fixed or set number of things before they get a unit of pay. So there's your fixed ratio schedule.
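To make the rule concrete, here is a minimal Python sketch of a fixed ratio schedule. It is purely illustrative, not anything from the interview, and the class and method names are hypothetical: count responses and deliver a reinforcer on every Nth one.

```python
class FixedRatioSchedule:
    """Deliver a reinforcer after every `ratio`-th response (e.g., FR10)."""

    def __init__(self, ratio: int):
        self.ratio = ratio
        self.response_count = 0

    def record_response(self) -> bool:
        """Return True if this response earns a reinforcer."""
        self.response_count += 1
        if self.response_count >= self.ratio:
            self.response_count = 0  # the count restarts after reinforcement
            return True
        return False


fr10 = FixedRatioSchedule(ratio=10)
# Every 10th response is reinforced; the other nine are not.
print([fr10.record_response() for _ in range(20)].count(True))  # -> 2
```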

Travis Mallett:

Got it. So now the contrast to a fixed ratio is a variable ratio. What is a variable ratio schedule of reinforcement?

Hank Schlinger:

Okay, a variable ratio. Again, it's the ratio of responses to reinforcers. In this case, instead of the ratio being fixed, the ratio is variable. It's based on an average number of responses. So with a fixed ratio, a fixed ratio 10 would be: every 10th response produces a reinforcer. A variable ratio 10 means on average the 10th response produces a reinforcer. So it could be one response, it could be 20 responses, 5, 30, as long as when you add them all up and divide by the number, you get 10.

Hank Schlinger:

The difference between the variable and fixed ratio is remarkable. The variable ratio, for our purposes today, eliminates the post-reinforcement pausing, and the reason it does so is because the reinforcer is unpredictable. The animal doesn't know when the next reinforcer will come. So if they complete a long ratio and get a reinforcer, the very next reinforcer could be had by just responding one time.

Hank Schlinger:

By the way, all of these schedules are called intermittent schedules. One of the main differences between the variable and fixed ratio schedule is that the variable ratio schedule produces very persistent responding. That means almost no post-reinforcement pausing. And we see variable ratio schedules, or something like them, in slot machines and other types of gambling, and iPhones for that matter. There's been a lot of discussion lately, especially by a guy at Google, I forget his name, who's talked about how various apps for phones are programmed to keep you on the phone, and they're programmed according to something like a variable ratio schedule, because you never know when the reinforcer is going to come. And that's the case with slot machines. People sit at slot machines where 99% of their responses produce nothing, and yet they sit there for hours responding and putting money in, and that's because their behavior is on something very close to a variable ratio schedule. It generates very persistent responding. People persevere a lot on variable ratio schedules, and mostly that's because of the unpredictability of the reinforcer. There are many other examples of variable ratio schedules in real life.
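As a companion to the fixed ratio sketch above, here is a hedged illustration of a variable ratio rule. It uses the common random-ratio shortcut, reinforcing each response with probability 1/N so the ratio averages N; that implementation choice is an assumption for illustration, not something specified in the interview, but it shows why the next reinforcer is unpredictable.

```python
import random


class VariableRatioSchedule:
    """Reinforce each response with probability 1/mean_ratio, so the number of
    responses per reinforcer averages `mean_ratio` (a random-ratio
    approximation of a VR schedule, roughly how slot machines behave)."""

    def __init__(self, mean_ratio: float, seed=None):
        self.mean_ratio = mean_ratio
        self.rng = random.Random(seed)

    def record_response(self) -> bool:
        return self.rng.random() < 1.0 / self.mean_ratio


vr10 = VariableRatioSchedule(mean_ratio=10, seed=1)
outcomes = [vr10.record_response() for _ in range(10_000)]
# Roughly 1,000 reinforcers over 10,000 responses, but you never know
# which particular response will pay off.
print(sum(outcomes))
```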

Travis Mallett:

I'm curious if there's any sort of burnout with that persistence over time. Like if I'm a manager and have dialed into a highly effective variable ratio schedule of reinforcement, is there a risk of burnout due specifically to this schedule of reinforcement? I get that there might be other factors that influence burnout, but I'm curious specifically about this schedule of reinforcement.

Hank Schlinger:

No, there's no burnout, and for the evidence, just go to a casino and look at people sitting in front of slot machines, or gambling, or being on their phones for hours on end. There is what you could call burnout on a fixed ratio schedule if you make the ratio high enough, fast enough. If an animal is responding, let's say, on a fixed ratio of five, and you move them to a fixed ratio of 50 immediately, they'll never make it to 50, right? So we call that extinction, so their behavior will stop. But you could get them up to 50 gradually if you move from 5 to, let's say, 8, to 12, to 20, to 30. You can get them to where they can respond at 50. But if you go immediately from 5 to 50, then, yes, burnout will occur. They will stop responding. That is very unlikely to occur on variable ratio schedules.

Travis Mallett:

Extinction, for our listeners who might have missed it, is something we dedicated the entirety of episode 14 to, with Dr. Michael Domjan. Now, next on the list is fixed interval schedules. What are some of the characteristics and effects of fixed interval schedules?

Hank Schlinger:

So ratio schedules are defined by the ratio of responses to reinforcers. Interval schedules depend upon two things: one, an interval of time must pass, but two, a response must occur. And there's a misunderstanding about interval schedules, and I see this in my students all the time. They think that on an interval schedule you just get a reinforcer after a period of time. But that's not true. You have to make the desired response.

Hank Schlinger:

So in a fixed interval schedule the reinforcer is delivered for the first response after a fixed or set amount of time. So in a fixed interval one-minute schedule, for example, one minute has to pass and then a response has to occur. And it doesn't matter if any responses occur during that interval. You only have to respond once after the interval. So what I ask my students is, what's the minimum number of responses that must occur on a fixed interval schedule? And they usually correctly say one. But almost no animal ever just makes one response.

Hank Schlinger:

So after animals have been on a schedule like this for a while, there's a very consistent pattern that develops. It's called a scallop, and what that looks like is, after the reinforcer occurs for the response, there's a period of time which looks like a post-reinforcement pause, a period when the animal doesn't respond. But then they respond slowly, and then faster and faster, until they're responding at breakneck speed right before the interval ends and the reinforcer is delivered. Well, only one response is required for reinforcement. So why is the animal engaging in all that other responding? That's a phenomenon which Skinner called superstitious behavior, because the animal is engaging in that very high rate of behavior right when the reinforcer is delivered. The reinforcer is not dependent on that, but it's correlated with it. So you get this scallop pattern. And the reason that happens is that early on, when the animal is responding on a fixed interval schedule, they respond immediately after reinforcement, which they all do, every animal will do that: you respond, you get food, and then you start responding again, but there's no food there because the 60-second interval hasn't timed out yet.

Hank Schlinger:

Eventually, responses right after reinforcement stop. That's due to a process called extinction. But since animals can't tell time as well as we can (they don't have little watches), they don't know when the 60 seconds is up. So they try: is it up yet? No, not yet. What about now? Now? Now? No. And so by the time it is up, they're responding quickly, like, where is that reinforcer? And then the reinforcer occurs and all that responding gets adventitiously, or accidentally, reinforced. That's why you get that kind of pattern of responding.
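Here is a minimal sketch of the fixed interval rule Dr. Schlinger describes: a reinforcer is earned only by the first response made after the interval has elapsed, and responses made earlier do nothing. The names and timing details are illustrative assumptions.

```python
class FixedIntervalSchedule:
    """Reinforce the first response made after `interval` seconds have elapsed
    since the last reinforcer (e.g., FI 60 s). Earlier responses do nothing."""

    def __init__(self, interval: float, start_time: float = 0.0):
        self.interval = interval
        self.last_reinforcer_time = start_time

    def record_response(self, now: float) -> bool:
        if now - self.last_reinforcer_time >= self.interval:
            self.last_reinforcer_time = now  # a new interval starts after reinforcement
            return True
        return False


fi60 = FixedIntervalSchedule(interval=60.0)
print(fi60.record_response(now=30.0))  # False: too early, the response is "wasted"
print(fi60.record_response(now=65.0))  # True: first response after the interval
print(fi60.record_response(now=70.0))  # False: a new 60 s interval has begun
```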

Travis Mallett:

Of course, the contrast to fixed interval schedules is variable interval schedules. What are variable interval schedules?

Hank Schlinger:

So a variable interval is to a fixed interval as a variable ratio is to a fixed ratio. Instead of a fixed interval of time, there's an average interval of time. A fixed interval one-minute schedule means the first response after every minute will be reinforced. Under a variable interval schedule, it's the first response after an average interval of time. And just like variable ratio schedules, variable interval schedules generate very persistent responding. The difference is variable ratio schedules generate fairly high, persistent responding, while variable interval schedules generate fairly low, persistent responding. And if you think about the way the schedules are programmed, it makes sense. Because on a variable ratio schedule, and a fixed ratio for that matter, the animal really controls when they get the reinforcer. So if you're on a fixed ratio of 10, you can respond as quickly or as slowly as you want. You determine when the reinforcer comes; you have to make 10 responses. The same is true on a variable ratio schedule. On a variable interval schedule, the animal doesn't control that. It depends on whether a reinforcer has been set up by a clock. Here's a good example: fishing.

Hank Schlinger:

People go out and fish. You don't catch fish depending on how many times you throw your line in the water. That's not how it works. It's not every 10 times I throw my line in the water I get a fish, or on the average of every 10 times. It depends whether there are fish there and whether there are fish swimming underneath your boat or wherever you are. That translates into time. If time has passed and a fish happens to be there when you throw your line in the water, and of course the fish is hungry or whatever, then the fish might bite and so it's unpredictable. You don't know when the fish are there, unless, of course, you can see in the water. But you don't know when the fish are there, so that keeps you doing that time and time again.

Hank Schlinger:

Checking email is another example. If you don't have an email program that alerts you when your emails come and you check your email yourself, this is what I find: I could be working on a paper, or creating a test or something, and every few minutes I'll go check my email. Very persistent. I don't check it quickly, it's not fast, it's very slow, very persistent behavior. And the odd thing about that is, what's the reinforcer? For me, the reinforcer is I get an email, but most of the emails I get are junk. The chances of getting a really valuable email, like the one I got from you, for example, that's a pretty low probability. But there I am, checking just like the rat pressing the lever, at a very low but very steady and persistent rate.
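And a companion sketch for the variable interval rule: the waiting time before a response can pay off is drawn at random around an average, which is why checking email or casting a fishing line pays off unpredictably and sustains slow, steady responding. The exponential draw is one common programming choice, assumed here purely for illustration.

```python
import random


class VariableIntervalSchedule:
    """Reinforce the first response after a randomly drawn interval whose
    average is `mean_interval` seconds (e.g., VI 60 s)."""

    def __init__(self, mean_interval: float, seed=None):
        self.mean_interval = mean_interval
        self.rng = random.Random(seed)
        self.last_reinforcer_time = 0.0
        self.current_interval = self._draw_interval()

    def _draw_interval(self) -> float:
        # Exponentially distributed intervals are one common way to program a VI schedule.
        return self.rng.expovariate(1.0 / self.mean_interval)

    def record_response(self, now: float) -> bool:
        if now - self.last_reinforcer_time >= self.current_interval:
            self.last_reinforcer_time = now
            self.current_interval = self._draw_interval()
            return True
        return False


vi60 = VariableIntervalSchedule(mean_interval=60.0, seed=2)
# Like checking email: steady, patient responding pays off at unpredictable times.
for t in range(0, 600, 30):
    if vi60.record_response(now=float(t)):
        print(f"reinforcer at t={t}s")
```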

Travis Mallett:

So one particular situation I'm interested in is verbal interactions, for example, how a manager might choose their words, like giving praise at particular moments or at certain times, to reinforce desired behaviors. Are there ways to use schedules of reinforcement to optimize results in our verbal interactions at work?

Hank Schlinger:

Yes, and not just verbal behaviors, but other kinds of behaviors in business and industry, if you understand reinforcement schedules. I assume that many managers would like their employees to respond fairly persistently when they're working, that is, not take a lot of breaks. So if you want them to respond in a fairly consistent, persistent kind of way, then a variable schedule is obviously your choice. The problem is translating from the basic animal laboratory to a complex human setting like a business or an industry.

Hank Schlinger:

Certainly, all of our verbal behaviors are reinforced by listeners, and they're reinforced on some kinds of reinforcement schedules. It's not clear what they are all the time, but for example, as I talk to you I can see you nodding. Or if I didn't have video on, then you might ask me a question. But you don't do that for every word I say, right? And sometimes if I started talking about things that were unpleasant or uncomfortable for you, you might stop nodding or looking at me or responding. So all of our verbal interactions are reinforced by the people that listen to us, and generally they're not reinforced on continuous reinforcement schedules. They're probably reinforced on variable reinforcement schedules. So I think it's not just verbal behaviors in the workforce, but also whatever the behaviors are that are required for that particular setting, that the employers want the employees to carry out.

Travis Mallett:

Now, across these schedules of reinforcement, I get the sense that variable ratios seem to produce the best performance, and actually there are people in the field who argue that certain schedules of reinforcement yield higher or lower probabilities of reinforced behavior, which might be interpreted as showing that one is better than the others. Do you believe that that's a constructive way to rank the overall effectiveness of different schedules of reinforcement?

Hank Schlinger:

Not really. I think if you understand the rates and patterns that each schedule generates, then you could decide, if, for example, you wanted some individual to respond really quickly in a short burst and you didn't mind if they took a pause afterwards. The variable ratio is easier to program, because on a variable interval schedule you have to program a reinforcer after an average amount of time. If you're explicitly programming schedules of reinforcement, it's easier to program a VR schedule than a VI schedule. But I suppose that in normal everyday interactions, either of those schedules is probably at play in human interactions.

Travis Mallett:

I feel like we have a small philosophical tangent brewing underneath this conversation, more of a moral or ethical concern that might underlie all of our studies of organizational behavior and psychology in general, especially when we start to get very quantitative about this: controlling the schedule of reinforcement to elicit a particular response, drawing diagrams, figuring out, okay, here's the optimal schedule of reinforcement to get the pattern of behavior I want. Are we in danger of just treating people as machines to be programmed? Just figure out the optimal inputs to get the outputs we want? And if so, could that backfire in other ways? For example, I don't really like the thought of being treated as a machine that someone else can program to get what they want. So I wonder if this aspect of management could backfire somehow.

Hank Schlinger:

I think the way to think about reinforcement is this: it's like the way that we would think about other laws of science. They're at play whether we want them to be, or believe them to be, or know them to be, or not. Reinforcement is at play in all human interactions. You're reinforcing my behavior, I'm reinforcing your behavior, to the extent that we continue to converse with each other. Now, I'm not explicitly or consciously trying to reinforce your behavior, and I don't think you are trying to do mine that way either. And those things are happening in the workplace too, between supervisors and employees or managers and employees, whether we know it or not, or whether we like it or not. The question is this: in a workplace, do employers or managers want to maximize the productivity of their employees? And I think the answer is almost generally yes, because that maximizes profits, et cetera. So what's the way to do that? The way to do that is to make sure that you acknowledge the behaviors that you want in your employees, like I do with my son. So when my son does something that I want him to do, I'll praise him or I'll allow him access to his iPad or to a TV show. And I think the difference is maybe the difference between what we call positive versus negative reinforcement.

Hank Schlinger:

Positive reinforcement generally means that you get something for doing something. In the workplace, what do you get? You typically get money, right, because you're paid for what you do. But if you talk to a lot of people, I think what people really like and what they'll work for is just being acknowledged by their manager. Hey man, I noticed that you did this. What a great job you did. That's just terrific. That's just great that you did that.

Hank Schlinger:

That's free, and it's amazing how few managers and supervisors actually acknowledge what their employees do, what they're doing, what they want them to do. And so I don't know that that has to be programmed according to a certain schedule of reinforcement. Obviously, if you do it every single time, then it might be less effective, because then if you don't do it, it's like, well, where is it? So that is generally probably better programmed on something like a VR schedule: you don't have to tell them every time, but every few times, every 10 or 15 times on average. The reinforcers don't have to be tangible, they don't have to be monetary, they can just be acknowledgement. And I think that a lot of employees will complain that when they do their job the best that they can, no one ever notices, no one ever points it out, no one acknowledges it. And that's free, it doesn't cost the manager anything. In that sense, you're consciously and intentionally using reinforcement, but I don't think it reduces your employee to anything. I think it makes them feel more valued and more worthy, et cetera.

Travis Mallett:

That makes me think of a manager I knew, and this was more of a reminder for him to give positive feedback, since I think it's, as you said, really easy for managers to just focus on the problems that need to be fixed and not give the praise when it's due. But it seems related to the schedules of reinforcement. He would put five pennies in his right pocket, and then, throughout the day, whenever he would acknowledge a positive thing an employee had done, he would keep track by moving a penny from his right pocket to his left one, aiming to have given his five compliments or praises by the end of the day. And when I first heard about that, I thought this sounds really useful and very intentional, but it doesn't really track any specific schedule of reinforcement. It might be a good way to engage in some positive psychology, but I'm wondering if it lacks some of the effectiveness because it isn't targeting specific behaviors at specific times to intentionally elicit a specific pattern of response. It's a bit more haphazard. Curious what your thoughts are on this?

Hank Schlinger:

I think that, based on the definition of reinforcement that I gave at the beginning, you have to look to see whether it actually increases the productivity of your employees. Just doing something like that may seem like it's a good idea, but if it doesn't increase the productivity, if you don't see a change in their behaviors, then it's not a reinforcer, then you're not doing reinforcement, and that's an important feature about reinforcement. What some people think might be a reinforcer may not be one. Maybe one employee doesn't really care if you praise them and acknowledge them, maybe there's something else you could do. And so I think there's an intentionality and as a parent, you want to intentionally make sure you acknowledge and reinforce behavior of your kids that you like and don't do it when they are engaging in behavior you don't like. And the same thing is true in the workplace. But you have to know that what you think is a reinforcer actually is increasing the behavior that you want to see, and that's when the quantitative part helps.

Hank Schlinger:

You take data: here's the productivity of our employees, or one employee or whatever, and it's pretty low. Now we're going to institute this procedure where we're going to provide praise or some kind of acknowledgement, and you continue taking data: are these behaviors increasing or are they not? And if they're not, stop doing it and do something else, and then keep taking data so you know what works and what doesn't. And there are a number of people in my field who do organizational behavior management, who do this kind of thing and have done it for decades in various businesses and industries, and it's extremely effective.
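For what it's worth, here is a tiny illustrative sketch of the data-driven check Dr. Schlinger describes: record a baseline rate, introduce the acknowledgement procedure, and see whether the rate actually goes up. If it doesn't, what you offered was not functioning as a reinforcer. All numbers and names below are hypothetical.

```python
def mean_rate(daily_counts):
    """Average responses (e.g., completed tasks) per day."""
    return sum(daily_counts) / len(daily_counts)


# Hypothetical daily task counts for one employee.
baseline = [11, 12, 10, 11, 12]      # before any acknowledgement procedure
with_praise = [13, 15, 14, 16, 15]   # after introducing occasional, unpredictable praise

# If the rate does not go up, the "praise" was not actually a reinforcer
# for this person, and you try something else while continuing to take data.
print(f"baseline: {mean_rate(baseline):.1f} tasks/day")
print(f"with acknowledgement: {mean_rate(with_praise):.1f} tasks/day")
```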

Travis Mallett:

Excellent, thank you. We're about out of time, so I just want to say thank you for joining us and teaching us about schedules of reinforcement. Before we sign off, can you tell our listeners how they can find you and your work?

Hank Schlinger:

I'm all over the internet, mostly, I think, in good, positive ways. They can check me out on Google Scholar at California State University, Los Angeles. All my books are on Amazon. My most recent book has a website, www.buildgoodbehavior.com. And if anybody wants to email me, they can find my email address through Cal State LA, and I'm happy to hear from people.

Travis Mallett:

Perfect and thank you very much.

Hank Schlinger:

Thank you, Travis, I appreciate being on.

Travis Mallett:

Wow, that was a lot of information. So many different ways to give reinforcements and get different responses. So did we get an answer to our question: is there a best schedule of reinforcement? Actually, when I was writing this episode and preparing for it, I found contradictory answers. Some organizational behavior textbooks, along with other online resources, claim that variable ratio schedules are the best or the most effective, and I wrote my interview questions with this assumption in mind. Dr. Schlinger challenged that assumption and pointed out that it's more complicated than that, as we've come to expect from our journey on this show.

Travis Mallett:

Whether one schedule of reinforcement is better or worse depends on the situation and the pattern of responding you're hoping to achieve. But you may still be wondering about how my son responds to the different schedules of reinforcement. When we're playing, when I use continuous reinforcement, giving a big, dramatic reaction for every big high five, he responds by giving high fives in an almost consistent rhythm. They're solid high fives, but nothing too dramatic. But when I use a variable ratio reinforcement schedule, giving him that funny reaction every three times on average, he furiously speeds up giving solid high fives as fast as he possibly can, and he laughs far more hysterically when he gets the anticipated response than when he gets it consistently every time. It's quite a dramatic difference in his reaction between the two schedules of reinforcement and I can confidently say, based on this very unscientific experiment, that perhaps if we're intentional with how we dole out rewards or other reinforcers, we might find ourselves getting closer to what we're looking for and we might just find ourselves with more enthusiastic and happy employees as a result.

Travis Mallett:

This week, think about some of the rewards or reinforcers you offer in your workplace. This could include praise, bonuses, time off, public recognition or other incentives. What kind of schedule of reinforcement is currently in place, if any? Are we just like Dwight, giving out random rewards haphazardly? Could we experiment with a different schedule of reinforcement to try to get better results? As usual, these management theories are intended to give you tools to adapt and mold to your specific circumstances. So with that, thank you for joining us on another episode of the Management Theory Toolbox. Stay tuned for our next episode, where we broaden our horizon from operant conditioning and talk about how social interactions impact employee learning. Until then, keep learning, keep growing, and keep adding to your management theory toolbox. Thank you.
