Three myths about dishonest behaviour

Reports of problems with the turnstiles keep coming in. You, the Compliance Officer of a large company with facilities in several Brazilian states, no longer know what to do.

In recent months, the Audit team found that a large number of employees had the habit of circumventing the company’s clock-in system, clocking in a little later and leaving a little earlier. This meant a considerable loss for the organization, one that required a quick response from the Compliance area.

The solution seemed clear: the system of controls needed to be modernized and improved, and the employees retrained. After all, the “old system” was easy to bypass.

Two months later, it was time to find out whether the changes had had the expected effect. The Compliance team was confident about the new measures: the new Compliance training had been well attended, and all the turnstiles had been replaced with a much more modern fingerprint identification system.

Everything went right! The data indicated that the employees were no longer cheating the system. The problem was solved. Everyone was satisfied.

Some time later, however, new rumors emerged that some employees had started using silicone finger molds, the kind easily purchased on the internet, to clock in for their colleagues. A brief investigation confirmed that this was indeed happening.

The response from the Compliance team was not long in coming: cameras were installed, employees were fired, new training sessions were held, and awareness campaigns were conducted. Wouldn’t it be time to install an even more modern system, such as facial recognition?

We have finally come to the present day. Your perplexity makes sense: you did everything that the traditional manuals, your intuition, and logic indicated, but things didn’t turn out the way you expected. You did everything “right”, and yet things “went wrong”.

Why did this happen?

The answer lies in the fact that our intuitions about how people make decisions are often unrealistic. Even without realizing it, we tend to design compliance measures that would only work for a very specific type of person. More controls and more information would be appropriate measures if, and only if, their targets were strictly rational people with an unlimited capacity to pay attention and assimilate information, who always needed external incentives to act honestly.

But how do people really behave?

We can all picture one or another very rational person who makes decisions in a “cold and calculating” way. However, when we look at the great majority of people, it becomes clear that these “cold and calculating” types are the exception. The reality – and this is a fundamental point for thinking about the effectiveness of compliance measures – is that most people are emotional and easily distracted, especially when making complex decisions under the haste and pressure typical of organizations.

Next, we will present three myths about dishonest behaviour that arise because we treat the exception as if it were the rule, a minority as if it were the majority. Our objective is not to give an in-depth explanation, as in the book MUITOS[1], but simply to present them.

Understanding these myths may be the key to diagnosing why many compliance measures do not work, and then to thinking about what we can do to make them more effective.

MYTH 1: “Crime pays”

Every time we consider interventions to change behaviour, even without stopping to reflect on it, we adopt a set of assumptions about how the people targeted make decisions.

We can, for example, begin by assuming that people take reciprocity very much into account, or that they are strongly influenced by what they think others are doing. The point is that we will always create interventions adapted to, and appropriate for, the kind of people we have in mind.

It so happens that, in general, we tend to think of a very particular type of person when developing our compliance measures. Rather than seeing people as social, reciprocal beings, among many other possibilities, we assume they are moved almost exclusively by calculations of pros and cons – that is, that they will opt for the most advantageous alternative in each situation.

And why do we think like that?

It is important to realize that our perceptions result from a combination of what we learn in books and courses with our intuitions and life experiences.

If we look closely, books and manuals in several areas of the Social Sciences, such as Law or Administration, are strongly influenced by the assumptions of Rational Choice Theory. From this perspective, people possess full rationality and have complete, transitive, and continuous preferences, among other characteristics that we typically associate with “cold and calculating” decision making.
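For readers curious about what “complete, transitive and continuous” mean formally, a standard textbook formulation is the following (our rendering, not one given in the original), where $\succsim$ denotes the agent’s preference relation over alternatives $x, y, z$:

$$\text{Completeness: } \forall x, y:\; x \succsim y \ \text{ or } \ y \succsim x$$
$$\text{Transitivity: } x \succsim y \ \text{ and } \ y \succsim z \;\Rightarrow\; x \succsim z$$
$$\text{Continuity (one common formulation): } x_n \to x \ \text{ and } \ x_n \succsim y \ \text{ for all } n \;\Rightarrow\; x \succsim y$$

Together, on suitably well-behaved spaces, these axioms are what allow preferences to be represented by a utility function that the agent then maximizes – the formal core of the “cold and calculating” picture.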

It so happens that Rational Choice Theory is not, and was never intended to be, a descriptive theory of how people make decisions. It is, in fact, a set of assumptions that aims to simplify reality and allow the construction of predictive models in many different areas. Rational Choice Theory therefore serves as a benchmark for how we would make decisions if we were rational agents – useful for thinking about public and organizational policies in some contexts, but not as a description of how people actually decide.

As if the strong influence of Rational Choice Theory in academic circles were not enough, the perception that people make decisions in a “cold and calculating” way also finds support in common sense, in the way we intuitively explain other people’s actions.

In our hypothetical example, if we ask why people bypassed the turnstiles, we easily come up with incentive-related reasons (e.g., the person is not afraid of being caught, or is undeterred by the size of the punishment) and rarely think of issues outside this area (e.g., the person may feel that the behaviour is common and acceptable in that context, or that it is fair compensation for some injustice suffered at the hands of the employer).
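To make this incentive-based reading concrete, here is a stylized version of the underlying calculus, in the spirit of Gary Becker’s economic model of crime (the notation is ours, introduced purely for illustration). A perfectly rational employee bypasses the turnstile whenever

$$B > p \cdot F,$$

where $B$ is the benefit of leaving early, $p$ is the perceived probability of being caught, and $F$ is the cost of the punishment if caught. Under this logic, compliance has only two levers – raise $p$ (more controls, better detection) or raise $F$ (harsher sanctions) – which is exactly what our hypothetical Compliance team kept doing.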

What can we do, then?

The first point is to understand that, although we may know the odd homo economicus out there, and although all of us may reason this way from time to time, this is not a realistic way to describe human behaviour.

What the Behavioural Sciences show us is a very different scenario: first, we are not always rational, predictable, or selfish; second, factors other than typically economic ones influence decision making, namely cognitive factors (our heuristics and biases), social factors (the influence of social norms), and contextual factors (the effects of small changes in the choice architecture).

MYTH 2: “It’s the bad apples that are the problem”

We often hear that there are honest and dishonest people, and that integrity problems will be solved if we can identify and neutralize dishonest people.

Is that really the case?

No. The problem is not the few “bad apples”, but the vast majority of ordinary people, like you and me, who consider themselves to be honest but commit ethical irregularities all the time.

Let’s return to our hypothetical case of people bypassing the turnstiles. Imagine you are watching the footage at the exact moment a person uses a silicone finger to beat the system. The person looks both ways, uses the fake finger, and, with a “straight face”, proceeds to his workstation. It’s hard to contain our bewilderment or anger: “How can someone have such bad character?”

As if that weren’t enough, hours later this same person posts on his social networks that he is against corruption and writes about the importance of teaching his children ethical values. We therefore think: “It has to be a crazy person, maybe a psychopath or a serial killer in disguise.” Most likely, it is none of these.

The great contribution of the Behavioural Sciences to the psychology of dishonesty is the realization that people manage to reconcile what should be irreconcilable: acting dishonestly while perceiving themselves as honest. This is possible, in general, for two reasons: (i) ethical blind spots, that is, contexts in which we commit ethical deviations without being able to identify the moral dilemma in our action; and (ii) the mechanisms of rationalization we use to justify our actions.

Therefore, although we may think this is a dishonest person – one who knows it and doesn’t care – it is most likely a person who considers himself honest, but who (i) either doesn’t realize he has done something reprehensible or (ii) realizes it, but has used his creativity to lessen his psychological discomfort and sense of guilt.

The important point here is that, with few exceptions, such as people on the psychopathic spectrum, we all consider ourselves honest – yes, even that corrupt politician on the front page of the newspapers. And, perhaps most importantly, considering oneself honest is no guarantee that one will stop committing ethical deviations.

The truth is that, because of ethical blind spots and mechanisms of rationalization – not to mention our enormous difficulty in identifying our own mistakes – we, the “Many” of the book MUITOS, can always be the “bad apple” in other people’s eyes.

MYTH 3: “The more controls, the better”

We find it very difficult to perceive the side effects that excessive controls have on people’s motivation.

The compliance professional is often faced with a dilemma: on the one hand, he or she needs to implement sufficiently strict controls to prevent all kinds of deviations; on the other hand, he or she knows that the implementation of these controls has a negative impact on employee well-being and motivation. 

It doesn’t have to be that way.

The Behavioural Sciences show us that this is a false dilemma. The compliance professional doesn’t have to choose the “lesser of two evils”, but can instead reconcile the best of both worlds: create controls tough enough to deter the minority of ill-intentioned characters, but in a way that doesn’t undermine the satisfaction or diminish the motivation of the vast majority of people, who already intended to act in the right way.

To understand this point, we need to understand why controls cause so many problems. The first step is to recognize that there are different types of motivation and to understand how they interact.

People have, broadly speaking, two types of motivation. They can be moved to perform an activity by a more autonomous kind of motivation, which comes “from within” – the so-called intrinsic motivation – or by a more controlled kind, which comes “from the outside” – the so-called extrinsic motivation.

We are usually autonomously (or intrinsically) motivated to perform activities we find interesting or important, such as learning a new instrument – or acting honestly in everyday life. That is, we are motivated to perform the action regardless of whether there is an economic or reputational incentive.

On the other hand, for activities that we don’t find interesting or relevant, something different is required. In these cases, we need extrinsic incentives, such as the possibility of a fine, a reward, or the threat of punishment, to become – and stay – motivated to perform the activity.

The problem arises when we mix the two types of motivation – when we add extrinsic incentives (e.g., a control) to activities that people already perform autonomously because they are intrinsically motivated.

Intuitively, we might think this is a perfect combination. After all, people now have double the motivation they had before: intrinsic motivation (which they always had) plus extrinsic motivation (promoted by the external incentives).

However, this is not quite how it plays out in practice. The evidence from the Behavioural Sciences shows that adding external incentives may harm not only people’s performance – the phenomenon known to economists as the crowding-out of intrinsic motivation – but also their satisfaction and well-being in the medium and long term – the phenomenon known to psychologists as the undermining effect. In addition, there is the perverse effect of making people’s performance dependent on the incentives being kept in place.
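A stylized way to picture the difference (again, the notation is ours, purely for illustration): the intuitive “double motivation” view treats total motivation as a simple sum, while the crowding-out findings suggest that an extrinsic incentive $I$ also erodes the intrinsic component:

$$M_{\text{total}} = M_{\text{int}} + M_{\text{ext}} \quad \text{(intuitive view)}$$
$$M_{\text{total}} = M_{\text{int}}(I) + M_{\text{ext}}(I), \qquad \frac{\partial M_{\text{int}}}{\partial I} < 0 \quad \text{(crowding-out)}$$

When the erosion of $M_{\text{int}}$ outweighs the gain in $M_{\text{ext}}$, adding the incentive leaves people less motivated than before – and, worse, dependent on the incentive’s continued existence.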

Why does this happen?

The problem arises because the two types of motivation do not go well together. Intrinsic and extrinsic motivation, instead of adding up, or just coexisting, may end up cancelling each other out.

What we observe is that the addition of external incentives ends up corrupting the way a person perceives the activity. In the case of compliance controls aimed at dissuading ethical deviations, the risk is that the recipient of the control begins to perceive ethical questions (something non-negotiable in his private life as a matter of principle) as business or economic questions, subject to a weighing of pros and cons.

Therefore, the real challenge for the compliance professional is not choosing between more controls and more motivation, but finding a way to reconcile the two: creating controls that are strict, yet do not generate so many side effects on the motivation and well-being of their recipients.

It can be done! And the way is to adjust control measures to people’s so-called Basic Psychological Needs for Autonomy, Competence, and Connection. A topic for a future article.