How a simple bit of trust can foster worker honesty
But for trust to be effective, organisations must depart from the traditional control systems that are still common in today’s business world
In Hong Kong, we like to think of ourselves as honest, and by and large we are. But both here and around the world there is a problem. Keeping honest people honest is not the easiest of tasks.
From fiddling expenses to taking home a laptop and “forgetting” where it has gone, most offices have seen this type of low-level dishonesty and managers have to deal with it on a daily basis.
So just how do you keep the “honest majority” honest in the face of temptations to give themselves a little perk?
The standard model of control is that organisations use a combination of carrot and stick to prevent their employees from pursuing dysfunctional or inefficient behaviours.
However, new research from the UNSW Business School suggests a simple, trusting approach with your staff can work well.
While there are always a few bad apples looking for loopholes to put their hands in the till, most people do not carefully plot ways to maximise their self-interest. Instead, most people are inherently honest and will feel uncomfortable committing acts of dishonesty.
As an example, consider this. Suppose you are walking down to Causeway Bay one afternoon and you spot a HK$100 note (US$12.82) on the ground – it looks like someone has carelessly dropped their money. There is no one else there to witness what you do next. What would you do?
For many of us, we will be tempted to pocket the money. However, we are also likely to feel a sense of discomfort if we do so – it does not “feel right” to take the money.
This feeling of conflict – what psychologists call cognitive dissonance – is the result of the inconsistency between our behaviour (taking money that is not ours) and our belief (that we are honest, and honest people do not take money that does not belong to them).
Anticipating the discomfort that comes with taking the HK$100, we may decide to leave the money where it is.
Drawing on this notion of cognitive dissonance, we decided to run an experiment to see whether organisations can reduce opportunism by highlighting the anticipated psychological discomfort associated with opportunistic behaviours.
What we did was examine the practice of self-certification: an informal, trust-based control mechanism whereby managers sign their names to accept responsibility for a decision, knowing there are no penalties attached because the information on which the decision is based is private and not monitored.
Consider a scenario – which many managers in Hong Kong must have seen – where an investment project in a portfolio is underperforming.
If the manager terminates the project, the company will be better off, because the resources allocated to it could be deployed more profitably elsewhere. However, the loss of face would be large. The manager would look incompetent in front of their superior, and it would damage their career prospects.
Consider the alternative of not drawing attention to this investment, so that no one in the company will know about its underperformance. This is a classic dilemma: what to do when personal economic interests conflict with company interests.
We tested this scenario and found that requiring participants to sign their initials next to their decision – an informal, trust-based control that we call “self-certification” – made them less likely to continue the unprofitable investment.
Importantly, although the signature itself does not change the economics of the decision, it “binds” the decision more closely to the participants’ self-identity and heightens the conflict between maintaining personal integrity and pursuing opportunism.
As a result, those who had to self-certify were less likely to continue an unprofitable investment for personal gain than their no-signature-required counterparts.
The lesson for all managers, therefore, is that this trust-based mechanism can work quite well: the mere signing of one’s initials to a request or statement considerably improves truthfulness.
This is nothing new. Self-certification is often used in activities that, while by no means trivial, are less than critical to organisational functioning, such as submitting claims for reimbursement of discretionary expenses or making absence-from-work requests.
It is quite common in universities, where self-certification has been used with conflict-of-interest disclosures and teaching-assessment reviews.
However, our research at UNSW did discover that self-certification might be a double-edged sword.
A small subset of participants decided to continue the unprofitable investment despite the self-certification requirement, and they became even more likely to escalate the investment as it deteriorated further in the next decision period.
Once the inner moral voice is overruled, the self-certification process increases participants’ tendency to commit further dysfunctional behaviours to justify their earlier decision.
A classic example of this comes from Singapore, where Nick Leeson tried to cover up unprofitable, unauthorised speculative trading by putting in more money to “trade his way out” of the losses.
Because his trades were certified by his own back office, they were effectively self-certified, and the result was the collapse of Barings Bank in 1995. He later justified his actions by saying he was trying to help the bank.
Second, the effect of self-certification disappears when it is combined with a formal monitoring system.
In a follow-up experiment, we told participants that their investment decisions were subject to a random internal audit by the company. In that scenario, the requirement to self-certify no longer affected participants’ decisions.
It appears that formal monitoring “crowds out” people’s internal drive to maintain their integrity; monitoring may have caused the participants to become more concerned with “will I get caught?” than “am I doing the right thing?”.
Our research made it clear that self-certification can make things worse once the inner moral voice has been overridden. Having self-certified a questionable decision makes it all the more important to justify it, in order to reduce the feeling of dissonance, and so you end up increasing your commitment to your action by repeating or escalating it.
Specifically, if a manager had continued a poor investment despite self-certification, they would feel the urge to continue this investment to demonstrate (if only to themselves) that it was not a mistake to have overridden their moral standard in the first place.
To make knowingly doing something questionable more “bearable”, the manager may also start to come up with other justifications. This helps explain why Leeson did what he did – he told himself he was helping the company, not acting out of self-interest.
However, our experiment shows that it is possible to design a control system to leverage individuals’ preferences for honesty in order to achieve goal congruence within organisations.
Even in the absence of intricate and potentially costly monitoring mechanisms, by combining a strong positive organisation culture with small “behavioural nudges” such as a self-certification process, organisations can bring out the best in their employees.
For trust to be effective, however, organisations must depart from the traditional control systems that are common in today’s business world.
The chief takeaway is that managers can carry on with self-certification, which works well in most circumstances provided confidentiality is guaranteed. However, self-certification is not effective when combined with formal monitoring.
Mandy Cheng is the head of school of accounting at the UNSW Business School. Julian Lorkin also contributed to this article