Posts Tagged ‘cognitive bias’

Risk and Creeping Conservatism – The Death of a Company

November 16, 2010

Some readers have asked me to expand a bit on what I meant by “conservatism” as part of what leads to corporate death – they were worried that I meant that their political views were being cast in a negative light. While it would certainly be interesting to test whether political Conservatism is related to risk aversion, in decision-making the term “conservatism” simply denotes risk aversion – and risk aversion winds up producing paralysis and some very bizarre behaviour under stress.

It’s all about the price of eggs

Some years ago Daniel Putler of the USDA noticed that customer behaviour towards egg prices was asymmetrical, in what he called a “reference price effect” (Putler 1992).
When egg prices rose, demand dropped, as predicted by standard economic theory. What standard theory could not explain was that when egg prices subsequently dropped, purchasing did not respond in equal measure – people reacted more strongly to the price rise than to the price drop.

This response asymmetry turns out to be the result of a deep-seated, probably evolutionary, cognitive bias towards risk. As a species, the behavioural economics theory goes, we are more attentive and react more strongly to risk than to reward, and this plays out whenever we have already set an expectation or reference point (Chen, Lakshminarayanan et al.; Tversky and Kahneman 1991; McDermott, Fowler et al. 2008).
It’s easy to imagine how this could develop evolutionarily – the sound of tall savannah grass moving might be just the wind, it could be somebody bringing us a nice melon, or it might be somebody or something creeping up on us. If we typically reacted to it as a threat and were wrong, we would just burn a little energy and appear jumpy. If, on the other hand, we typically assumed it was something nice and were wrong, we would more often end up as lunch.

Jumpy but living people leave more offspring than relaxed but dead people.
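
The asymmetry can be given a rough shape. Tversky and Kahneman model it with a reference-dependent value function that is steeper for losses than for gains; the sketch below is a minimal Python illustration, using the parameter estimates commonly attributed to their later work (α ≈ 0.88, λ ≈ 2.25) – values assumed here for illustration, not figures from the egg study.

```python
# Minimal sketch of the prospect-theory value function: outcomes are
# judged relative to a reference point, and losses loom larger than
# gains. Parameters alpha=0.88 and lam=2.25 are the commonly cited
# Tversky & Kahneman estimates, assumed here purely for illustration.

def subjective_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Perceived value of a gain (x > 0) or loss (x < 0) from the reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A one-unit price rise (a loss to the buyer) stings roughly twice as
# much as a one-unit price drop pleases:
print(subjective_value(-1.0))  # -2.25
print(subjective_value(+1.0))  # +1.00
```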

Nightmare Auctions

There are some amusing experiments that show how this works in everyday life. One is the Bazerman auction, in which loss aversion produces non-rational escalation of commitment (Bazerman and Neale 1992; Bazerman 2001). Prof. Max Bazerman has routinely carried out the following piece of trickery on unsuspecting students.

Bazerman offers an amount of money from his wallet for auction, say $20. The rules of the auction are atypical but fairly straightforward: bids must rise in increments of a set minimum ($1), and the winner is, obviously, the person with the highest bid.

So far so good.

The payout, however, is a little different – the runner-up also pays their own bid, but only the winner gets the prize.

Behaviour follows a typical pattern: bids come in thick and fast until they approach a significant proportion of the prize, at which point most players drop out, leaving the leader and runner-up alone in the game.
Bids slow but keep climbing until the fateful point is reached where the next bid will be $21 for a $20 prize. At this point, rather than capitulate, the typical outcome is further furious bidding as each player ups the ante and tries to avoid the loss – in many trials reaching bids of $180 for the $20 prize.
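
A toy simulation shows why the trap closes. The decision rule below is my own simplification (the trailing bidder raises whenever one more dollar looks cheaper than forfeiting the standing bid), not Bazerman’s actual protocol, and the $180 cap is just a stopping rule chosen to match the outcomes mentioned above:

```python
# Toy model of the two-bidder endgame in a "both pay" auction: a $20
# prize, $1 increments, and both top bidders pay their final bids.
# At each step the trailing bidder compares quitting (forfeit the
# standing bid for nothing) with raising (one more dollar for a chance
# at the prize). Raising always looks locally cheaper, so bids escalate
# far past the prize; the $180 cap is simply a stopping rule here.

PRIZE, STEP, CAP = 20, 1, 180

leader_bid, trailer_bid = 1, 0  # opening state: one $1 bid on the table
while leader_bid < CAP:
    next_bid = leader_bid + STEP
    cost_of_quitting = trailer_bid        # sunk bid, paid for nothing
    cost_of_raising = next_bid - PRIZE    # net cost if the raise wins
    if cost_of_raising < cost_of_quitting:
        # Raising looks cheaper than eating the loss, so escalate.
        leader_bid, trailer_bid = next_bid, leader_bid
    else:
        break

print(f"Winning bid: ${leader_bid}; runner-up forfeits ${trailer_bid}")
# -> Winning bid: $180; runner-up forfeits $179
```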

Before you think that this is all rather contrived and of no importance to your firm, consider Nick Leeson.

Because Leeson was biased in this same way, he piled loss upon loss trying to trade his way out of an initial loss, and the result put Barings Bank out of business (Nicholson 1998; Hoch and Kunreuther 2001; Goto 2007).

The phenomenon has been seen in countless settings, ranging from big lies told to cover small lies (President Clinton?) to investors who sell their good shares and hold onto those that are plummeting – a trait we share with monkeys.

It even overpowers actual gains.

Take exchange rates. Some time ago the Australian dollar stood at 0.96 to the US dollar, against a typical exchange rate closer to 0.7. A person holding AUD could thus make a tidy profit by paying the $20 transfer fee and moving a large quantity of AUD into USD. In many cases, however, when the rate dropped to 0.94, people who had pegged their expectations to 0.96 now regarded 0.94 as a loss, and instead of selling and banking the benefit of 0.94 against a typical 0.7, they held onto their AUD in the hope that it would climb back to 0.96.

The rate then proceeded to drop further, triggering even more angst; those people now perceived an even bigger loss, felt an even stronger desire to see the rate climb back before selling, … and so on.
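
The arithmetic of the example makes the lopsidedness plain. Assuming, purely for illustration, a holding of AUD 100,000 (the rates and the $20 fee are from the example above):

```python
# Reference-point arithmetic for the AUD/USD example. The AUD 100,000
# holding is an assumed illustrative amount; the exchange rates and the
# $20 fee come from the example in the text.

holding_aud = 100_000
fee_usd = 20

for rate in (0.96, 0.94, 0.70):
    proceeds_usd = holding_aud * rate - fee_usd
    print(f"Sell at {rate:.2f}: USD {proceeds_usd:,.0f}")

# Sell at 0.96: USD 95,980
# Sell at 0.94: USD 93,980
# Sell at 0.70: USD 69,980
#
# Against the typical 0.70 rate, selling at 0.94 is still a ~USD 24,000
# gain; against the 0.96 reference point it "feels" like a USD 2,000
# loss -- and the loss frame is the one that drives the decision.
```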

This is the way people lose great fortunes on the stock market, in gambling, and in business ventures in general.

Risk aversion also has a twin brother – Aversion to Change

Change Reluctance

Since most random change is harmful, risk aversion equates to a reluctance to change what is tried and true – and herein lies the real rub. Good professional practices can limit the harm done in Bazerman-auction situations and can embed safeguards against scenarios such as Barings Bank, but there is another problem that no amount of procedural interlocks and policies can prevent: external change.

Even if a company has an absolutely perfect market approach, externalities cause unpredictable shifts in the business terrain that demand adaptation and course corrections – and those corrections inevitably require novelty and innovation.

This inverse link between risk aversion and innovation has been the subject of books (Hunt and Hazel 2003) and has even spurred research suggesting that risk aversion is tied to low cognitive ability (Dohmen, Falk et al. 2010). We might, in a snide moment, say that risk-averse corporations are just stupid, but that would be unkind and untrue.

Which brings us back to the kind of Conservatism that leads to corporate death.

Conclusion

Even though the cultural and existential narratives of mature companies tend to be “onwards and upwards” in aspiration, the more mundane “how things are done here” ground truths reveal a gap between declared and operant behaviour and goals: actual behaviour is risk-averse rather than innovative.

The difficulty is that to gain the benefits of experimentation without courting catastrophic failure, one has to build an environment in which frequent small risks can be taken without jeopardizing the survival of the organization. That requires openness to experimentation and a willingness to deliberately expend resources on experiments, which in turn requires a mindset and corporate culture in which “playing” is allowed – yet as companies shed the youthful “go-go” character of their entrepreneurial stage, caution and change reluctance grow.

~~~

Matthew Loxton is a Knowledge Management professional and holds a Master’s degree in Knowledge Management from the University of Canberra. Mr. Loxton has extensive international experience and is currently available as a Knowledge Management consultant or as a permanent employee at an organization that wishes to put knowledge to work.

Bibliography

Bazerman, M. and M. Neale (1992). “Nonrational escalation of commitment in negotiation.” European Management Journal 10(2): 163-168.

Bazerman, M. H. (2001). Smart money decisions: why you do what you do with money (and how to change for the better), Wiley.

Chen, K., V. Lakshminarayanan, et al. “The evolution of our preferences: Evidence from capuchin monkey trading behavior.”

Dohmen, T., A. Falk, et al. (2010). “Are risk aversion and impatience related to cognitive ability?” The American Economic Review 100(3): 1238-1260.

Goto, S. (2007). “The Bounds of Classical Risk Management and the Importance of a Behavioral Approach.” Risk Management and Insurance Review 10(2): 267-282.

Hoch, S. J. and H. C. Kunreuther (2001). “A complex web of decisions.” Wharton on making decisions: 1-14.

Hunt, B. and G. Hazel (2003). The Timid Corporation: why business is terrified of taking risk, J. Wiley.

McDermott, R., J. H. Fowler, et al. (2008). “On the evolutionary origin of prospect theory preferences.” The Journal of Politics 70(2): 335-350.

Nicholson, N. (1998). “How hardwired is human behavior?” Harvard Business Review 76: 134-147.

Putler, D. S. (1992). “Incorporating reference price effects into a theory of consumer choice.” Marketing Science 11(3): 287-309.

Tversky, A. and D. Kahneman (1991). “Loss aversion in riskless choice: A reference-dependent model.” The Quarterly Journal of Economics 106(4): 1039-1061.


Please contribute to my self-knowledge and take this 1-minute survey that tells me what my blog tells you about me. – Completely anonymous.



Externalization and Avoidance – How we ignore and avoid facing our own faults

September 22, 2010

Externalization and avoidance are key mechanisms by which we ignore our own faults, avoid facing them, and project them onto others – and this has ramifications for how organizational learning does (or doesn’t) take place.

The Problem

In a journal article about experiences with professional business consultants, Chris Argyris describes a senior manager asking a group of consultants what problems they encountered and what changes they could make to their own practices to improve service. In response, several of the consultants explained what the customers could do to improve the consultants’ access to information or the clients’ acceptance of advice. The manager tried several more times to solicit ideas on changes the individuals could make to their own consulting practices, but was again given details of things customers and others could do.
The consultants seemed oblivious to how painstakingly they avoided discussing their own practices.

“As long as efforts at learning and change focused on external organizational factors—job redesign, compensation programs, performance review, and leadership training—the professionals were enthusiastic participants. Indeed, creating new systems and structures was precisely the kind of challenge that well-educated, highly motivated professionals thrived on.
And yet the moment the quest for continuous improvement turned to the professionals’ own performance, something went wrong.”
(Argyris 1991)

This externalization behaviour is explained by Argyris’s concepts of “single-loop learning” and “Model I” behaviour: when Model I actions meet objections or errors, people deploy defenses that “create defensiveness, self-fulfilling prophecies, self-sealing processes, and escalating error” (Argyris 2000).

Not only are the errors unmentionable, but the very unmentionability is itself unmentionable – hence “self-sealing” (Argyris 1999).

Argyris frames this as a defense mechanism against embarrassment and feelings of threat – one so deeply entrenched and so finely practiced that its operation is automatic, instantaneous, and spontaneous (Argyris 1990).

When people are challenged about these self-sealing actions, or when an unmentionable is mentioned, they “become defensive, screen out criticism, and put the ‘blame’ on anyone and everyone but themselves” (Argyris 1999), and are thus inclined to scapegoat rather than engage in either self-reflection or analysis of the “tried and true” methods of received wisdom.

An example of this in practice is a discussion on LinkedIn in which self-declared HR professionals debated what they “really hate in a CV”.

Rather than addressing the more pertinent question of how they should go about identifying and acquiring better human assets than their competition, they found fault with external actors and produced a long shopping list of what applicants should be doing differently, rather than what they themselves should be doing to address the core challenge of obtaining better assets. Many framed the issue in terms of how applicants should “sell” themselves, or what self-promotion applicants should engage in – thereby ignoring the obvious question of what recruiters should do to ensure that they procure the best candidates in spite of obstacles.

If asked why they are critiquing the resume-writing skills of applicants rather than changing their own practices to increase hiring quality, they again turn the discussion to the failings of the applicants and frame it in terms of the shortcomings of applicants’ self-promotional techniques.

Further probing or argument about why they choose to focus on resume-writing errors rather than acquisition problems leads, predictably, to complaints of rudeness, general defensiveness, and further explication of what applicants should do to be more acceptable to recruiters.

In short, ego-defenses.

What to do?

In his early works Argyris argues in much the same vein as Freud: that making people aware of their defensive mechanisms will lead to mastery over them and to resolution – self-revelation as a therapeutic bulwark against systematic Model I behaviour.

Unfortunately this has not proven to be a very successful approach, and as I discuss in an earlier blog post entitled “Dealing With Failure”, escape is not at all easy, since these defensive mechanisms are both deeply ingrained and perhaps inborn.

I propose three main approaches for mitigation:

  1. Engineer the human out wherever possible
  2. Place procedural traps to trigger corrective actions
  3. Copy the scientific method’s use of critical tests

Engineering

This is an old trick from safety and quality practice: build processes and procedures that simply remove the human from situations where humans are prone to error, or engineer the risk downwards with guards, interlocks, and other devices that keep fingers away from blades and eyes away from flying debris. In short, build processes that reduce the opportunity for human error.
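
A software-flavoured analogue of the same principle (my illustration, not drawn from the safety literature) is to make the process itself refuse to proceed, rather than rely on a human to remember the check:

```python
# Illustrative software "interlock": the precondition is enforced by the
# process itself rather than by human vigilance. The trading scenario,
# limit, and approval rule are assumptions invented for this sketch.

class InterlockError(Exception):
    """Raised when a guarded precondition is violated."""

def execute_trade(amount: float, position_limit: float, approved: bool) -> str:
    # The guard: no amount of operator confidence (or bias) can push a
    # trade past these checks, just as a blade guard works whether or
    # not the operator is paying attention.
    if amount > position_limit:
        raise InterlockError(f"{amount} exceeds position limit {position_limit}")
    if not approved:
        raise InterlockError("second-person approval is missing")
    return f"executed trade for {amount}"

print(execute_trade(5_000, position_limit=10_000, approved=True))
```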

Traps & Triggers

Stuff happens, and despite all your engineering, people will still have cognitive biases and be prone to avoid noticing them – so build traps into the procedures that will catch the biases in the act and trigger self-correcting processes. With all the will in the world you cannot wish away your biases, but you can plan ahead by setting traps for yourself.
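
One concrete shape such a trap can take – sketched here with invented thresholds, as an illustration rather than a prescription – is a rule recorded before the decision is made, whose breach forces a review no matter how the decision-maker feels at the time:

```python
# Sketch of a pre-committed "trap": the thresholds are written down
# before the project starts, and crossing one forces a documented
# review regardless of in-the-moment optimism. All numbers are
# invented for illustration.

from dataclasses import dataclass

@dataclass
class Precommitment:
    description: str
    max_loss: float        # stop-loss agreed up front
    review_every: int      # forced checkpoint cadence (periods)

    def triggered(self, cumulative_loss: float, period: int) -> list[str]:
        alerts = []
        if cumulative_loss > self.max_loss:
            alerts.append(f"STOP-LOSS: {cumulative_loss} exceeds {self.max_loss}")
        if period % self.review_every == 0:
            alerts.append("SCHEDULED REVIEW: justify continuing, in writing")
        return alerts

plan = Precommitment("new market entry", max_loss=50_000, review_every=3)
print(plan.triggered(cumulative_loss=62_000, period=3))
# -> both triggers fire; continuing now requires a fresh sign-off,
#    not just the original decision-maker's reluctance to "lose".
```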

Critical Tests

Science has had the principle of critical tests for some 400 years now; perhaps the concept should be adopted in more areas of human activity. If you believe that your skill at sizing up character traits in job interviews gets you good employees, then you must record what you thought at the time and test it against actual outcomes. It simply isn’t enough to believe it to be true – you must follow up with measurements that could, in principle, prove you wrong. You must also test the converse: sample some of the candidates you rejected and see what became of them. If you don’t, you could well be systematically excluding the best candidates and would never know. If the people you rejected went on to become Nobel Laureates, CEOs, and celebrated practitioners of their craft, then perhaps you are biased in how you reject candidates – but unless you sample your rejects, you will never find out.
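
A sketch of what such a critical test could look like in hiring – the records and numbers below are invented for illustration – is to log the prediction at interview time and later compare outcomes for hires against a sample of rejects:

```python
# Sketch of a critical test of interview judgement: predictions are
# recorded at decision time and later compared with actual outcomes,
# for hires AND for a sample of rejected candidates. All records here
# are invented for illustration.

candidates = [
    # (predicted_success, was_hired, succeeded_after_a_year)
    (True,  True,  True),
    (True,  True,  False),
    (False, True,  True),
    (False, False, True),    # a sampled reject who thrived elsewhere
    (False, False, False),
    (True,  False, True),    # another reject worth knowing about
]

def hit_rate(records) -> float:
    hits = [predicted == actual for predicted, _, actual in records]
    return sum(hits) / len(hits) if hits else float("nan")

hires = [r for r in candidates if r[1]]
sampled_rejects = [r for r in candidates if not r[1]]

print(f"Prediction accuracy on hires:   {hit_rate(hires):.0%}")
print(f"Prediction accuracy on rejects: {hit_rate(sampled_rejects):.0%}")
# If predictions fare no better on hires than on sampled rejects, the
# interview "signal" may be noise -- which you can only discover by
# following up on the people you turned away.
```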

Conclusion

Simple discussion is incapable of penetrating the protective layers and well-rehearsed defenses of Model I behaviours; to avoid embarrassment, people will defend their positions with well-oiled and very subtle mechanisms such as reframing an issue to externalize its causes. However, engineering human biases out of processes, exposing behaviour to traps and triggers, and subjecting it to critical tests can significantly lower the risk of falling prey to the many cognitive and behavioural biases to which humans are prone.

Please contribute to my self-knowledge and take this 1-minute survey that tells me what my blog tells you about me. – Completely anonymous.

Bibliography

Argyris, C. (1990). Overcoming Organizational Defenses, Prentice Hall.

Argyris, C. (1991). “Teaching Smart People How to Learn.” Harvard Business Review 69(3): 99-109.

Argyris, C. (1999). On Organizational Learning, Blackwell.

Argyris, C. (2000). Flawed Advice and the Management Trap, Oxford University Press.

~~~

Matthew Loxton is a Knowledge Management professional and holds a Master’s degree in Knowledge Management from the University of Canberra. Mr. Loxton has extensive international experience and is currently available as a Knowledge Management consultant or as a permanent employee at an organization that wishes to put knowledge to work.

