Posts Tagged ‘risk aversion’

Risk and Creeping Conservatism – The Death of a Company

November 16, 2010

Some readers have asked me to expand a bit on what I meant by “Conservatism” as being part of what leads to corporate death – some were worried that I was casting their political views in a negative light. While it would certainly be interesting to see whether political Conservatism is related to risk aversion, in decision-making the term “conservatism” refers to risk aversion and the way it winds up leading to paralysis and some very bizarre behaviour under stress.

It’s all about the price of eggs

Some years ago Daniel Putler of the USDA noticed that consumer behaviour towards egg prices was asymmetrical, in what he called a “reference price effect” (Putler 1992).
When egg prices rose, demand dropped, as predicted by standard economic theory; what standard theory could not explain was that when egg prices subsequently dropped, purchasing did not respond in equal measure – people reacted more strongly to the price rise than to the price drop.

This response asymmetry turns out to be the result of a deep-seated, probably evolutionary, cognitive bias towards risk. As a species, the behavioural-economics theory goes, we are more attentive and react more strongly to risk than to reward, and this plays out whenever we have already set an expectation or reference point (Chen, Lakshminarayanan et al.; Tversky and Kahneman 1991; McDermott, Fowler et al. 2008).
It’s easy to imagine how this could develop evolutionarily – the sound of tall savannah grass moving might be just the wind, it could be somebody bringing us a nice melon, or it might be somebody or something creeping up on us. If we typically reacted to it as a threat and were wrong, we would just burn a little energy and appear jumpy. If, on the other hand, we typically assumed it was something nice and were wrong, we would more often end up as lunch.

Jumpy but living people leave more offspring than relaxed but dead people.
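In prospect-theory terms, this asymmetry is usually captured by a reference-dependent value function that is steeper for losses than for gains. A minimal formalisation (the functional form is Tversky and Kahneman’s; the parameter estimates are the commonly quoted ones from their later work, not from the papers cited here):

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \quad \text{(gain relative to the reference point)} \\
-\lambda(-x)^{\beta} & \text{if } x < 0 \quad \text{(loss relative to the reference point)}
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\ \lambda \approx 2.25
\]

With λ ≈ 2.25, a loss stings roughly twice as much as an equal gain pleases – which is exactly the egg buyers’ asymmetry above.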

Nightmare Auctions

There are some amusing experiments that show how this works in everyday life. One is the Bazerman auction, in which loss aversion produces a non-rational escalation of commitment (Bazerman and Neale 1992; Bazerman 2001). Prof. Max Bazerman has routinely carried out the following piece of trickery on unsuspecting students.

Bazerman offers an amount of money out of his wallet for auction, say $20. The rules of the auction are atypical but fairly straightforward: bids must rise in increments of a certain minimum value ($1), and the winner is, obviously, the person with the highest bid.

So far so good.

The payout, however, is a little different – the runner-up also pays their own bid, but only the winner gets the prize.

Behaviour is fairly typical at first: bids come in thick and fast until they approach a significant proportion of the prize, at which point most players drop out, leaving the leader and the runner-up alone in the game.
Bids slow but keep climbing until the fateful point is reached where the next bid will be $21 for a $20 prize. At this point, rather than capitulate, the typical outcome is further furious bidding as each player ups the ante and tries to avoid loss – in many trials reaching bids of $180 for the $20 prize.
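The trap is easy to reproduce. Here is a minimal sketch (my illustration, not Bazerman’s protocol) of two myopic bidders, each of whom keeps raising whenever topping the other’s bid costs less than the sure loss of quitting; note that an assumed outside budget cap, not the prize value, is what finally stops the bidding:

    # Bazerman-style auction: highest bidder wins the prize, but the
    # runner-up also pays their own bid. The $20 prize and $1 step follow
    # the article; the $180 budget cap is an assumption standing in for
    # whatever real-world limit finally ends the escalation.
    PRIZE, STEP, BUDGET = 20, 1, 180

    def run_auction():
        bids = [0, 0]                # each player's committed bid
        turn = 0                     # whose decision it is
        while True:
            next_bid = bids[1 - turn] + STEP
            loss_if_quit = bids[turn]           # runner-up pays, gets nothing
            loss_if_raise = next_bid - PRIZE    # winner pays bid, keeps prize
            if loss_if_raise < loss_if_quit and next_bid <= BUDGET:
                bids[turn] = next_bid           # raising always looks cheaper...
                turn = 1 - turn
            else:
                return max(bids), min(bids)     # ...until the budget cap bites

    winning_bid, runner_up_bid = run_auction()
    print(f"${winning_bid} winning bid, ${runner_up_bid} from the runner-up, "
          f"for a ${PRIZE} prize")
    # -> $180 winning bid, $179 from the runner-up, for a $20 prize

At every single step, raising looks $18 cheaper than quitting, so each locally “rational” choice compounds into a collectively absurd outcome.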

Before you think that this is all rather contrived and of no importance to your firm, consider Nick Leeson.

Because Leeson was biased in this same way, he piled debt upon debt trying to recover an initial loss, and the result put Barings Bank out of business (Nicholson 1998; Hoch and Kunreuther 2001; Goto 2007).

The phenomenon has been seen in countless examples, ranging from big lies told to cover small lies (President Clinton?) to people who sell their good shares and hold onto those that are plummeting – a trait we share with capuchin monkeys (Chen, Lakshminarayanan et al.).

Loss aversion can even overpower actual gains.

Take exchange rates: some time ago the Australian dollar was at 0.96 to the US dollar, against a typical exchange rate of closer to 0.7. A person holding AUD could thus make a tidy profit by paying the $20 transfer fee and moving a large quantity of AUD into USD. In many cases, however, when the rate dropped to 0.94, people who had pegged their expectations to 0.96 regarded 0.94 as a loss, and instead of selling and getting the benefit of 0.94 against a typical 0.7, held onto their AUD in the hope that it would climb back to 0.96.

The rate then proceeded to drop further, triggering even more angst: those people now perceived an even bigger loss, felt an even stronger desire to see the rate climb back before selling, … and so on.
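The arithmetic makes the trap plain. A minimal sketch (the 100,000 AUD principal is my illustrative figure; the rates and the $20 fee are from the story above):

    # USD proceeds from converting 100,000 AUD at various rates, less the fee.
    AMOUNT, FEE = 100_000, 20
    for rate in (0.96, 0.94, 0.70):
        print(f"at {rate:.2f}: {AMOUNT * rate - FEE:,.0f} USD")
    # at 0.96: 95,980 USD   (the reference point)
    # at 0.94: 93,980 USD   (framed as a 2,000 USD "loss")
    # at 0.70: 69,980 USD   (the typical rate - 0.94 still beats it by 24,000 USD)

Measured against the typical rate, 0.94 is still a windfall; measured against the 0.96 reference point, it feels like a loss, and the reference point wins.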

This is the way people lose great fortunes on the stock market, in gambling, and in business ventures in general.

Risk aversion also has a twin brother – Aversion to Change

Change Reluctance

Since most random change is harmful, risk aversion equates to a reluctance to change what is tried and true, and herein lies the real rub. While good professional practices can limit the harm done in Bazerman-auction situations and can embed safeguards against scenarios such as Barings Bank, there is another problem that no amount of procedural interlocks and policies can prevent: external change.

Even if a company has an absolutely perfect market approach, externalities cause unpredictable changes in the business terrain that demand adaptation and course corrections – and those corrections inevitably require novelty and innovation.

This inverse link between risk aversion and innovation has been the subject of books (Hunt and Hazel 2003) and has even spurred some research suggesting that risk aversion is tied to low cognitive ability (Dohmen, Falk et al. 2010). We might, in a snide moment, say that risk-averse corporations are just stupid, but that would be unkind and untrue.

Which brings us back to the kind of Conservatism that leads to corporate death.

Conclusion

Even though the cultural and existential narratives of mature companies tend to be “onwards and upwards” in aspiration, the more mundane “how things are done here” ground truths show that there is often a gap between declared and operant behaviour and goals, and that actual behaviour is risk averse rather than innovative.

The difficulty is that to gain the benefits of experimentation without the risks that catastrophic failure might bring, one has to build an environment in which frequent small risks can be taken without jeopardizing the survival of the organization. That requires openness to experimentation and a willingness to expend resources on experiments, which in turn require a mindset and corporate culture in which “playing” is allowed – yet as companies shed the youthful “go-go” character of their entrepreneurial stage, caution and change reluctance grow.

~~~

Matthew Loxton is a Knowledge Management professional and holds a Master’s degree in Knowledge Management from the University of Canberra. Mr. Loxton has extensive international experience and is currently available as a Knowledge Management consultant or as a permanent employee at an organization that wishes to put knowledge to work.

Bibliography

Bazerman, M. and M. Neale (1992). “Nonrational escalation of commitment in negotiation.” European Management Journal 10(2): 163-168.

Bazerman, M. H. (2001). Smart Money Decisions: Why You Do What You Do with Money (and How to Change for the Better). Wiley.

Chen, K., V. Lakshminarayanan, et al. “The evolution of our preferences: Evidence from capuchin monkey trading behavior.”

Dohmen, T., A. Falk, et al. (2010). “Are risk aversion and impatience related to cognitive ability?” The American Economic Review 100(3): 1238-1260.

Goto, S. (2007). “The Bounds of Classical Risk Management and the Importance of a Behavioral Approach.” Risk Management and Insurance Review 10(2): 267-282.

Hoch, S. J. and H. C. Kunreuther (2001). “A complex web of decisions.” Wharton on Making Decisions: 1-14.

Hunt, B. and G. Hazel (2003). The Timid Corporation: Why Business Is Terrified of Taking Risk. J. Wiley.

McDermott, R., J. H. Fowler, et al. (2008). “On the evolutionary origin of prospect theory preferences.” The Journal of Politics 70(2): 335-350.

Nicholson, N. (1998). “How hardwired is human behavior?” Harvard Business Review 76: 134-147.

Putler, D. S. (1992). “Incorporating reference price effects into a theory of consumer choice.” Marketing Science 11(3): 287-309.

Tversky, A. and D. Kahneman (1991). “Loss aversion in riskless choice: A reference-dependent model.” The Quarterly Journal of Economics 106(4): 1039-1061.

 


 

Knowledge Management Issues: Dealing with Failure.

March 13, 2010

Professionals often excel at dealing with immediate expediencies but perform quite poorly when it comes to deeper root causes. This may be a side-effect of their expectation of success – having rarely failed throughout their education and professional careers. When single-loop strategies do not perform as expected, such people often become defensive and seek a scapegoat. This is discussed below in relation to the broader concept of organisational learning.

Abstract

Both science and business systematically pay more attention to successful outcomes than to unsuccessful ones, because structural mechanisms drive this behaviour. One author who notes the bias towards success, and who is cited across many domains of practice, is Chris Argyris, whose depiction of “double-loop learning” – learning about learning – has been highly influential.
Argyris details the psychological tendency for people to remain cemented in “single-loop” strategies and the risks that this poses.
A recent trend to correct the “winner” bias can be seen in various domains, where efforts are underway to use failures as warning signals that trigger the double-loop learning and Model-II strategies described by Argyris. The greatest benefits of this reflexive corrective action, and of the focus on what does not work, are perhaps less wasted effort and protection against systematically faulty reasoning.

Introduction

Fulmer reports a study undertaken by the oil giant Royal Dutch Shell which showed an average expected corporate lifespan of less than 40 years – caused, in their view, by corporate “learning disabilities” (Fulmer, 1998:8).

Among the common properties they discerned in those companies surviving beyond that average is the ability to tolerate novelty and innovation.

This creates a tension since with novelty and innovation comes a high risk of failure, and studies indicate that businesses are failure averse.

Aversion to failure

As a norm, we pay a lot of attention to success – we admire those who succeed, and we publish the research projects, experiments, and findings that succeeded. This amounts to a bias towards documenting only the things that work.
An intolerance for failure (or for the admission of failure) may, however, prevent us from gaining new insights or being saved from future failures. This can become institutionalized, and it prevents leaders especially from seeing their own failing methods for what they are, both because they are unused to failure and because they are surrounded by people and structures that obscure the causes and results of maladaptive behaviour (Burke, 2006).

This is not restricted to business, but is also present in science. For example, the Royal Society of Chemistry, which justifiably claims to be “the largest organisation in Europe for advancing the chemical sciences”, states in its guide to authors that “In general there is no need to report unsuccessful experiments”[1].

Discussion

In his earlier work on “Action Science”, Argyris noted a tendency of people to seek out and select data that fit or confirm what they already believe, and to be “predisposed to attribute the behavior of others (but not their own) to dispositional traits”[2] (Argyris, 1985:96). This he tied to what he termed embedded “Single Loop Learning” strategies.

In this Model-I archetype, he lists what he perceives as the operant rules – the “theory-in-use” as opposed to the “espoused theory”:

– Remain in unilateral control
– Win, don’t lose
– Suppress negative feelings
– Act as rationally as possible

He describes the “Single Loop” process in terms of a thermostat in a heating system. Information is not solicited, nor is the system capable of self-awareness or of changing the control inputs or norms. If the temperature drops below a set threshold, action is initiated to return the temperature to nominal, but the nominal setting itself is persistent and the heating mechanism unchangeable.
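Argyris’s thermostat translates directly into code. A minimal sketch (my illustration; the occupancy rule is an assumed example of a revisable norm, not Argyris’s own): the single-loop controller can only correct deviations from a setpoint it cannot see or question, while the double-loop controller first asks whether the setpoint itself is still appropriate.

    # Single-loop learning: close the gap against a fixed, unexamined norm.
    SETPOINT = 20.0  # the governing norm: target temperature in degrees C

    def single_loop(temperature: float) -> str:
        # The setpoint is invisible and unchangeable from inside the loop.
        return "heat on" if temperature < SETPOINT else "heat off"

    # Double-loop learning: question, and if warranted revise, the norm
    # itself before acting on it.
    def double_loop(temperature: float, occupied: bool, setpoint: float):
        if not occupied and setpoint > 15.0:
            setpoint = 15.0  # the norm is examined and changed
        action = "heat on" if temperature < setpoint else "heat off"
        return action, setpoint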

In this sense, objections or errors in Model-I actions operating under single-loop learning processes “create defensiveness, self-fulfilling prophecies, self-sealing processes, and escalating error” (Argyris, 2000:5).

Not only are the errors unmentionable, but the very unmentionability is itself unmentionable – hence “self-sealing” (Argyris, 1999:58).

Argyris poses this as a defense mechanism against embarrassment and feelings of threat (Argyris, 1990:10), one that is deeply entrenched and finely practised, and in which the individual is so highly skilled that the mechanisms become automatic, instantaneous, and spontaneous (Argyris, 1990, ch. 2).

When people are challenged about these self-sealing actions, or when an unmentionable is mentioned, they “become defensive, screen out criticism, and put the ‘blame’ on anyone and everyone but themselves” (Argyris, 1999:127), and are thus inclined to scapegoat rather than engage in either self-reflection or analysis of the “tried and true” methods of received wisdom.

This externalization of blame, or “an enemy out there” attitude, is echoed in a parallel view given by Senge in his description of management teams, where disagreement with expectations is usually demonstrated in a fashion that “lays blame, polarizes opinion, and fails to reveal the underlying differences …” (Senge, 1990:24).

To address these second-order problems, where the norms or approaches themselves need to be changed, Argyris proposes his Model-II “Double Loop Learning”, for which he lists a new set of governing values (Argyris, 2000:98):

– Valid Information Seeking
– Free and Informed Choice
– Internal Commitment

This would seem like the solution and the end of the process, but drawing on his concept of “System Domain Defenses”, Bain speaks of how organizations “… avoid change wherever possible …” and tend to regress over time back to faulty operant behaviour even after corrective changes have been put in place. He attributes this to the fact that organizations are typically part of bigger communities of practice that support the original behavioural models and that, like the organizations themselves, are averse to change (Bain, 1998:416).

This reluctance to change brings us back to the “Learning Disabilities” that de Geus of Royal Dutch Shell articulated with regard to failed companies.

If the first-order actions of Model-I Single Loop Learning are thus unable to solve second-order problems, and we require second-order Model-II “Double Loop Learning” but are averse to both the change and its effort cost, then clearly we need a mechanism to trigger Double Loop Learning when needed, and a cultural attitude that lets us act on it.

This discussion suggests an approach that uses external informational input to break through the organizational system-domain fabric and embraces failure as a “signal from nature” that the mechanism itself is in error. This approach is already mature in the sciences, under the framework of theoretical falsifiability and critical tests.

The Sciences

One of the foundations of modern science is the concept of falsifiability as articulated by Sir Karl Popper and embodied by the logical form of modus tollens (Kemerling, 2002).
In this form we discover a truth from the combination of a critical test (Thornton, 2006) and the failure of an assertion.

If my car is white, then no number of white objects in my parking bay can enable me to conclude that what is there must be my car; however, if what is parked in my spot is not white, then I am sure that it cannot be my car.[3]
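In symbols, with C for “what is parked there is my car” and W for “what is parked there is white”, only the second of these inferences is valid:

\[
\underbrace{C \rightarrow W,\quad W\ \therefore\ C}_{\text{affirming the consequent (invalid)}}
\qquad\qquad
\underbrace{C \rightarrow W,\quad \neg W\ \therefore\ \neg C}_{\text{modus tollens (valid)}}
\]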

 

This allows nature to dictate and to break theories that are in error by providing information external to our system domain.

Looking for false outcomes is thus foundational to science, but we may ask whether it is common amongst scientists themselves.

In an experimental study, Kern, Mirels, and Hinshaw demonstrated that a large proportion of career scientists were unable to identify valid propositional-logic statements and were frequently unable to use modus tollens correctly (Kern, 1983).

We can see, therefore, how science itself is structured to combat this confirmation bias: it seeks external information and has an operant culture of reacting to falsification, as suggested earlier. Yet Bain’s regression process, described earlier, drives even this back into Model-I “skilled incompetence” (Argyris, 2006:41).

Several attempts are being made to address this by organizations in many different communities of practice, ranging from oncology (Kern, 2007) and biomedicine (Olsen, 2007) to computer science (Prechelt, 2006), ecology and evolutionary biology (Blank et al, 2007), natural language processing and machine learning (Dale et al, 2007), and qualitative and quantitative results in the social sciences (Biesma et al, 2007).

Conclusion

Model-I behaviour is our natural, highly learned and skilled state, and it may be difficult or impossible to maintain Model-II behaviour over long periods of time. When confronted with evidence of Model-I failures, our natural reaction is to become defensive and to seek external agents to blame, in order to maintain systemic homeostasis and avoid embarrassment and feelings of loss of control. It is, however, possible to use Double-Loop Learning to make systematic changes that address the Model-I problems Single-Loop Learning simply entrenches. One mechanism to engage Model-II activity is to place deliberate triggers in our processes, either through quality procedures or through exposure to external thinking.

Further study is needed into how effort reduction may play a role in regression to Model-I behaviour archetypes, and into bringing neuropsychology and behavioural models into synch with Argyris’s views on Single Loop Learning and with Bain’s observations that organizational changes tend to regress over time back to the dysfunctional but protective Domain Fabric.[4]

Further Reading

1. “Does double loop learning create reliable knowledge?”
Author(s): Deborah Blackman, James Connelly, Steven Henderson
The Learning Organization; Volume: 11, Issue: 1; 2004. Research paper.

2. “Transcending organisational autism in the UN system response to HIV/AIDS in Africa”
Author(s): John G.I. Clarke
Kybernetes; Volume: 35, Issue: 1/2; 2006. Conceptual paper.

3. “The effect of downsizing strategy and reorientation strategy on a learning orientation”
Author(s): Mark Farrell, Felix T. Mavondo
Personnel Review; Volume: 33, Issue: 4; 2004. Research paper.

4. “Towards a new approach to understanding service encounters: establishing, learning from and reconciling different views”
Author(s): Mark N.K. Saunders, Christine S. Williams
Journal of European Industrial Training; Volume: 24, Issue: 2/3/4; 2000.

5. “Circular organizing and triple loop learning”
Author(s): A. Georges L. Romme, Arjen van Witteloostuijn
Journal of Organizational Change Management; Volume: 12, Issue: 5; 1999.

6. “A supplier development programme: the SME experience”
Author(s): Sharon Williams
Journal of Small Business and Enterprise Development; Volume: 14, Issue: 1.

7. “We will teach you the steps but you will never learn to dance”
Author(s): Jane Turner, Sharon Mavin, Sonal Minocha
The Learning Organization; Volume: 13, Issue: 4; 2006. Case study.

8. “Narratives in ERP systems evaluation”
Author(s): Jonas Hedman, Andreas Borell
Journal of Enterprise Information Management; Volume: 17, Issue: 4; 2004. General review.

9. “Grief and educative leadership”
Author(s): R.J.S. Macpherson, Barbara Vann
Journal of Educational Administration; Volume: 34, Issue: 2; 1996. Case study.

References

  1. Argyris, 1985: “Action Science”, Chris Argyris, Robert Putnam, Diana McLain-Smith, Jossey Bass, 1985.
  2. Argyris, 1990: “Overcoming Organizational Defenses”, Chris Argyris, Prentice Hall, 1990.
  3. Argyris, 1999: “On Organizational Learning”, 2nd edition, Chris Argyris, Blackwell, 1999.
  4. Argyris, 2000: “Flawed Advice and the Management Trap”, Chris Argyris, Oxford University Press, 2000.
  5. Argyris, 2004: “Reasons and Rationalizations: The Limits to Organizational Knowledge”, Chris Argyris, Oxford University Press, 2004.
  6. Bain, 1998: “Social defenses against organizational learning”, Human Relations, vol. 51, no. 3, pp. 413-429.
  7. Biesma et al, 2007: Biesma, Regien, et al., website “The Journal of Spurious Correlations”, http://www.jspurc.org/subm2.htm, last accessed 5 Apr 07.
  8. Blank et al, 2007: Jochen Blank, Michael J. Stauss, Jurgen Tomiuk, Joanna Fietz and Gernot Segelbacher, website “Journal of Negative Results”, http://www.jnr-eeb.org/, last accessed 5 Apr 07.
  9. Burke, 2006: “Why leaders fail: exploring the dark side”, Ronald J. Burke, International Journal of Manpower, vol. 27, no. 1, 2006.
  10. Dale et al, 2007: Dale, Robert, website “Natural Language Processing and Machine Learning”, http://jinr.site.uottawa.ca/, last accessed 5 Apr 07.
  11. Fulmer, 1998: “The second generation learning organizations: new tools for sustaining competitive advantage”, Fulmer RM, Gibbs P, Keys JB, Organizational Dynamics, vol. 27, no. 2, pp. 7-20.
  12. Gough, 2006: “Women See Friends, Men See Foes”, Nancy Gough, Science, 2 June 2006, 312: 1281 [DOI: 10.1126/science.312.5778.1281c].
  13. Kemerling, 2002: “Philosophy Pages”, http://www.philosophypages.com/dy/m9.htm#mt, last accessed 5 Apr 07.
  14. Kern, 2007: website “Journal of Negative Observations in Genetic Oncology”, http://www.path.jhu.edu/NOGO/, last accessed 5 Apr 07.
  15. Kern, 1983: “Scientists’ Understanding of Propositional Logic: An Experimental Investigation”, Leslie H. Kern, Herbert L. Mirels, Virgil G. Hinshaw, Social Studies of Science, vol. 13, no. 1 (Feb. 1983), pp. 131-146.
  16. Olsen, 2007: Olsen, Bjorn, website “Journal of Negative Results in Biomedicine”, http://www.jnrbm.com, last accessed 5 Apr 07.
  17. Prechelt, 2006: Prechelt, Lutz, website “Forum for Negative Results”, http://page.inf.fu-berlin.de/~prechelt/fnr/, last accessed 5 Apr 07.
  18. Senge, 1990: “The Fifth Discipline: The Art and Practice of the Learning Organization”, Peter M. Senge, Doubleday, 1990, pp. 19-25.
  19. Thornton, 2006: “Karl Popper”, Thornton, Stephen, in The Stanford Encyclopedia of Philosophy (Winter 2006 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/win2006/entries/popper/, last accessed 20 Apr 07.
  20. Tom et al, 2007: “The Neural Basis of Loss Aversion in Decision-Making Under Risk”, Sabrina M. Tom, Craig R. Fox, Christopher Trepel, Russell A. Poldrack, Science, 26 January 2007, 315: 515-518 [DOI: 10.1126/science.1134239].

 


[4] It may be interesting to examine why gossip or “informal social communication” is mostly about negative outcomes, and why traffic accidents get our attention. This is perhaps an evolutionary byproduct of attention to danger, since ignoring one true danger can be fatal, whereas running away from a false alarm usually is not. This is evident in the asymmetry in the neurology of risk evaluation (Tom et al, 2007).


[1] See the RSC author guidelines at http://www.rsc.org/Publishing/ReSourCe/AuthorGuidelines/ArticleLayout/sect1.asp; last accessed 7 March 2010.

[2] This is also known as the “fundamental attribution error”.

[3] Sadly, Popper was undone in part by the Duhem-Quine thesis, which showed that rejecting an hypothesis in this way is not foolproof, since other reasons may exist for why it failed. In this case, perhaps somebody painted my car!

~~~~~~~~~

Matthew Loxton is the director of Knowledge Management & Change Management at Mincom and blogs on Knowledge Management. Matthew’s LinkedIn profile is on the web, and he has an aggregation website at www.matthewloxton.com.
Opinions are the author’s and not necessarily shared by Mincom, but they should be.
