Tuesday, June 15, 2010

The BP oil spill and the disaster estimations - Part 1/3

The BP oil spill is already the biggest oil spill in US history and is on its way to becoming an unprecedented industrial disaster, given the environmental impact of millions of barrels of oil gushing into the Gulf of Mexico. Even the most hardened of carbon lovers cannot but be moved at the sight of the fragile wildlife of the Gulf literally soaking in oil. The ecosystems of the Gulf states, already ravaged by unrestrained development and the odd super-cyclone, are now being struck a death blow by the spewing gusher.

Could the specific chain of events leading up to this spill have been predicted? The answer is no. But that does not mean the outcome could not have been anticipated. Given the technological complexity that deep-sea oil drilling operations typically involve, there was always a measurable probability that one of the intermeshing systems and processes would give way and leave an oil well out of control. As Donald Rumsfeld, Secretary of Defense in the Bush II administration, put it, stuff happens. Where human science and industrial technology have failed abjectly is in underestimating the impact of this kind of event on a habitat and in overestimating the power of technology to fix these kinds of problems.

Fundamentally, the science of estimating the impact of disasters can be broken down into three estimations (sketched together below):
one, the probability that a failure occurs;
two, the damage expected as a result of the failure;
three (which is probably a function of the second), our capability to fix the failure or to mitigate its impact.
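As a rough illustration of how these three pieces fit together (this is my own sketch, not something from the original framing), standard risk-assessment practice often combines them multiplicatively: expected residual impact is roughly the probability of failure, times the expected damage given failure, times the share of that damage that mitigation cannot absorb. A minimal Python sketch with entirely invented numbers:

```python
# Hypothetical illustration of combining the three estimations above.
# Every number here is invented purely for the sake of the example.
p_failure = 1e-3                 # 1: probability a failure occurs (per year)
damage_given_failure = 2e9       # 2: expected damage if it does (dollars)
mitigation_effectiveness = 0.4   # 3: fraction of the damage we can fix or mitigate

expected_annual_impact = p_failure * damage_given_failure * (1 - mitigation_effectiveness)
print(f"Expected residual impact per year: ${expected_annual_impact:,.0f}")  # $1,200,000
```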

In this post, I will discuss the first part of the problem - estimating the probability of failures occurring.

There is a thriving industry, and a branch of mathematics, devoted to estimating these extremely low-probability events, known as Disaster Science. The techniques that disaster scientists and statisticians use are grounded in an understanding of the specific industry (nuclear reactors, oil drilling, aerospace, rocket launches, etc.) and are constantly refreshed as our understanding of the underlying physics, and of the science in general, improves. The nuclear-power industry's approach analyzes the engineering of the plant and tabulates every possible series of unfortunate events that could lead to the release of dangerous radioactive material, including equipment failure, operator error and extreme weather. Statisticians estimate the probability of each disastrous scenario and add the probabilities together. Other industries, such as aviation, use more frequency-based probability models, given the hundreds of thousands of data points available on a weekly basis. Then there are more purely probabilistic approaches, such as tail probability estimation or extreme-event estimation, which use the mathematics of heavy-tailed distributions to estimate the odds of such events occurring. Michael Lewis, in his inimitable style, wrote about this in an old New York Times article called In Nature's Casino.
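To make the contrast between the scenario-tabulation approach and the heavy-tailed approach concrete, here is a minimal Python sketch. All the scenario names, probabilities and Pareto parameters are invented for illustration; they are not estimates for any real facility.

```python
# (1) Scenario tabulation, in the spirit of the nuclear-industry approach:
#     enumerate failure scenarios, estimate each annual probability, and sum them
#     (a reasonable approximation when the scenarios are rare and roughly disjoint).
scenarios = {
    "blowout_preventer_fails": 1e-4,
    "cement_seal_fails":       5e-5,
    "operator_error":          2e-4,
    "extreme_weather_damage":  1e-5,
}
p_any_failure = sum(scenarios.values())
print(f"Estimated annual probability of a major failure: {p_any_failure:.1e}")

# (2) Tail probability estimation with a heavy-tailed (Pareto) model:
#     P(loss > x) = (x_min / x) ** alpha for x >= x_min.
X_MIN = 1.0    # losses in some unit, say billions of dollars (assumed)
ALPHA = 1.5    # tail index; a smaller alpha means a fatter tail (assumed)

def tail_prob(x: float, x_min: float = X_MIN, alpha: float = ALPHA) -> float:
    """Probability that the loss exceeds x under a Pareto tail."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

for x in (5, 20, 100):
    print(f"P(loss > {x}): {tail_prob(x):.4f}")
```

The point of the second half is that, with a heavy tail, the probability of very large losses falls off polynomially rather than exponentially, so extreme outcomes are far more likely than a thin-tailed model would suggest.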

One variable that is a factor, and often the contributing factor, in many such disasters is human error. Human error is extraordinarily difficult to model from past behaviour alone, because a number of factors can confound such a read. For instance, as humans encounter fewer failures, our nature is to become less vigilant and therefore to be at greater risk of failing. Both too little experience and too much experience (especially without having encountered failures) are risky. The quality of the human agent is another variable with wide variability. At one time, NASA drew the brightest engineers and scientists from our best universities. Now the brightest and the best go to Wall Street or other private firms, and it is often the rejects, or the products of second-rung universities, who make it to NASA. This variable of human quality is difficult to quantify, or sometimes difficult to measure in a way that does not offend people on grounds like race, national origin, age and gender. Let us suppose that the brightest and the best who joined NASA previously came from colleges or universities whose admission standards required higher scores on standardized tests. We know that standardized test scores are correlated with the socio-economic level of the test takers, and hence with variables such as income, race, etc. So if NASA now recruits from lower-rung colleges, does it mean that it was more exclusive and discriminatory before (by taking in people with higher average scores) and is more inclusive now? And can we conclude that any drop in quality is a direct function of becoming more inclusive on the admission-criteria front? It is never easy to answer these questions, or even to tackle them without feeling queasy about what one is likely to find along the way.

Another variable, again related to the human factor, is the way we interact with technology. Is the human agent at ease with the technology confronting him, or does he feel pressured and unsure from a decision-making standpoint? I have driven stick-shift cars before, and I was more comfortable and at ease with the decision making around gear changes when the car-human interface was relatively simple and spartan. In my most recent car, as I interact with multiple technology features such as the nav system, the Bluetooth-enabled radio, the steering wheel, the paddle shifters and the engine rev indicator, I find my attention diluted, and the decision making around gear changes is not as precise as it used to be.

Thursday, June 3, 2010

On Knightian Uncertainty

An interesting post appeared recently attempting to distinguish between risk and uncertainty. The distinction was originally proposed by the economist Frank Knight. Knight's theory is that risk is something where the outcome is unknown but the odds can be estimated. When the odds become inestimable, risk turns into uncertainty. In other words, risk can be measured and uncertainty cannot.

There are economists who argue that Knight's distinction only applies in theory. In the world of the casino, where the probability of a 21 turning up or of the roulette ball landing on a certain number can be calculated, it is possible to have risk. But anything outside simple games of chance becomes uncertainty, because the odds are so difficult to measure. The real world out there is so complex that it is hard to make even reasonably short-term projections, let alone the really long-term ones. So what is really the truth here? Does risk (as defined by Knight) even exist in the world today? Or, as recent world events (be it 9/11, the Great Recession, the threatened collapse of Greece, the oil spill in the Gulf of Mexico, the unpronounceable Icelandic volcano) have revealed, is it a mirage to try to estimate the odds of something playing out at anything remotely close to the probabilities we initially assign?
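The casino case is worth making concrete, since it is the textbook example of measurable, Knightian risk: the odds are fully known in advance. A small illustrative Python sketch (American roulette rules; the numbers are standard, the example is mine):

```python
# "Knightian risk" in the casino sense: the odds are known exactly, so the
# expected outcome can be computed precisely before the wheel is ever spun.
# American roulette: 38 pockets; a straight-up bet pays 35 to 1.
P_WIN = 1 / 38      # probability the ball lands on the chosen number
PAYOUT = 35         # net win (in units of the stake) if it does
LOSS = -1           # net result if it does not

expected_value = P_WIN * PAYOUT + (1 - P_WIN) * LOSS
print(f"Expected value per $1 straight-up bet: ${expected_value:.4f}")  # about -$0.0526
```

Nothing remotely like this calculation is available for a 9/11 or a Deepwater Horizon, which is exactly the gap Knight's distinction is trying to capture.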

I have a couple of reactions. First, my view is that risk can be measured and outcomes predicted more or less accurately under some conditions in the real world. When forces are more or less in equilibrium, it is possible to have some semblance of predictability about political and economic events, and therefore an ability to measure the probability of outcomes. When forces disrupt that equilibrium, and the disruptions may be caused by the most improbable and unexpected things, all bets are off. Everything we learnt during the time when Knightian risk applied is no longer true, and Knightian uncertainty takes over.

Second, this points to the need for risk management philosophy (as applied in a business context) to consider not only the risks the system knows about and can observe, but also the risks the system does not even know exist. That is where good management practices - constantly reviewing positions, eliminating extreme concentrations (even when they appear to be value-creating concentrations), constantly questioning one's own thinking for cognitive biases - can lead to a set of guardrails that a business can stay within. These guardrails may be frowned upon, and may even invite derision from those interested in growing the business during good times, since their nature is always going to be to try to avoid too much of a good thing. However, it is important for the practitioners of risk management to stay firm in their convictions and make sure the appropriate guardrails are implemented.
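As a toy illustration of what one such guardrail might look like in code (the limit, position names and exposures are all invented for this sketch), a concentration check could flag any position whose share of the total book exceeds a preset cap:

```python
# Hypothetical concentration guardrail: flag positions above a maximum share of the book.
MAX_SHARE = 0.10  # assumed limit: no single position may exceed 10% of total exposure

positions = {"energy": 4_000_000, "financials": 1_500_000, "sovereigns": 900_000}  # invented
total = sum(positions.values())

for name, exposure in positions.items():
    share = exposure / total
    if share > MAX_SHARE:
        print(f"GUARDRAIL BREACH: {name} is {share:.0%} of the book (limit {MAX_SHARE:.0%})")
```

The value of a mechanical check like this is precisely that it does not care how attractive the concentrated position looked during the good times.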
