The BP oil spill is already the biggest oil spill in US history and is on its way to becoming an unprecedented industrial disaster, given the environmental impact of millions of barrels of oil gushing into the Gulf of Mexico. Even the most hardened of carbon lovers cannot help but be moved at the sight of the fragile wildlife of the Gulf literally soaking in oil. The ecosystems of the Gulf states, already ravaged by unrestrained development and the odd super-cyclone, are now being dealt a death blow by the spewing gusher.
Could the specific chain of events leading up to this spill have been predicted? No. But that doesn't mean the outcome could not have been anticipated. Given the technological complexity that deep-sea oil drilling operations typically involve, there was always a measurable probability that one of the intermeshing systems and processes would give way and leave an oil well out of control. As Donald Rumsfeld, Secretary of Defense in the Bush II administration, put it: stuff happens. Where human science and industrial technology have failed abjectly is in underestimating the impact of this kind of event on a habitat, and in overestimating the power of technology to fix these kinds of problems.
Fundamentally, the science of estimating the impact of disasters can be broken down into three estimations:
one, the probability that a failure occurs;
two, the damage expected as a result of the failure; and
three, (probably a function of the second) our capability to fix the failure or mitigate its impact (see the sketch below for a rough way the three combine).
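For concreteness, here is a back-of-the-envelope composition of the three estimates in Python. Every number below is invented purely for illustration; the point is only how the three factors multiply into an expected loss.

```python
# A hypothetical composition of the three estimates.
# All figures below are made up for illustration only.
p_failure = 1e-4           # (1) probability the well fails in a given year
damage_if_failure = 20e9   # (2) expected damage in dollars, given a failure
fixable_fraction = 0.3     # (3) share of that damage we can fix or mitigate

expected_loss = p_failure * damage_if_failure * (1 - fixable_fraction)
print(f"expected annual loss: ${expected_loss:,.0f}")  # -> $1,400,000
```

Note that each factor can be wrong independently: we can get the first roughly right and still be wildly off on the second and third, which is exactly the failure mode this post is about.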
In this post, I will discuss the first part of the problem: estimating the probability that a failure occurs.
There is a thriving industry, and a branch of mathematics sometimes called disaster science, devoted to estimating these extremely low-probability events. The techniques that disaster scientists or statisticians use are grounded in an understanding of the specific industry (nuclear reactors, oil drilling, aerospace, rocket launches, etc.) and are constantly refreshed as our understanding of the underlying physics, and science in general, improves. The nuclear-power industry's approach analyzes the engineering of the plant and tabulates every possible series of unfortunate events that could lead to the release of dangerous radioactive material, including equipment failure, operator error and extreme weather. Statisticians estimate the probability of each disastrous scenario and add them together. Other industries, such as aviation, use more data-driven probability models, given the hundreds of thousands of data points available on a weekly basis. Then there are more purely probabilistic approaches, such as tail probability estimation or extreme-event estimation, which use the mathematics of heavy-tailed distributions to estimate the probability of such events occurring. Michael Lewis, in his inimitable style, wrote about this in an old New York Times article called In Nature's Casino.
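As one concrete illustration of the heavy-tailed approach, here is a minimal Python sketch of the Hill estimator, a standard extreme-value technique: estimate a tail index from the largest observations, then extrapolate the tail probability beyond the observed data. The function name, the choice of k, and the synthetic loss data are all mine, purely for illustration.

```python
import numpy as np

def hill_tail_estimate(data, k, threshold):
    """Estimate P(X > threshold) for a heavy-tailed sample using the
    Hill estimator fitted to the top-k order statistics."""
    x = np.sort(data)[::-1]                  # order statistics, descending
    top, x_k = x[:k], x[k]                   # k largest values; the (k+1)-th
    alpha = k / np.sum(np.log(top / x_k))    # Hill estimate of the tail index
    # Weissman-style extrapolation: P(X > t) ~ (k/n) * (t / x_k)^(-alpha)
    return (k / len(x)) * (threshold / x_k) ** (-alpha), alpha

# Synthetic Pareto losses with true tail index 1.5, so P(loss > 100) = 100**-1.5 ~ 1e-3
rng = np.random.default_rng(0)
losses = rng.pareto(1.5, size=100_000) + 1.0
p_tail, alpha = hill_tail_estimate(losses, k=500, threshold=100.0)
print(f"estimated tail index ~ {alpha:.2f}; P(loss > 100) ~ {p_tail:.1e}")
```

The practical catch, and the reason this is an industry rather than a formula, is that the answer is sensitive to the choice of k and to whether the tail really stays Pareto-like beyond the data you have.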
One variable that is a factor, and often the contributing factor, in many such disasters is human error. Human error is extraordinarily difficult to model from past behaviour alone, because a number of factors can confound such a read. For instance, as humans encounter fewer failures, our nature is to become less vigilant and therefore at greater risk of failing. Both lack of experience and too much experience (especially without having encountered failures) are risky. The quality of the human agent is another variable with wide variability. At one time, NASA attracted the brightest engineers and scientists from our best universities. Now the brightest and the best go to Wall Street or other private firms, and it is often the rejects, or the products of second-rung universities, who make it to NASA. This variable of human quality is difficult to quantify, or sometimes difficult to measure in a way that does not offend people on grounds like race, national origin, age and gender. Suppose the brightest and the best who once joined NASA came from colleges or universities whose admission standards required higher scores on standardized tests. We know that standardized test scores are correlated with the socio-economic levels of the test takers, and hence with variables such as income, race, etc. So if NASA now recruits from lower-rung colleges, does that mean it was more exclusive and discriminatory before (by taking in people with higher average scores) and is more inclusive now? And can we conclude that the drop in quality is a direct function of becoming more inclusive on the admission-criteria front? It is never easy to answer these questions, or even to tackle them without feeling queasy about what one is likely to find.
Another variable, again related to the human factor, is the way we interact with technology. Is the human agent at ease with the technology confronting him, or does he feel pressured and unsure from a decision-making standpoint? I have driven stick-shift cars before, and I was more comfortable and at ease with the decision making around gear changes when the car-human interface was relatively simple and spartan. In my most recent car, as I interact with multiple technology features (the nav system, the Bluetooth-enabled radio, the steering wheel, the paddle shifter, the engine-rev indicator), I find my attention diluted, and the decision making around gear changes is not as precise as it used to be.
2 comments:
That is exactly what was written in the Washington Post article of 6/18/2010. BP has hired some of the best brains and top engineers in the world, who are unable to cap the gushing oil well. What chance do all those people who get hired into government regulatory agencies like the EPA these days from second-rung colleges have of being able to detect, foresee and solve this problem? Something to definitely focus on.
Good point. As the competence of the people who interact with the system (whether in a regulatory capacity or as actual operators) rises or falls, the probability of human-engineered failures changes with it.
To take another example, banking was a boring profession in the 50s and 60s; the brightest never went into banking. That is a very different dynamic from today, where the smartest set goes into banking and applies its mind to gaming the system.