Thursday, August 26, 2010

The Judgment Deficit - a real-worldliness deficit

I usually don't use my blog to take on or pick apart published pieces - my aim with the blog is to bring a diversity of ideas and viewpoints to the reader. There is plenty of intelligent writing on the Web that is thought-provoking and worth bringing to the attention of readers interested in the general ideas of statistics and machine learning. But I came across an article recently that - I have to admit - caused a fair amount of angst and therefore an urge to act. This was "The Judgment Deficit" by Amar Bhide, a professor of finance at Tufts University.

The journal article from HBS talks about how machines or computers can make decisions in certain types of situations and where human judgment needs to come in. Fair enough. The article then bemoans the recent Great Recession and lays part of the blame on the statistical models used in finance. Specifically, the author says:
"In recent times, though, a new form of centralized control has taken root: mechanistic decision making based on top-down statistical models and algorithms. This has been especially true in finance, where risk models have replaced the judgments of thousands of individual bankers and investors, to disastrous effect."

This kind of thinking is not only delusional but also dangerous. (Another part of the article that didn't get me singing from the rooftops was the lengthy encomium heaped on the economics of Friedrich Hayek, the libertarian economist and one of the leading figures of the famous Austrian school of economics. I am still not clear how that is related to the topic at hand.)

The fundamental reason banks took the risks they did was that there were incentives to do so and not enough appreciation of the downside. Bankers thought the spiral of rising home prices - combined with the ability to take assets off the balance sheet and maintain minimal capital reserves - was an unending one, and they either failed to spot the inevitable edge of the cliff or were too late to pull back once they spotted it. Also, the desire to keep these activities as unregulated as possible (to allow the free pursuit of profit, or to make 'markets efficient', as Wall Street would argue) let a number of opacities about risk develop in the system, leading to situations where high schools in Norway were exposed to the collapse of Bear Stearns.

So let's not put the blame on top-down statistical models and algorithms. If the alternative Bhide suggests - having manual underwriters make more of the decisions - had happened, I am not sure the conclusions reached by those underwriters would have been any different. Apart from a few economists, fund managers and people like Roubini and Taleb (who have made an image of themselves as Cassandras of doom and therefore have to keep saying such things to maintain that image), nobody - let me say that again - nobody saw this edifice collapsing. No one thought house prices in the US would ever come down. Everyone, human beings and computers alike, fell victim to the rear-view-mirror bias, i.e. expecting that the future would play out exactly as the past had.

So let's go a little bit easy on computers, statistical models and automated decision making.

Thursday, August 19, 2010

Part 2/3 of disaster estimation - Understanding the expected monetary loss

In part 1 and part 1b of this series, we reviewed some of the ways in which disaster estimation modelers go about estimating the probability of occurrence of a catastrophic event. The next phase is estimating the expected dollar losses when the catastrophe does take place: what would the impacts on the economic activity within a region be, and how widespread would they be?
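To make the idea of an expected dollar loss concrete, here is a minimal sketch of how an expected annual loss could be computed once the hazard model from part 1 has produced event probabilities. The event names, annual rates and loss figures below are made-up placeholders, not outputs of any real model.

```python
# Sketch: expected annual loss (EAL) from a hypothetical event catalog.
# Each event has an annual occurrence rate (from the hazard model in part 1)
# and an estimated dollar loss if it occurs (the subject of this post).
event_catalog = [
    {"name": "category_3_hurricane", "annual_rate": 0.020, "loss_usd": 2.0e9},
    {"name": "category_4_hurricane", "annual_rate": 0.005, "loss_usd": 8.0e9},
    {"name": "category_5_hurricane", "annual_rate": 0.001, "loss_usd": 2.5e10},
]

# Expected annual loss = sum over events of (rate x loss).
eal = sum(e["annual_rate"] * e["loss_usd"] for e in event_catalog)
print(f"Expected annual loss: ${eal / 1e6:.1f} million")
```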

This is where the 'sexiness' of model building meets the harsh realities of extensive ground work and data gathering. When a disaster occurs, the biggest disruptions are usually to life and property. Then there are additional, longer-term impacts on the economic activity of the region, driven directly by the damage to life and property and indirectly by the impacts on business continuity and, ultimately, by the confidence that consumers and tradespeople alike continue to have about doing business in the region. Let's examine this one piece at a time.
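Before going piece by piece, a back-of-the-envelope sketch of that overall breakdown may help. The loss categories and dollar figures below are invented purely for illustration.

```python
# Sketch: decomposing a single event's loss into direct and indirect pieces.
# All figures are illustrative placeholders.
direct_losses = {
    "residential_property": 1.2e9,
    "commercial_property": 0.8e9,
}
indirect_losses = {
    "business_interruption": 0.5e9,      # closed shops, broken supply chains
    "reduced_consumer_activity": 0.3e9,  # confidence effects in the following months
}

total_loss = sum(direct_losses.values()) + sum(indirect_losses.values())
print(f"Total modeled loss: ${total_loss / 1e9:.1f} billion")
```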

The disruption to property can be gauged by the number of dwellings or business properties that were built specifically to resist the type of disaster event in question. In the case of fires, it is the number of properties built to the appropriate building and fire-safety codes. This information requires some gathering but is publicly available from the property divisions of many counties. In the case of hurricanes, it can be the number of houses constructed after a certain year, when stricter building codes started to be enforced. This type of data gathering is extremely effort-intensive, but it is often the difference between a good approximate model and a really accurate model that can be used for insurance pricing decisions. In a competitive market like insurance, where many companies compete essentially on price, the ability to build accurate models is a powerful edge.
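As a rough illustration of how construction-era data might feed into the model, the toy vulnerability function below assigns a higher damage ratio to houses built before a hypothetical code-tightening year and scales it with wind intensity. The code year, wind thresholds and ratios are all invented for illustration.

```python
# Sketch: a toy vulnerability function for hurricane wind damage.
# Damage ratio = fraction of the property's replacement value that is lost.
# The 1995 code year, wind thresholds, and ratios are illustrative assumptions.
def damage_ratio(wind_speed_mph: float, year_built: int, code_year: int = 1995) -> float:
    base = 0.0
    if wind_speed_mph >= 155:
        base = 0.60
    elif wind_speed_mph >= 130:
        base = 0.35
    elif wind_speed_mph >= 110:
        base = 0.15
    elif wind_speed_mph >= 74:
        base = 0.05
    # Houses built under the stricter code are assumed to fare better.
    return base if year_built < code_year else base * 0.5

# Example: modeled loss for one insured property.
replacement_value = 350_000
loss = replacement_value * damage_ratio(wind_speed_mph=135, year_built=1988)
print(f"Modeled loss for this property: ${loss:,.0f}")
```

Multiplying a ratio like this by the replacement value of each insured property and summing across the portfolio gives the property-loss piece of the estimate.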

Damage to life often correlates very directly with the amount of property damage. Also, with early warning systems now in place ahead of disasters (except earthquakes, I suppose), it has become quite common for even really large disasters like hurricanes to result in no major loss of life. A significant exception was Hurricane Katrina, where more than a thousand people lost their lives in the Gulf Coast area, particularly in New Orleans.

In the next article in the series, I will provide an overview of the reinsurance market, which is where a lot of this probabilistic modeling ultimately gets applied.