Saturday, July 17, 2010

Interesting links from Jul 17, 2010

1. The overstated role of banking in the larger economy (Link here)

2. A very interesting article on the original monetary expansionist, John Law (Link here)

3. My latest area of passion, text mining and analytics. A blog entry from SAS. (Link here)

4. Commentary from Prof. Rajan on income inequality in the US and how it inevitably leads to a crisis. His analysis of how income inequality forces asset-price inflation is fascinating (Link here)

Tuesday, July 13, 2010

Disaster estimations - Part 1b/3: Understanding the probability of disaster

Part 1 of my post on modeling catastrophic risk covered measuring the probability that a risk event can occur. This probability can be derived from empirical evidence as well as from computer models of the destructive forces of nature. A good example of how such a model is built and used comes from Karen Clark, a renowned catastrophe risk modeler and insurer. Her paper was seminal when it came out because it outlined a scientific method by which such risks could be estimated. It is titled "A formal approach to catastrophe risk assessment and management" and the link is here.
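
To make the mechanics concrete, here is a minimal, purely illustrative sketch of the empirical piece of such a model: an annual event rate estimated from a historical record, combined with a simulated severity per event, yields an exceedance-probability curve. Every number below (years of record, event count, severity parameters, thresholds) is a placeholder of my own, not a figure from Clark's paper.

# Toy catastrophe-model sketch: Poisson event frequency from a historical
# record, a placeholder lognormal severity per event, and an
# exceedance-probability curve from simulated annual losses.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical record (placeholder values, not real landfall data).
years_of_record = 100
observed_events = 65
annual_rate = observed_events / years_of_record  # empirical Poisson rate

# Placeholder severity model: lognormal loss per event, in $ millions.
severity_mu, severity_sigma = 3.0, 1.2

def simulate_annual_loss(n_years):
    """Simulate total insured loss for each of n_years."""
    event_counts = rng.poisson(annual_rate, size=n_years)
    losses = np.zeros(n_years)
    for i, k in enumerate(event_counts):
        if k > 0:
            losses[i] = rng.lognormal(severity_mu, severity_sigma, size=k).sum()
    return losses

annual_losses = simulate_annual_loss(100_000)

# Exceedance probabilities at a few loss thresholds.
for threshold in (100, 250, 500):
    p = (annual_losses > threshold).mean()
    print(f"P(annual loss > ${threshold}M) ~ {p:.3f}")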

The paper outlines an approach to estimating losses from hurricanes striking the US Gulf Coast and East Coast. The model includes a probability assessment for hurricanes making landfall, developed using historical information (going back to about 1910) from the US Weather Service. While this is a great starting point and gets us to a reasonable range of expected losses, and therefore the premiums one should charge, there are important places where the model can be improved. One example is the cyclical nature of hurricane activity over the last 100 years. Between 1950 and 1994, Atlantic hurricanes ran through a benign cycle. Since 1994, however, hurricane activity and intensity (as measured by the number of named storms and the number of major hurricanes, respectively) have increased. So a model relying on activity from the 1950-1994 period is likely to be off in its loss estimates by more than 20%. See the table for what I am talking about.
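
A rough back-of-the-envelope illustration of this baseline problem, using placeholder counts rather than actual NOAA records: if expected loss scales roughly with event frequency, a rate fitted to the benign period understates losses once the climate shifts into a more active cycle.

# Placeholder counts of major Atlantic hurricanes per decade (illustrative only).
benign_period_counts = [24, 22, 25, 23, 21]   # stands in for 1950-1994 activity
active_period_counts = [30, 28]               # stands in for post-1994 activity

benign_rate = sum(benign_period_counts) / (len(benign_period_counts) * 10)
active_rate = sum(active_period_counts) / (len(active_period_counts) * 10)

# If expected loss scales roughly with event frequency, a benign-period model
# understates losses by about this factor.
understatement = active_rate / benign_rate - 1
print(f"Benign-period rate:  {benign_rate:.2f} major hurricanes/yr")
print(f"Active-period rate:  {active_rate:.2f} major hurricanes/yr")
print(f"Loss understatement: ~{understatement:.0%}")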

How can a modeler correct for such errors? One way is to use the latest scientific modeling to estimate the probabilities. Advances in the scientific understanding of phenomena such as hurricanes mean it is now possible to build computer models that replicate the physics driving them. These dynamic physical models incorporate more recent understanding of world climatology, such as the link between sea surface temperatures (SSTs) and hurricane intensity. Using such models, researchers have been able to replicate the increase in hurricane intensity seen over the last fifteen years in a way that the empirical models built before this period have not. The popular science book on global warming, Storm World by Chris Mooney, spells out these two approaches to estimating hurricane intensity and the conflicts between the chief protagonists of each. Based on the recent evidence at least, the physics-based approach appears to be tracking the rapid changes in hurricane intensity more closely. William Gray of Colorado State University, whose annual hurricane forecasts had been lucky for many years, has been forced to re-fit his empirical model to the rapid increase in hurricane intensity after 1995.
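
For contrast with the physics-based models, here is a toy version of the purely statistical approach, assuming a single SST-anomaly predictor and made-up training seasons. It is not Gray's actual scheme, just an illustration of why such fits struggle once conditions move outside what they were trained on.

# Toy statistical seasonal forecast: regress named-storm counts on an SST
# anomaly predictor. All arrays are illustrative placeholders.
import numpy as np

sst_anomaly = np.array([-0.3, -0.1, 0.0, 0.2, 0.4, 0.5, 0.7])  # deg C
storm_count = np.array([8, 9, 11, 12, 14, 16, 18])

# Ordinary least-squares fit: count ~ a + b * SST anomaly.
b, a = np.polyfit(sst_anomaly, storm_count, deg=1)

# The statistical model extrapolates poorly when the atmosphere behaves in
# ways outside the fitted range -- the weakness noted for such techniques.
for anomaly in (0.3, 0.9):
    print(f"SST anomaly {anomaly:+.1f} C -> predicted storms: {a + b * anomaly:.1f}")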

Finally, I leave you with another note about how some of the dynamic physical models work. This is from one of my favourite blogs, Jeff Masters' tropical weather blog. The latest entry talks precisely about such a dynamic physical model built by the UK Met Office. And I quote:

it is based on a promising new method--running a dynamical computer model of the global atmosphere-ocean system. The CSU forecast from Phil Klotzbach is based on statistical patterns of hurricane activity observed from past years. These statistical techniques do not work very well when the atmosphere behaves in ways it has not behaved in the past. The UK Met Office forecast avoids this problem by using a global computer forecast model--the GloSea model (short for GLObal SEAsonal model). GloSea is based on the HadGEM3 model--one of the leading climate models used to formulate the influential UN Intergovernmental Panel on Climate Change (IPCC) report. GloSea subdivides the atmosphere into a 3-dimensional grid 0.86° in longitude, 0.56° in latitude (about 62 km), and up to 85 levels in the vertical. This atmospheric model is coupled to an ocean model of even higher resolution. The initial state of the atmosphere and ocean as of June 1, 2010 were fed into the model, and the mathematical equations governing the motions of the atmosphere and ocean were solved at each grid point every few minutes, progressing out in time until the end of November (yes, this takes a colossal amount of computer power!) It's well-known that slight errors in specifying the initial state of the atmosphere can cause large errors in the forecast. This "sensitivity to initial conditions" is taken into account by making many model runs, each with a slight variation in the starting conditions which reflect the uncertainty in the initial state. This generates an "ensemble" of forecasts and the final forecast is created by analyzing all the member forecasts of this ensemble. Forty-two ensemble members were generated for this year's UK Met Office forecast. The researchers counted how many tropical storms formed during the six months the model ran to arrive at their forecast of twenty named storms for the remainder of this hurricane season. Of course, the exact timing and location of these twenty storms are bound to differ from what the model predicts, since one cannot make accurate forecasts of this nature so far in advance.

The grid used by GloSea is fine enough to see hurricanes form, but is too coarse to properly handle important features of these storms. This lack of resolution results in the model not generating the right number of storms. This discrepancy is corrected by looking back at time for the years 1989-2002, and coming up with correction factors (i.e., "fudge" factors) that give a reasonable forecast.
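
The ensemble-and-correction workflow described above can be sketched in a few lines, under my own assumptions about the numbers involved; the member counts and the correction factor below are placeholders, not GloSea output.

# Sketch of ensemble forecasting with a hindcast-derived correction factor:
# average the storm counts from many perturbed model runs, then scale by a
# factor that compensates for the model's coarse resolution.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder raw storm counts from 42 ensemble members.
ensemble_counts = rng.poisson(lam=14, size=42)

# Placeholder correction factor from hindcast years (e.g., 1989-2002):
# ratio of observed storms to model-generated storms over those years.
correction_factor = 1.4

raw_mean = ensemble_counts.mean()
corrected_forecast = raw_mean * correction_factor

print(f"Raw ensemble mean:       {raw_mean:.1f} storms")
print(f"Bias-corrected forecast: {corrected_forecast:.1f} storms")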

If you go to the web page of the UK Met Office hurricane forecast, you can find a link of interest to reinsurance companies. The link is for purchasing the hurricane forecast, which the UK Met Office has obviously gone to great pains to develop. Their brochure on how the insurance industry could benefit from this research also makes for very interesting reading.
