Friday, December 24, 2010

Data mining and the inevitable conflict with privacy issues

The explosion in the availability of data in the past decade, and the parallel explosion in analytical techniques to interpret this data and find patterns in it, have been a huge benefit for businesses, governments and individual customers alike. Amongst businesses, companies like Amazon.com, Harrah's, Target, Netflix and FedEx have made the analysis of large data sets their business model. They have come up with increasingly sophisticated and intricate ways of capturing data about customer behaviour and offering targeted products based on that behaviour.

Big government has been somewhat late to the game but is making big strides in the field of data mining. Increasingly, areas like law enforcement, counter-terrorism, anti-money laundering and the IRS have leveraged cutting-edge techniques to become more effective at what they do: detecting the needle of criminal activity in the haystack of normal, law-abiding activity, and taking the appropriate preventive or punitive action.

But as the saying goes, there are two sides to every coin. While the explosion of data and its analysis has mostly been driven by good intentions, the consequences of some of this work are beginning to look increasingly murky. For example, if an individual's emails are monitored to identify money-laundering trails, where is the bright line between legitimate monitoring of criminal activity and unwanted intrusion into the activities of law-abiding citizens? The defense from those who do the monitoring has always been that only suspicious activities are targeted - and also that they use sophisticated analytics to model these criminal activities. But as any model builder worth his salt knows, an effective model is one that maximizes true positives AND minimizes false positives. The false positives in this case are people who display similar so-called 'suspicious' behaviour but turn out to be innocent. How then can one build an effective model by being very exclusive in the data points used to build it (i.e. by only including behaviour that is understood to be suspicious)? In order to truly understand the false positives and attempt to reduce them, one HAS to include points in the model-build sample that are very likely to be false positives. And therein lies the paradox. To build a really good predictive system, the sample needs to be randomized to include good and bad outcomes, highly suspicious and borderline-innocent behaviours alike.
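
A quick way to make this concrete is a minimal sketch with simulated, made-up data: a 'suspicion score' applied to a population that is overwhelmingly innocent. Only because the innocent majority is in the sample can the false-positive cost be measured at all.

```python
# Minimal sketch (simulated data): why the sample must include the innocent
# majority if false positives are to be measured at all.
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
is_criminal = rng.random(n) < 0.01                     # 1% truly criminal

# A "suspicion score" that is higher for criminals but overlaps with innocents
score = rng.normal(loc=np.where(is_criminal, 2.0, 0.0), scale=1.0)
flagged = score > 2.5                                  # the monitoring threshold

true_pos = np.sum(flagged & is_criminal)
false_pos = np.sum(flagged & ~is_criminal)

print(f"true positive rate : {true_pos / is_criminal.sum():.1%}")
print(f"false positive rate: {false_pos / (~is_criminal).sum():.2%}")
# Even a sub-1% false positive rate, applied to ~99,000 innocent people,
# flags more innocents than criminals.
print(f"innocents flagged per criminal caught: {false_pos / max(true_pos, 1):.1f}")
```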

I want to share two different perspectives on this issue. The first is a piece from the MIT Technology Review that extols the virtues of data-driven law enforcement as practiced by the police department of Memphis, TN. The link is here. An excerpt from the article:
The predictive software, which is called Blue CRUSH (for "criminal reduction utilizing statistical history"), works by crunching crime and arrest data, then combining it with weather forecasts, economic indicators, and information on events such as paydays and concerts. The result is a series of crime patterns that indicate when and where trouble may be on the way. "It opens your eyes within the precinct," says Godwin. "You can literally know where to put officers on a street in a given time." The city's crime rate has dropped 30 percent since the department began using the software in 2005.

Memphis is one of a small but growing number of U.S. and U.K. police units that are turning to crime analytics software from IBM, SAS Institute, and other vendors. So far, they are reporting similar results. In Richmond, Virginia, the homicide rate dropped 32 percent in one year after the city installed its software in 2006.

Now read this other piece, painting a slightly different picture of what is going on.
Suspicious Activity Report N03821 says a local law enforcement officer observed "a suspicious subject . . . taking photographs of the Orange County Sheriff Department Fire Boat and the Balboa Ferry with a cellular phone camera." ... noted that the subject next made a phone call, walked to his car and returned five minutes later to take more pictures. He was then met by another person, both of whom stood and "observed the boat traffic in the harbor." Next another adult with two small children joined them, and then they all boarded the ferry and crossed the channel.

All of this information was forwarded to the Los Angeles fusion center for further investigation after the local officer ran information about the vehicle and its owner through several crime databases and found nothing ... there are several paths a suspicious activity report can take:
The FBI could collect more information, find no connection to terrorism and mark the file closed, though leaving it in the database.
It could find a possible connection and turn it into a full-fledged case.
Or, as most often happens, it could make no specific determination, which would mean that Suspicious Activity Report N03821 would sit in limbo for as long as five years, during which time many other pieces of information could be added to the file ... employment, financial and residential histories; multiple phone numbers; audio files; video from the dashboard-mounted camera in the police cruiser at the harbor where he took pictures; anything else in government or commercial databases "that adds value".

This is from an insightful piece in the Washington Post titled "Monitoring America". The Post article goes on to describe the very same Memphis PD and asks some pointed questions about the data-gathering techniques used.


This is where the whole concept of capturing information at the individual level and using it for specific targeting enters unstable ground - when it takes place in an intrusive manner and without due consent from the individuals. When organizations do it, it can be downright irritating and borderline creepy. When governments do it, it reminds one of George Orwell's "Big Brother". It will be interesting to see how the field of predictive analytics survives the privacy backlash that is just beginning.

Saturday, December 18, 2010

Visualization of the data and animation - part II

I had written a piece earlier about Hans Rosling's animation of country-level data using the Gapminder tool. Here are some more extremely cool examples of data animation.

At the start of this series is more animation from the Joy Of Stats program that Rosling hosted on the BBC. The landing page links to a clip showing crime data plotted for downtown San Francisco, and how this visual overlay on the city topography provides valuable insight into where one might expect to find crime. This is a valuable tool for police departments (to try and prevent crime that is local to an area and has some element of predictability), for residents (to research neighbourhoods before they buy property, for example) and for tourists (who might want to double-check a part of the city before deciding on a really attractive Priceline.com hotel deal). The researchers who created this tool, which maps the crime data onto the city's geography, talk in the clip about how tools such as this can be used to improve citizen power and government accountability. Another good example of crime data, this time reported by police departments across the US, can be found here. Finally, towards the end of the clip, the researchers mention what could be the Holy Grail of this kind of visualization: real-time data posted on social media and networking sites like Facebook and Twitter (geo-tagged, perhaps) providing a live feed into these maps. This would certainly have been in the realm of science fiction only a few years back, but suddenly it doesn't seem so impossible.
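
As a toy illustration of the kind of crime-density overlay described above (synthetic coordinates, not real San Francisco data), a simple two-dimensional density plot already captures the idea:

```python
# Toy illustration (synthetic coordinates): a density overlay of geo-tagged
# incidents, in the spirit of the crime-mapping clip described above.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Pretend geo-tagged incidents: two hot spots plus diffuse background noise
hotspot_a = rng.normal([-122.41, 37.78], 0.005, size=(400, 2))
hotspot_b = rng.normal([-122.42, 37.76], 0.008, size=(250, 2))
background = rng.uniform([-122.45, 37.74], [-122.39, 37.80], size=(350, 2))
incidents = np.vstack([hotspot_a, hotspot_b, background])

# A hexagonal 2-D histogram is the simplest "heat map" of where incidents cluster
plt.hexbin(incidents[:, 0], incidents[:, 1], gridsize=40, cmap="Reds")
plt.colorbar(label="incident count")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Synthetic incident density (illustration only)")
plt.show()
```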

The San Francisco crime-mapping link has a few other really impressive videos as you scroll further down. I really like the one on Florence Nightingale, whose graphs during the Crimean War revealed important insights into how injuries and deaths were occurring in hospitals. It is interesting to learn that the Lady with the Lamp was not just renowned for tending to the sick, but was also a keen student of statistics. Her graphs separating deaths that were accidental, those caused by war wounds, and those that were preventable (caused by the poor hygiene that was quite prevalent at the time) created a very powerful image of the high incidence of preventable deaths and the need to address this area with the right focus.

Why are visualization and animation of data so helpful, and such critical tools in the arsenal of any serious data scientist? For a few reasons.

For one, they help tell a story far better than equations or tables of data do. That is essential for conveying the message to people who are not experts able to read insight straight out of the tables, but who are nevertheless important influencers and stakeholders and need to be educated on the subject. Think of how an advertisement (either a picture or a moving image) is more powerful at conveying the strength of a brand than boring old text.
The other reason, in my opinion, is that graphical depiction and visualization of data allow the human brain (which is far better than any computer at pattern recognition) to take over the part of data analysis it is really good at and computers generally are not: forming hypotheses on the fly about the data being displayed, reaching conclusions based on visual patterns, and hooking into remote memory banks within our brains to form linkages. While machine learning and AI are admirable goals, there is still some way to go before computers can match the sheer ingenuity and flexibility of thought that the human brain possesses.

Sunday, December 12, 2010

Thinking statistically – and why that’s so difficult

I came across this piece from a few months back by the Wired magazine writer Clive Thompson on “Why we should learn the language of data”. The article is one of a stream of recent pieces in the popular media about how data-driven applications are changing our world. The New York Times has had quite a few articles on this topic recently.

Clive Thompson calls out how the language of data and statistics is going to be transformational for the world going forward, and how it needs to be a core part of general education. Thompson also calls out why thinking about data trends or statistics is hard. It is hard because it is not something that the intuitive wiring in the human brain readily recognizes or appreciates. The human psyche, with its fight-or-flight instincts, reacts well to big, dramatic events and badly to subtle trends. We are fundamentally not good at a number of things that good decision making calls for, such as being open to both supporting and refuting evidence, not confusing correlation with causality, factoring in uncertainty, and estimating rare events.
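
A classic worked example of the 'estimating rare events' blind spot, with made-up numbers:

```python
# Worked example (made-up numbers): a test that is "99% accurate" for a
# condition affecting 1 in 1,000 people.
p_condition = 0.001            # prevalence of the rare event
p_pos_given_cond = 0.99        # sensitivity
p_pos_given_no_cond = 0.01     # false positive rate

# Bayes' rule: P(condition | positive test)
p_pos = (p_pos_given_cond * p_condition
         + p_pos_given_no_cond * (1 - p_condition))
p_cond_given_pos = p_pos_given_cond * p_condition / p_pos

print(f"P(condition | positive) = {p_cond_given_pos:.1%}")   # roughly 9%
# Intuition says "about 99%"; the base rate says otherwise. This is exactly
# the kind of statistical blind spot the article is describing.
```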

Most of the applications where data-driven insight has changed the world in any meaningful way have been driven by private enterprise. These changes have also been somewhat incremental in nature. Of course, they have allowed companies to recommend movies to interested subscribers, position goods in stores more effectively, distribute at lower cost, price tickets to ensure maximum returns and so on. In other words, these changes may have been game-changing for specific industries but not necessarily for the human race at large.

Numbers can have greater power than just impacting a few industries at a time, one would think. Given the sheer amount of data being produced in the world today and the rate at which both computing power and bandwidth continue to grow, we ought to have seen a much more wide-ranging impact from data-driven analysis. We should have been firmly down the road to progress on combating global warming and diseases like heart disease, diabetes and cancer. Government agencies, which are a really big part of the modern economy, have not been as successful at driving this form of data-driven innovation. Why is that?

This probably has to do with a fundamental lack of understanding of numbers and statistics amongst the population at large. The places in the world where a lot of the data gathering and processing is happening, i.e. the Western world, are also the places where an education in science and math is somewhat undervalued relative to studies like liberal arts, media, legal studies, etc. That is where the emerging economies of the world have an edge. The study of math, science and engineering has always been appropriately valued in countries like India, China and the other emerging Asian giants. Now, as these countries also begin to generate, process and store data, their math- and science-educated talent will be champing at the bit to get into the data and harness its potential. Data has rightly been called another factor of production, like labour, capital and land. It is an irony of the world today that those who have data within easy reach are less inclined to use it.

Friday, December 10, 2010

Swarm Intelligence, Ant Colony Optimizations – advances in analytic computing

Advances in computing have led to some new and interesting developments in modeling techniques. This post is going to give some examples of these techniques. But before that, a small primer on basic modeling. Most of the more commonly used models are generalized linear models. As the name suggests, these models try to establish a more-or-less linear relationship between what is being predicted and the inputs. Ultimately the model-fitting problem is an optimization problem – an attempt to use a generalized curve to represent the data while minimizing the gap between the actual data and the approximate representation of it produced by the model.
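
As a toy illustration of that point (simulated data, nothing specific to any one package), here is an ordinary least-squares fit expressed as exactly that optimization:

```python
# Minimal sketch (simulated data): fitting a linear model as an optimization --
# choose coefficients that minimize the gap between the data and the fit.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=200)
y = 3.0 + 1.5 * x + rng.normal(0, 2, size=200)     # noisy "truth"

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: minimize the sum of squared residuals ||y - Xb||^2
beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"fitted intercept {beta[0]:.2f}, slope {beta[1]:.2f}")
print(f"residual sum of squares {rss[0]:.1f}")
```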

Optimization problems present themselves in a number of areas. One is model fitting, but other applications are in areas like planning and logistics – an example being the ever-popular traveling salesman problem. One of the more recent and interesting techniques for solving optimization problems is Ant Colony Optimization (ACO). It belongs to a broader family of AI/machine learning tools called swarm intelligence. Wikipedia defines swarm intelligence as follows:
Swarm intelligence (SI) is the collective behaviour of decentralized, self-organized systems, natural or artificial…. SI systems are typically made up of a population of simple agents or bodies interacting locally with one another and with their environment. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of "intelligent" global behavior, unknown to the individual agents.

The ACO algorithm tries to mimic the behaviour of ants in search of food. When ants forage, every ant involved moves out of the colony along a random path. When a food source is located, the ant retraces the scent trail of its own pheromones to bring the food back to the colony. Other ants then begin to use the trail left behind by the first ant to make further excursions to the food source and bring back food. And because the pheromone trail is a volatile chemical that evaporates over time, later ants tend to follow the more recent, fresher trails – which, logically speaking, should also be the shortest ones.
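
What follows is a minimal, illustrative sketch of the idea applied to a small random traveling-salesman instance - artificial ants build tours, and pheromone is evaporated and then reinforced in proportion to tour quality. The parameter values are arbitrary, not tuned recommendations.

```python
# Minimal, illustrative Ant Colony Optimization on a small random
# traveling-salesman instance. Parameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(3)
n_cities = 12
cities = rng.uniform(0, 100, size=(n_cities, 2))
dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)                 # no self-loops

pheromone = np.ones((n_cities, n_cities))
alpha, beta, rho = 1.0, 3.0, 0.5               # pheromone weight, distance weight, evaporation
n_ants, n_iter = 20, 100
best_len, best_tour = np.inf, None

for _ in range(n_iter):
    tours, lengths = [], []
    for _ in range(n_ants):
        tour = [int(rng.integers(n_cities))]
        while len(tour) < n_cities:
            current = tour[-1]
            # Desirability = pheromone^alpha * (1/distance)^beta
            weights = pheromone[current] ** alpha * (1.0 / dist[current]) ** beta
            weights[tour] = 0.0                # never revisit a city
            nxt = rng.choice(n_cities, p=weights / weights.sum())
            tour.append(int(nxt))
        length = sum(dist[tour[i], tour[(i + 1) % n_cities]] for i in range(n_cities))
        tours.append(tour)
        lengths.append(length)
        if length < best_len:
            best_len, best_tour = length, tour
    # Evaporate old trails, then let shorter tours deposit more pheromone
    pheromone *= (1.0 - rho)
    for tour, length in zip(tours, lengths):
        for i in range(n_cities):
            a, b = tour[i], tour[(i + 1) % n_cities]
            pheromone[a, b] += 1.0 / length
            pheromone[b, a] += 1.0 / length

print(f"best tour length found: {best_len:.1f}")
print("best tour:", best_tour)
```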

One of the more interesting business applications has indeed been in the area of material movement, i.e. logistics. The Italian pasta maker Barilla, as well as Migros, the Swiss supermarket chain, have been using these techniques to optimize their distribution networks and routes. A more technical paper about the technique is available here. A more layman-friendly treatment appeared recently in the Economist and was also an interesting read.

Tuesday, November 30, 2010

Animating the data and better "story telling"

One of the challenges with presenting any data mining or statistical analysis is that a lay audience is seldom excited by the same things as a more technical audience. A technical audience is as interested in how the answer was reached as in the answer itself. A non-technical consumer of the same information is probably interested in the implications of the answer as well as the answer itself, with a gut-check to make sure that the process wasn't totally crazy. In other words, they are looking for a story.

Recent trends around the pervasiveness of data and data-driven applications have meant that there is a greater ask of data scientists to tell a compelling "story" to support their analysis. Data scientists need ways of telling the story behind the data and behind the projections of the models built on that data – ways that generate insight, skip unnecessary detail and paint the various facets of the final solution, rather than just stating the final answer. Data animation and data visualization are some of the answers here.

I came across a couple of good examples of such animation recently. Hans Rosling made an interesting presentation about a tool called Gapminder at TED.com. The presentation is here. Gapminder is an organization that makes social, environmental and economic development data from all the countries of the world available and accessible to all, for free. The visualization tool at Gapminder, called Gapminder World, shows how this data can be animated and brought alive for the non-technical consumer in illuminating and exciting ways.

Rosling also appears in a recent promo for a BBC Four program called "The Joy Of Stats". (Link is here if the embed doesn't work.) One hopes that this program airs sometime in the US; it is due to air on Dec 7 and 8 in the UK. Any UK readers of the blog are encouraged to check out the program and share what they felt about it. The content of the program (link and timings here) sounds interesting enough for me to at least contemplate taking a flight to London and catching it on the Beeb.

Saturday, October 23, 2010

The quintessential Greek (financial) tragedy

For the last six months, the travails of highly indebted countries in the European Union, and Greece in particular, have been the source of considerable turmoil in the financial system. In addition to increasing the cost of borrowing for Greece and other countries in a similar situation, like Spain, Portugal, Ireland and Italy, the other impact has been to increase overall systemic risk and threaten to push the world economy back to its 2008 depths.


The Greece story is particularly fascinating. Unlike other countries where banks made ruinous bets and had their capital wiped out, impacting lending and slowing down economic activity, the banks had no role to play in Greece. Instead it was the systemic lack of fiscal discipline, the lack of enforcement of basic property and taxation principles, and a proliferation of special-interest-driven spending that have put Greece on a slippery slope to sovereign bankruptcy and default. The inimitable Michael Lewis has written a highly entertaining but also illuminating piece on how this came to happen. Link is here.


Now, what is very interesting and scary is that many of the ills mentioned here are present in many countries around the world. The aversion to taxes, the large-scale tax evasion, the rampant bribery and corruption in government circles - it all seems scarily familiar to people from India and other developing countries. So what do you think caused Greece to fail in such spectacular fashion? (Well, if it hasn't already failed, this article should convince you to "short" Greek debt.)

Tuesday, October 19, 2010

Sensational ... but true

My writings in this space are usually somewhat academic, and I try to keep an objective viewpoint whenever I stray from highly academic topics. A recent article from Rolling Stone magazine, though, caught my eye. The writer describes the systematic way in which Wall Street banks, touting their financial engineering expertise, have expanded their influence into small governments in America, peddling products that have invariably resulted in huge amounts of financial distress for these agencies. See link here.

The phenomenon of bankers behaving badly with small governments, or for that matter even with larger ones (state pension funds), is nothing new. The 1994 bankrupting of Orange County by Robert Citron (Citron? for Orange County?), the county treasurer, is a well-known story. See a really good article about this failure here. Another notable example from the same period is Procter & Gamble's dalliance with derivatives, which resulted in a lot of grief for themselves as well as for Bankers Trust, their investment banking advisers. The human psyche seems particularly frail and susceptible to smooth-talking operators, quoting interesting numbers, displaying other forms of spreadsheet gadgetry, and promising the moon in return for money. In addition to the banks' rapacity, the people at the customer end - who sought to invest in little-understood financial instruments, where the counterparty's risk is bounded but your own downside is unlimited - are as much to blame. Not for a lack of financial savvy, but for getting into a situation where such financial gimmickry needed to be resorted to in the first place.

The primary problem here is the particular weakness of small government bodies for reckless spending during good times. In an attempt to do something big and important for their constituents (ascribing the best motives), governmental bodies take on big projects when the economic cycle is positive and tax revenues are abundant. They take on big loans which need servicing even when things go bad - when tax revenues decline, or interest rates rise, or whatever. And then these agencies find themselves strapped for money, start to resort to financial gimmickry, and fall into the arms of the Wall Street firms.

It reminds me of the famous Roald Dahl story about the old man who has a priceless painting tattooed on his back, and who goes away with a smooth-talking stranger who promises to keep him happy for the rest of his life if only the old man will display the painting to the stranger's guests at his hotel. Needless to say, within a few weeks the painting appears, sans the old man, in a famous art gallery.

Tuesday, September 28, 2010

Facebook's "revenue model"

A really interesting article from Businessweek on the purported revenue model behind Facebook. One of the more insightful articles I have read recently.

Statistical models and ivory towers

Business analytics and the use of advanced statistics in business decision making are increasingly surging. Companies from WalMart to FedEx to Netflix have demonstrated how to build a sustainable business model on the foundations of good data and solid analysis of that data. To analyze the data, people with the right statistical background and training as well as the required business acumen are critical. In other words, smart people. And this is usually one of the barriers to organizations making the transition from the pre-analytics to the post-analytics world.

Every so often, the people who push analytics within an organization come at it from an angle of intellectual superiority. "I can do math better than you and therefore I am right and better than you" is the mindset that many such practitioners bring to the field. This often results in resistance, and sometimes downright hostility, from the rest of the organization to what the "statistical ones" are recommending. Statistical practitioners often end up plowing a lonely furrow in organizations. And then one day, when the implicit sponsorship that got them into that position goes away, the statistical modelers follow it out of the organization. The feeling when they leave is one of profound disappointment and disillusionment on the side of the modelers, and profound relief and some good old schadenfreude on the side of the old organization hands. How can this situation be averted? How can people who are obviously so intelligent and well-educated avoid making fools of themselves by failing to fit into an organization?

A few pieces of advice:
1. Be there to "solve the problem" vs "showcase your smarts"
It is important to keep in mind why organizations hire smart people. It is usually to solve some business problem or other, not because the organization suddenly discovered it needed show ponies to come out and parade their smarts. So the first piece of advice to the smart ones is to focus on fixing organizational challenges, i.e. focus on what they were hired for and build credibility. Once credibility is built up, it becomes infinitely easier to take on work that is more intellectually stimulating and challenging.

2. Simplify your communication around the solution
Smart people often have the ability to think really deeply about the work they are involved in. Deep thinking is indeed required to fix many of the more difficult problems that companies and society are faced with. However, deep thinking around communication is counter-productive. Human beings are simple creatures and usually favour clean narratives over complex ones. Keep the communication around the solution simple and crisp - you may need to give up some of the fancy footwork to get there, but the trade-off is usually worth it.

3. Be open to idea "give and take"
Finally, approach idea sharing with positive intent. Ideas usually get better when they are critiqued by other people: valuable perspectives come to light and weaknesses unrecognized by the idea's creator are called out. Smart people tend to have a bias towards "my way or the highway" thinking. This not only prevents ideas from realizing their full potential but also destroys the buy-in that is required from stakeholders. Buy-in is the oxygen that ideas need to survive and grow, and developing the political savvy to get that buy-in is always critical.

Thursday, August 26, 2010

The Judgment Deficit - a real-worldliness deficit

I usually don't use my blog to take on or pick apart published pieces - my aim with the blog is to bring a diversity of ideas and viewpoints to the reader. There is plenty of intelligent writing on the Web that is thought-provoking and worth bringing to the attention of readers interested in the general ideas of statistics and machine learning. But I came across a piece recently that - I have to admit - caused a fair amount of angst and therefore an urge to act. This was "The Judgment Deficit" by Amar Bhide, a professor of finance at Tufts University.

The article talks about how machines or computers can make decisions in certain types of situations while human judgment needs to come in elsewhere. Fair enough. The article then bemoans the recent Great Recession and lays part of the blame on the statistical models used in finance. Specifically, the author says:
In recent times, though, a new form of centralized control has taken root: mechanistic decision making based on top-down statistical models and algorithms. This has been especially true in finance, where risk models have replaced the judgments of thousands of individual bankers and investors, to disastrous effect.

This kind of thinking is not only delusional but also dangerous. (Another part of the article that didn't necessarily get me singing from the rooftops was the lengthy encomium heaped on the economics of Friedrich Hayek, the libertarian economist and a leading figure of the famous Austrian school of economics. I am still not clear how that is related to the topic at hand.)

The fundamental reason banks took the risks they took was that there were incentives to do so and not enough appreciation of the downside. Bankers thought the spiral of rising home prices, the ability to take assets off the balance sheet and maintain minimal capital reserves, was an unending one, and were either unable to spot the inevitable edge of the cliff or too late to pull back once they spotted it. The desire to keep these activities as unregulated as possible (to allow the free pursuit of profit, or to make 'markets efficient', as Wall Street would argue) also led to a number of opacities about risk developing in the system, which led to situations where high schools in Norway were exposed to the collapse of Bear Stearns. So let's not put the blame on top-down statistical models and algorithms. If the alternative Bhide suggests - having human underwriters take more of the decisions - had happened, I am not sure the conclusions reached by those underwriters would have been any different. Apart from a few economists, fund managers and people like Roubini and Taleb (who have made an image of themselves as Cassandras of doom and therefore have to keep saying such things to maintain that image), nobody - let me say that again - nobody saw this edifice collapsing. No one thought house prices in the US would ever come down. Everyone (human beings and computers alike) was a victim of rear-view-mirror bias, i.e. expecting that the future would play out exactly as the past.

So let's go a little bit easy on computers, statistical models and automated decision making.

Thursday, August 19, 2010

Part 2/3 of disaster estimation - Understanding the expected monetary loss

In part 1 and part 1b of this series, we reviewed some of the ways in which disaster-estimation modelers go about estimating the probability of occurrence of a catastrophic event. The next phase is estimating the expected dollar losses when the catastrophe does take place. What would be the impacts on economic activity within a region, and how widespread would those impacts be?

This is where the 'sexiness' of model-building techniques meets the harsh realities of extensive groundwork and data gathering. When a disaster does occur, the biggest disruptions are usually to life and property. Then there are additional, longer-term impacts on the economic activity of the region, driven directly by the damage to life and property and indirectly by the impacts on business continuity and ultimately by the confidence that consumers and tradespeople alike continue to have in doing business in the region. Let's examine this one piece at a time.

The disruption to property can be examined through the number of dwellings or business properties that are built specifically to resist the type of disaster event we are talking about. In the case of fires, it is the number of properties built to the right building and safety codes. This information requires some gathering but is publicly available from the property divisions of most counties. In the case of hurricanes, it can be the number of houses constructed after a certain year, when stricter building codes started to be enforced. This type of data gathering is extremely effort-intensive, but it is often the difference between a good approximate model and a really accurate model that can be used for insurance pricing decisions. In a competitive market like insurance, where many companies compete essentially on price, the ability to build accurate models is a powerful edge.
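
As a stylized illustration (all numbers invented), the core arithmetic of this phase is: event probability, times the number of exposed properties, times a damage ratio that depends on construction quality:

```python
# Stylized sketch (all numbers invented): expected annual loss for a region
# as event probability x exposed properties x value-weighted damage ratio.
n_properties = 50_000
avg_insured_value = 250_000            # dollars per property

share_built_to_code = 0.6              # e.g. from county property records
damage_ratio_to_code = 0.10            # fraction of value lost if built to code
damage_ratio_older = 0.35              # fraction of value lost otherwise

p_event_per_year = 0.02                # e.g. a 1-in-50-year hurricane landfall

expected_damage_per_property = (
    share_built_to_code * damage_ratio_to_code
    + (1 - share_built_to_code) * damage_ratio_older
) * avg_insured_value

expected_annual_loss = p_event_per_year * n_properties * expected_damage_per_property
print(f"expected annual loss: ${expected_annual_loss:,.0f}")    # $50 million here
```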

The damage to life often correlates very directly with the amount of property damage. Also, with early warning systems in place ahead of disasters (except earthquakes, I suppose), it has become quite common for really large disasters like hurricanes not to result in any major loss of life. One significant exception was Hurricane Katrina, where more than a thousand people lost their lives in the Gulf Coast area and particularly in New Orleans.

In the next article in the series, I will provide an overview of the reinsurance market, which is where a lot of this probabilistic modeling ultimately gets applied.

Saturday, July 17, 2010

Interesting links from Jul 17, 2010

1. The over-stated role of banking in the larger economy (Link here)

2. A very interesting article on the original monetary expansionist, John Law (Link here)

3. My latest area of passion, text mining and analytics. A blog entry from SAS. (Link here)

4. Commentary from Prof. Rajan on US income inequality and how it inevitably leads to a crisis. His analysis of how income inequality forces asset-price inflation is fascinating (Link here)

Tuesday, July 13, 2010

Disaster estimations - Part 1b/3 Understanding the probability of disaster

Part 1 of my post on modeling catastrophic risk covered measuring the probability that a risk event can occur. This probability can be derived from empirical evidence as well as from computer models of the destructive forces of nature. A good example of how such a model is built and used is outlined in this paper by Karen Clark, a renowned catastrophic risk modeler and insurer. The paper was a seminal one when it came out, as it outlined a scientific method by which such risks could be estimated. It is titled "A formal approach to catastrophe risk assessment and management" and the link is here.

The paper outlines an approach to estimating losses from hurricanes striking the US Gulf Coast and East Coast. The model contains a probability assessment for hurricanes making landfall, developed using historical information (going back to about 1910) from the US Weather Service. While this is a great starting point and yields a good estimate of at least the range of losses one can expect, and therefore of the insurance premiums one should charge, there are important places where the model can be improved. One example is the cyclical nature of hurricane intensity over the last 100 years. Between 1950 and 1994, Atlantic hurricanes ran through a benign cycle. Hurricane activity and intensity (as measured by the number of named storms and the number of major hurricanes, respectively) have shown an increase since 1994, though. So a model relying on activity from the 1950-1994 period is likely to be off in its loss estimates by more than 20%. See the table for what I am talking about.
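
A minimal sketch of the calibration issue, using made-up annual landfall counts rather than the real US Weather Service record:

```python
# Minimal sketch (made-up counts): estimating annual landfall probability from
# a historical record, and the bias from calibrating on a benign cycle only.
import numpy as np

rng = np.random.default_rng(4)
active_years = rng.poisson(0.7, size=55)    # stand-in for the more active years
benign_years = rng.poisson(0.4, size=45)    # stand-in for a 1950-1994-style lull

full_record = np.concatenate([active_years, benign_years])

rate_full = full_record.mean()              # landfalls per year, full record
rate_benign = benign_years.mean()           # landfalls per year, lull only

# Probability of at least one landfall in a year, assuming a Poisson process
p_full = 1 - np.exp(-rate_full)
p_benign = 1 - np.exp(-rate_benign)

print(f"annual landfall probability, full record      : {p_full:.1%}")
print(f"annual landfall probability, benign years only: {p_benign:.1%}")
# A model calibrated only on the quiet cycle understates the risk -- and hence
# the premiums -- once an active period returns.
```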

How can a modeler correct for such errors? One way is to use the latest in science and modeling when estimating the probabilities. Developments in the scientific understanding of phenomena such as hurricanes mean that it is now possible to build computer models that replicate the physics behind the storms. These dynamic physical models incorporate some of the more recent understanding of world climatology, such as the link between sea surface temperatures (SSTs) and hurricane intensity. Using some of these models, researchers have been able to replicate the increase in hurricane intensity seen in the last fifteen years in a way that the empirical models built prior to this period have not. The popular science book about global warming, Storm World by Chris Mooney, spells out these two different approaches to hurricane intensity estimation, and the conflicts between the chief protagonists of each approach. Based on recent evidence at least, the more physics-based approach certainly appears to be tracking closer to the rapid changes in hurricane intensity. William Gray of Colorado State University, whose annual hurricane forecast has been lucky for many years, has been forced to re-fit his empirical model to the rapid increase in hurricane intensity post-1995.

Finally, I leave you with another note about how some of these dynamic physical models work. This is from one of my favourite blogs, Jeff Masters' tropical weather blog. The latest entry talks about precisely such a dynamic physical model built by the UK Met Office. And I quote:

it is based on a promising new method--running a dynamical computer model of the global atmosphere-ocean system. The CSU forecast from Phil Klotzbach is based on statistical patterns of hurricane activity observed from past years. These statistical techniques do not work very well when the atmosphere behaves in ways it has not behaved in the past. The UK Met Office forecast avoids this problem by using a global computer forecast model--the GloSea model (short for GLObal SEAsonal model). GloSea is based on the HadGEM3 model--one of the leading climate models used to formulate the influential UN Intergovernmental Panel on Climate Change (IPCC) report. GloSea subdivides the atmosphere into a 3-dimensional grid 0.86° in longitude, 0.56° in latitude (about 62 km), and up to 85 levels in the vertical. This atmospheric model is coupled to an ocean model of even higher resolution. The initial state of the atmosphere and ocean as of June 1, 2010 were fed into the model, and the mathematical equations governing the motions of the atmosphere and ocean were solved at each grid point every few minutes, progressing out in time until the end of November (yes, this takes a colossal amount of computer power!) It's well-known that slight errors in specifying the initial state of the atmosphere can cause large errors in the forecast. This "sensitivity to initial conditions" is taken into account by making many model runs, each with a slight variation in the starting conditions which reflect the uncertainty in the initial state. This generates an "ensemble" of forecasts and the final forecast is created by analyzing all the member forecasts of this ensemble. Forty-two ensemble members were generated for this year's UK Met Office forecast. The researchers counted how many tropical storms formed during the six months the model ran to arrive at their forecast of twenty named storms for the remainder of this hurricane season. Of course, the exact timing and location of these twenty storms are bound to differ from what the model predicts, since one cannot make accurate forecasts of this nature so far in advance.

The grid used by GloSea is fine enough to see hurricanes form, but is too coarse to properly handle important features of these storms. This lack of resolution results in the model not generating the right number of storms. This discrepancy is corrected by looking back at time for the years 1989-2002, and coming up with correction factors (i.e., "fudge" factors) that give a reasonable forecast.
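
A minimal sketch of that kind of hindcast correction, with invented storm counts standing in for the real 1989-2002 record:

```python
# Minimal sketch of a hindcast "fudge factor": scale the raw model storm count
# by the ratio of observed to modeled storms over a reference period.
# (Counts below are invented, standing in for the real 1989-2002 record.)
observed_1989_2002 = [8, 7, 8, 11, 9, 7, 19, 13, 8, 14, 12, 15, 15, 12]
modeled_1989_2002 = [6, 5, 7, 8, 7, 6, 13, 10, 6, 10, 9, 11, 12, 9]

correction = sum(observed_1989_2002) / sum(modeled_1989_2002)

raw_model_forecast = 15                         # storms produced by the ensemble
corrected_forecast = raw_model_forecast * correction

print(f"correction factor : {correction:.2f}")
print(f"corrected forecast: {corrected_forecast:.0f} named storms")
```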

If you go to the web page of the UK Met Office hurricane forecast, you can find a link of interest to reinsurance companies. The link is to buy the hurricane forecast which the UK Met Office has obviously gone to great pains to develop. Their brochure on how the insurance industry could benefit from this research makes for very interesting reading as well.

Tuesday, June 15, 2010

The BP oil spill and the disaster estimations - Part 1/3

The BP oil spill is already the biggest oil spill in US history and is on its way to becoming an unprecedented industrial disaster, given the environmental impact of millions of barrels of oil gushing into the Gulf of Mexico. Even the most hardened of carbon lovers cannot but be moved at the sight of the fragile wildlife in the Gulf literally soaking in oil. The ecosystem of the Gulf states, already ravaged by unrestrained development and the odd super-cyclone, is now being dealt a death blow by the spewing gusher.

Could the specific chain of events leading up to this spill have been predicted? The answer is no. But that doesn't mean the outcome could not have been anticipated. Given the technological complexity that deep-sea oil drilling operations typically involve, there was always a measurable probability that one of the intermeshing systems and processes would give way and result in an oil well that was out of control. As Donald Rumsfeld, Secretary of Defense in the Bush II administration, put it: stuff happens. Where there has been an abject failure of human science and industrial technology is in underestimating the impact of this kind of event on a habitat, and in overestimating the power of technology to fix these kinds of problems.

Fundamentally, the science of estimating the impact of disasters can be broken down into three estimations:
one, the probability that a failure occurs
two, the damage expected as a result of the failure
three (which is probably a function of the second), our capability to fix the failure or mitigate its impact.

In this post, I will discuss the first part of the problem - estimating the probability of failures occurring.

There is a thriving industry, and a branch of mathematics known as disaster science, that works on the estimation of these extremely low probability events. The techniques that disaster scientists or statisticians use are based on an understanding of the specific industry (nuclear reactors, oil drilling, aerospace, rocket launches, etc.) and are constantly refreshed with our increasing understanding of the underlying physics or science of these endeavours. The nuclear-power industry's approach analyzes the engineering of the plant and tabulates every possible series of unfortunate events that could lead to the release of dangerous radioactive material, including equipment failure, operator error and extreme weather. Statisticians estimate the probability of each disastrous scenario and add them together. Other industries, such as aviation, use more data-driven probability models, given the hundreds of thousands of data points available on a weekly basis. Then there are approaches such as tail probability estimation or extreme event estimation, which use the math of heavy-tailed distributions to estimate the probability of such events occurring. Michael Lewis, in his inimitable style, talked about this in an old New York Times article called In Nature's Casino.
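
A minimal sketch of that 'tabulate and add' style of assessment, with entirely hypothetical scenario probabilities:

```python
# Minimal sketch (hypothetical numbers): the "tabulate every failure path and
# add up the probabilities" style of assessment described above.
scenarios = {
    "equipment failure":             1e-4,   # per operating year
    "operator error":                5e-5,
    "extreme weather":               2e-5,
    "combined equipment + operator": 1e-6,
}

# For rare, roughly independent paths, the total is close to the simple sum
p_any_failure = sum(scenarios.values())

print(f"estimated probability of a release per year: {p_any_failure:.2e}")
print(f"roughly one event every {1 / p_any_failure:,.0f} operating years")
```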

One variable that is a factor, and often the contributing factor, in many such disasters is human error. Human error is extraordinarily difficult to model from past behaviour alone, because a number of factors can confound such a read. For instance, as humans encounter fewer failures, our nature is to become less vigilant and therefore more at risk of failing. Both a lack of experience and too much experience (especially without having encountered failures) are risky. The quality of the human agent is another variable with wide variability. At one time, the brightest engineers and scientists from our best universities joined NASA. Now the brightest and the best go to Wall Street or other private firms, and it is often the rejects or the products of second-rung universities that make it to NASA. This variable of human quality is difficult to quantify, or sometimes difficult to measure in a way that does not offend people on grounds like race, national origin, age and gender. Suppose that the brightest and best who joined NASA previously came from colleges or universities whose admission standards required higher scores on standardized tests. We know that standardized test scores are correlated with the socio-economic levels of the test takers and hence with variables such as income, race, etc. So if NASA now recruits from lower-rung colleges, does it mean that it was being more exclusive and discriminatory before (by taking in people with higher average scores) and is more inclusive now? And can we conclude that the drop in quality is a direct function of becoming more inclusive in the admission criteria? It is never easy to answer these questions, or even to tackle them without feeling queasy about what one is likely to find.

Another variable, again related to the human factor, is the way we interact with technology. Is the human agent at ease with the technology confronting him, or does he feel pressured and unsure from a decision-making standpoint? I have driven stick-shift cars before, and I was more comfortable and at ease with the decision making around gear changes when the car-human interface was relatively simple and spartan. In my most recent car, as I interact with multiple technology features - the nav system, the Bluetooth-enabled radio, the steering wheel, the paddle shifter, the engine revs indicator - I find my attention diluted, and the decision making around gear changes is not as precise as it used to be.

Thursday, June 3, 2010

On Knightian Uncertainty

An interesting post appeared recently attempting to distinguish between risk and uncertainty, a distinction originally proposed by the economist Frank Knight. Knight's theory is that risk is something where the outcome is unknown but the odds can be estimated; when the odds become inestimable, risk turns into uncertainty. In other words, risk can be measured and uncertainty cannot.

There are economists who argue that Knight's distinction only applies in theory. In the world of the casino, where the probability of a 21 turning up or of the roulette ball landing on a certain number can be calculated, it is possible to have risk. But anything outside simple games of chance becomes uncertainty, because the odds are so difficult to measure. The real world out there is so complex that it is hard to make even reasonably short-term projections, let alone really long-term ones. So what is really the truth here? Does risk (as defined by Knight) even exist in the world today? Or, as recent world events (be it 9/11, the Great Recession, the threatened collapse of Greece, the oil spill in the Gulf of Mexico, the unpronounceable Icelandic volcano) have revealed, is it a mirage to think we can estimate the probability of something playing out at anywhere close to the odds we initially assume?

I have a couple of reactions. First, my view is that risk can be measured and outcomes predicted more or less accurately under some conditions in the real world. When forces are more or less in equilibrium, it is possible to have some semblance of predictability about political and economic events, and therefore an ability to measure the probability of outcomes. When forces disrupt that equilibrium - and the disruptions may come from the most improbable and unexpected causes - then all bets are off. Everything we learnt during the time when Knightian risk applied is no longer true, and Knightian uncertainty takes over.

Second, this points to the need for the risk management philosophy (as applied in a business context) to consider not only what the system knows and can observe but also the risks that the system doesn't even know exist. That's where good management practices - constantly reviewing positions, eliminating extreme concentrations (even if they appear to be value-creating concentrations), constantly questioning the prevailing thinking - can lead to a set of guardrails that a business can stay within. Now, these guardrails may be frowned upon and may even invite derision from those interested in growing the business during good times, as their nature is always going to be to try and avoid too much of a good thing. However, it is important for the practitioners of risk management to stay firm in their convictions and make sure the appropriate guardrails are implemented.

Tuesday, May 4, 2010

Interesting data mining links

1. The NY Times recently had a piece on how data is increasingly part of our life. Link here.

2. The Web Coupon - a new way for retailers to know more about you. Link here.

3. On Principal Components Analysis. Link here.

Saturday, May 1, 2010

The future of publishing - and a new business model

The demise of an age-old business model and the emergence of a new one to take its place is always an exciting thing to watch - unless you are part of the age-old business model on its way out. Old assumptions are challenged; there are changes in the way consumers consume, the emergence of a technology trigger, new financing patterns, new winners and losers. Fascinating to someone looking in from the outside.

One industry that has pretty much been under attack since the coming of the Internet is the print and publishing business. But what threatened to be the slow roll of a snowball (with the old model gradually replaced by new ways of consuming and disseminating information) has taken the form of a rapidly growing avalanche now that digitized books and the digital book reader (the Kindle, predominantly) have become mainstream. As is to be expected, there are powerful players working to pull the rug from under the feet of the big publishing and media companies. First came Google, wanting to digitize every book ever published. Amazon then came with the Kindle, which cut printing costs out of the value chain and made books much more affordable for end consumers. Of course, the elimination of the printing, warehousing and physical distribution process would mean massive job cuts in the big publishing and printing houses, not to mention a necessary shrinking in the margins retained by the publisher from the price of the book.

An interesting article in the New Yorker talks about the demise of publishing at the hands of the digital giants in more detail. Link here. Amazon, Apple and Google are the big digital players jockeying for position in this market. A few years back, Microsoft would have been a contender as well, but repeated failures to crack the consumer space (where MS does not have a monopolist advantage) have made it a little more circumspect.

Saturday, February 20, 2010

Bank Regulation in the Canadian context - Part 2

To paraphrase my previous post on the subject (link here), the stock prices of Canadian banks outperformed those of large American banks during two separate periods through the late 90s and the 2000s. One was a benign period from 1998 to 2005, and the other was the period from 2002 to 2009 (which culminated in the Great Recession), i.e. a combined good and bad period. Yet all through this time Canadian banks faced tighter regulation than US banks. What worked in the Canadian example?

Per the FT article, there were three factors involved. And extrapolating from these factors, my belief is that it translated to one important difference in the operating philosophy of Canadian banks vis-a-vis US banks, or for that matter, even the ones in the UK and continental Europe.
- The first factor was a simple regulatory framework. The US famously had an alphabet soup of regulatory agencies competing for banks' business. Canada, by contrast, had a very simple set-up: one agency serving as the central bank, responsible for the stability of the overall system; one as the banking supervisor; one agency for consumer protection; and the finance ministry setting the broad rules on ownership of financial institutions and the design of financial products.
- The second factor was a set of really simple and easy-to-follow risk guardrails on individual institutions, with little to no room for flexibility (see the sketch after this list). The first such rule was a requirement that 7% of assets be maintained as Tangible Common Equity, or TCE. Now, 7% is quite a conservative number compared with the 4.5-6% that US regulators have been comfortable with at different points in time. Additionally, the OSFI required that the capital maintained be of the highest quality - shareholder equity. The Canadian regulators require that 75% of TCE be comprised of common shareholder equity, leaving no room for quasi-equity products like preferred shares (which, incidentally, have not turned out to be very useful from a capital standpoint for US institutions). The third requirement was a leverage cap of 20:1. Compare this with US banks, which have consistently maintained higher leverage ratios in an attempt to expand investments and improve returns to stakeholders in an environment supposedly insulated from risk.
- Finally, a third important factor was the nature of the dealings between the Canadian bank regulator and the banks when it came to following rules. The Canadian system was based on principles, rather than on narrowly following specific rules - the spirit rather than the letter of the law. The head of the OSFI regularly met with the bank CEOs and was a frequent attendee at board meetings, especially those attended by the non-executive board members. The bank CEOs, for their part, took an interest in maintaining a stable system and paid serious attention to the regulators' advice.
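
Here is the sketch promised above: the three guardrails expressed as checks against a hypothetical balance sheet (the leverage test is simplified to assets over TCE, which is rougher than the measure the OSFI actually uses):

```python
# Sketch of the three guardrails described above, applied to a hypothetical
# balance sheet. The leverage test here is simplified to assets over TCE.
def check_guardrails(total_assets, tangible_common_equity, common_equity):
    tce_ratio = tangible_common_equity / total_assets
    common_share = common_equity / tangible_common_equity
    leverage = total_assets / tangible_common_equity
    return {
        "TCE ratio >= 7%":             tce_ratio >= 0.07,
        "common equity >= 75% of TCE": common_share >= 0.75,
        "leverage <= 20:1":            leverage <= 20.0,
    }

# A bank levered 25:1, with a lot of preferred stock in its capital, fails all three
print(check_guardrails(total_assets=500e9,
                       tangible_common_equity=20e9,
                       common_equity=13e9))
```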

Now, I am attempting to fill in the blanks beyond this point. My hypothesis on the operating philosophy of Canadian banks is that these simple and non-negotiable guidelines did not leave much room for adventures such as optimizing around the edges or getting into illiquid and structurally untested asset classes (like synthetic ABSs and MBSs). Canadian banks realized that the one safe and reliable way of making money was to focus on consumer and business borrowing needs and meet them with simple lending products. The returns from a plain-vanilla banking business, centered on taking deposits and lending them directly to consumers and businesses, were secure and good enough to generate a healthy return on capital for these banks - which then got captured in a healthy stock price. The creativity and management talent of the bankers went towards meeting customer needs, as opposed to getting into ever more arcane areas of structured finance.

What does all this mean for risk management and its application? There is a myth out there that tighter regulation tends to dampen shareholder returns - the high-impact downside from tail events is prevented, but at the cost of profits during more normal times. However, that doesn't seem to have been the case judging by the performance of Canadian banks. Canadian banks were more tightly regulated than US banks, i.e. risk management was tighter, but they clearly did not suffer as a result. Rather, a principles-based risk management practice resulted in greater co-operation between the banks and the regulators, allowed the banks to focus on the long-term drivers of value in banking, and ultimately delivered better returns to shareholders.

Tuesday, February 9, 2010

Bank Regulation in the Canadian context - Part 1

The fallout of the 2008-09 Great Recession - failed banks, lost jobs, shuttered plants, bankrupt companies - is by now old news to all. What started off as a repayment crisis had an amplified impact on the overall economy, driven by big banks' reckless risk-taking, over-leveraging and ultimately pursuing a path that suggests they believed they were too big to fail. Which turned out to be the case: witness the bailout of AIG, the arranged marriage for Bear, the government takeovers of Fannie and Freddie and so on.

The contagion has not been limited to US banks and institutions by any means. European banks (UBS, Deutsche and Societe Generale), British banks, Irish and Icelandic banks - all showed similar behaviours and a similar disdain for any consideration of their long-term health, believing themselves to be too big to fail. One glorious exception in all of this has been the large Canadian banks. Compared to some of their US and European rivals, these banks have been the very paragon of well-managed, well-run financial institutions and hardly suffered a blip to their profitability or needed any government largesse to survive the Great Recession. In fact, Canada is the only G7 country to have come through the financial crisis without a state bail-out for its financial sector.

(The top 5 Canadian banks are Royal Bank of Canada, Scotiabank, Toronto-Dominion Bank, Bank of Montreal and the Canadian Imperial Bank of Commerce. Besides cornering nearly 90% of the Canadian market, these banks are in reality large international banks with operations in 40-50 countries, and stock listings on multiple exchanges. A quick primer on Canadian banks is here.)

What caused the Canadian banks to survive? An immediate reaction (which incidentally would be wrong) is that Canadians are somehow too nice to participate in the kind of no-holds-barred plundering practiced by the American banks - that they play a soft form of capitalism, one that protects the downside but also somehow limits the upside. Hmmmm, not entirely true. The net shareholder returns of Canadian banks have exceeded those of UK and US banks over the last 5 years, as evidenced in the graph below.


What about returns over a longer time period? How do the top Canadian banks compare to the top US banks in terms of stock price performance?

Looking at a 7 1/2-year period from mid-2002, the total return on a basket of large Canadian banks (the ones mentioned above) was 144%. In the same period, large US banks (Citi, Chase, BofA, Wells, Goldman, Morgan Stanley) had a return of a paltry 2%. OK, the US banks' returns were decimated by the recent credit crisis; maybe the market over-reacted. But if you look at returns over the period from Jan 1998 to Dec 2005, when we were enjoying a so-called 'Goldilocks' economy, the story isn't too different. The US bank return rises to a more respectable 69%, but the performance of Canadian bank stocks improves even more, to 183%. (The sketch after the tables below reproduces these numbers from the raw share prices.)

Table of stock price performance for top Canadian and US banks
(Boom and Bust period, June 2002 to Feb 2010)

Canadian banks    June 2002    Feb 2010
RBC                   16.06       50.44
TD                    23.19       59.56
CIBC                  32.85       59.135
BofM                  21.86       48.86
Scotia                17.41       42.87

US banks          June 2002    Feb 2010
Chase                 22.25       38.39
Wells                 25.20       26.71
BofA                  35.18       14.47
Citi                  28.68        3.18
Goldman               73.35      152.49
MS                    35.62       27.13

Table of stock price performance for top Canadian and US banks
(Boom period only, Jan 1998 to Dec 2005)

Canadian banks     Jan 1998    Dec 2005
RBC                   12.15       39.27
TD                    17.49       52.55
CIBC                  24.02       65.80
BofM                  20.22       55.94
Scotia                16.60       39.93

US banks           Jan 1998    Dec 2005
Chase                 51.29       48.30
Wells                 18.22       35.56
BofA                  29.94       46.15
Citi                  24.78       48.53
Goldman               73.72      133.26
MS                    29.19       56.74

(Goldman Sachs and Bank of Montreal did not have full information over these periods. But having them in the numbers - or taking them out - doesn't change the story.)
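
For what it's worth, the 144%, 2%, 183% and 69% figures can be reproduced from the tables above by taking an equal-weighted average of each bank's price-only return between the two dates (dividends ignored):

```python
# Equal-weighted average of price-only returns for each basket, using the
# share prices in the tables above.
def basket_return(prices):
    returns = [(end - start) / start for start, end in prices.values()]
    return sum(returns) / len(returns)

# Boom-and-bust period: June 2002 to Feb 2010
canadian_0210 = {"RBC": (16.06, 50.44), "TD": (23.19, 59.56), "CIBC": (32.85, 59.135),
                 "BofM": (21.86, 48.86), "Scotia": (17.41, 42.87)}
us_0210 = {"Chase": (22.25, 38.39), "Wells": (25.20, 26.71), "BofA": (35.18, 14.47),
           "Citi": (28.68, 3.18), "Goldman": (73.35, 152.49), "MS": (35.62, 27.13)}

# Boom period only: Jan 1998 to Dec 2005
canadian_9805 = {"RBC": (12.15, 39.27), "TD": (17.49, 52.55), "CIBC": (24.02, 65.80),
                 "BofM": (20.22, 55.94), "Scotia": (16.60, 39.93)}
us_9805 = {"Chase": (51.29, 48.30), "Wells": (18.22, 35.56), "BofA": (29.94, 46.15),
           "Citi": (24.78, 48.53), "Goldman": (73.72, 133.26), "MS": (29.19, 56.74)}

print(f"Canada, Jun 2002 - Feb 2010: {basket_return(canadian_0210):.0%}")  # ~144%
print(f"US,     Jun 2002 - Feb 2010: {basket_return(us_0210):.0%}")        # ~2%
print(f"Canada, Jan 1998 - Dec 2005: {basket_return(canadian_9805):.0%}")  # ~183%
print(f"US,     Jan 1998 - Dec 2005: {basket_return(us_9805):.0%}")        # ~69%
```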

So what can explain the better performance of Canadian banks? What allows them not only to perform better through the cycle but also to do so with minimal government handouts? The answer is superior risk management, and that will form the next part of this series.

Chrystia Freeland of FT.com has a fascinating article on the subject and the link is here.
