
Tuesday, July 13, 2010

Disaster estimations - Part 1b/3 Understanding the probability of disaster

Part 1 of my post on modeling catastrophic risk covered measuring the probability that a risk event can occur. This probability can be derived from empirical evidence as well as from computer models of the destructive forces of nature. A good example of how such a model is built and used comes from a paper by Karen Clark, a renowned catastrophic risk modeler and insurer. The paper was a seminal one when it came out, as it outlined a scientific method by which such risks could be estimated. It is titled "A formal approach to catastrophe risk assessment and management" and the link is here.

The paper outlines an approach to estimating losses from hurricanes impacting the US Gulf Coast and the East Coast. The model assigns probabilities to hurricanes making landfall, developed using historical information (going back to about 1910) from the US Weather Service. While this is a great starting point and yields at least a range of losses one can expect, and therefore the insurance premiums one should charge, there are important places where the model can be improved. One example is the cyclical nature of hurricane intensity over the last 100 years. Between 1950 and 1994, Atlantic hurricanes ran through a benign cycle. Hurricane activity and intensity (as measured by the number of named storms and the number of major hurricanes, respectively) have shown an increase since 1994, though. So a model relying on activity from the 1950-1994 period is likely to be off in its loss estimates by more than 20%. See the table for what I am talking about.
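To make the empirical approach concrete, here is a minimal sketch (with simulated placeholder counts, not the actual US Weather Service record) showing how a landfall rate calibrated only to the benign 1950-1994 cycle differs from one calibrated to the full record:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1910, 2010)
# Simulated landfall counts per year: a benign 1950-1994 cycle, a more
# active regime otherwise. Purely illustrative, not real data.
counts = rng.poisson(lam=np.where((years >= 1950) & (years <= 1994), 0.5, 0.8))

def annual_landfall_rate(start, end):
    """Empirical annual landfall rate over the chosen calibration window."""
    mask = (years >= start) & (years <= end)
    return counts[mask].mean()

print("Rate fitted on 1950-1994:", round(annual_landfall_rate(1950, 1994), 2))
print("Rate fitted on 1910-2009:", round(annual_landfall_rate(1910, 2009), 2))
# A model calibrated only to the benign cycle understates expected activity,
# and hence losses, for a more active period like the one after 1994.
```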

How can a modeler correct for such errors in estimates? One way is to use the latest in scientific understanding and modeling when estimating the probabilities. Advances in the scientific understanding of phenomena such as hurricanes mean that it is now possible to build computer models that replicate the physics behind the storms. These dynamic physical models incorporate some of the more recent understanding of world climatology, such as the link between sea surface temperatures (SSTs) and hurricane intensity. Using some of these models, researchers have been able to replicate the increase in hurricane intensity seen in the last fifteen years in a way that the empirical models built before this period have not. The popular science book about global warming, Storm World by Chris Mooney, spells out these two different approaches to hurricane intensity estimation and the conflicts between the chief protagonists of each approach. Based on recent evidence at least, the more physics-based approach certainly appears to be tracking closer to the rapid changes in hurricane intensity. William Gray of Colorado State University, whose annual hurricane forecast had been lucky for many years, has been forced to re-fit his empirical model to the rapid increase in hurricane intensity post-1995.

Finally, I leave you with another note about how some of the dynamic physical models work. This is from one of my favourite blogs, Jeff Masters' tropical weather blog. The latest entry talks precisely about such a dynamic physical model built by the UK Met Office. And I quote:

it is based on a promising new method--running a dynamical computer model of the global atmosphere-ocean system. The CSU forecast from Phil Klotzbach is based on statistical patterns of hurricane activity observed from past years. These statistical techniques do not work very well when the atmosphere behaves in ways it has not behaved in the past. The UK Met Office forecast avoids this problem by using a global computer forecast model--the GloSea model (short for GLObal SEAsonal model). GloSea is based on the HadGEM3 model--one of the leading climate models used to formulate the influential UN Intergovernmental Panel on Climate Change (IPCC) report. GloSea subdivides the atmosphere into a 3-dimensional grid 0.86° in longitude, 0.56° in latitude (about 62 km), and up to 85 levels in the vertical. This atmospheric model is coupled to an ocean model of even higher resolution. The initial state of the atmosphere and ocean as of June 1, 2010 were fed into the model, and the mathematical equations governing the motions of the atmosphere and ocean were solved at each grid point every few minutes, progressing out in time until the end of November (yes, this takes a colossal amount of computer power!) It's well-known that slight errors in specifying the initial state of the atmosphere can cause large errors in the forecast. This "sensitivity to initial conditions" is taken into account by making many model runs, each with a slight variation in the starting conditions which reflect the uncertainty in the initial state. This generates an "ensemble" of forecasts and the final forecast is created by analyzing all the member forecasts of this ensemble. Forty-two ensemble members were generated for this year's UK Met Office forecast. The researchers counted how many tropical storms formed during the six months the model ran to arrive at their forecast of twenty named storms for the remainder of this hurricane season. Of course, the exact timing and location of these twenty storms are bound to differ from what the model predicts, since one cannot make accurate forecasts of this nature so far in advance.

The grid used by GloSea is fine enough to see hurricanes form, but is too coarse to properly handle important features of these storms. This lack of resolution results in the model not generating the right number of storms. This discrepancy is corrected by looking back at time for the years 1989-2002, and coming up with correction factors (i.e., "fudge" factors) that give a reasonable forecast.
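The ensemble-plus-correction recipe in the quote above can be caricatured in a few lines of toy code. The "seasonal model" below is just a stand-in random process, not GloSea, and every number in it is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_seasonal_run(initial_sst_anomaly):
    """Pretend storm count for one season, given an initial SST anomaly (degC)."""
    base_rate = 11 + 4 * initial_sst_anomaly   # warmer oceans -> more storms
    return rng.poisson(max(base_rate, 0))

n_members = 42                                 # as in the Met Office forecast
# Perturb the initial conditions slightly for each ensemble member.
perturbations = rng.normal(loc=0.5, scale=0.15, size=n_members)
raw_counts = np.array([toy_seasonal_run(p) for p in perturbations])

bias_correction = 1.4   # hindcast-derived "fudge" factor, as described above
forecast = bias_correction * raw_counts.mean()
print(f"Ensemble mean: {raw_counts.mean():.1f} storms; corrected forecast: {forecast:.0f}")
```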

If you go to the web page of the UK Met Office hurricane forecast, you can find a link of interest to reinsurance companies. The link is for purchasing the hurricane forecast, which the UK Met Office has obviously gone to great pains to develop. Their brochure on how the insurance industry could benefit from this research makes for very interesting reading as well.

Tuesday, June 15, 2010

The BP oil spill and the disaster estimations - Part 1/3

The BP oil spill is already the biggest oil spill in US history and is on its way to becoming an unprecedented industrial disaster, given the environmental impact of millions of barrels of oil gushing into the Gulf of Mexico. Even the most hardened of carbon lovers cannot but be moved at the sight of the fragile wildlife in the Gulf literally soaking in the oil. The ecosystems of the Gulf states, already ravaged by unrestrained development and the odd super-cyclone, are now being dealt a death blow by the spewing gusher.

Could the specific chain of events leading up to this spill have been predicted? The answer is no. But that doesn't mean the outcome could not have been anticipated. Given the technological complexity that deep-sea oil drilling operations typically involve, there was always a measurable probability that one of the intermeshing systems and processes would give way and result in an oil well that was out of control. As Donald Rumsfeld, Secretary of Defense in the Bush II administration, put it, stuff happens. But where human science and industrial technology have failed abjectly is in underestimating the impact of this kind of event on a habitat and overestimating the power of technology to fix these kinds of problems.

Fundamentally, the science of estimating the impact of disasters can be broken down into three estimations (a small numerical sketch follows this list):
- one, the probability that a failure occurs
- two, the damage expected as a result of the failure
- three (probably a function of the second), our capability to fix the failure or mitigate its impact
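To make the decomposition concrete, here is a minimal sketch for a single hypothetical failure scenario, with all numbers invented for illustration:

```python
# One hypothetical failure scenario; the three estimations combine into an
# expected annual loss. All figures below are invented for illustration.
p_failure = 0.001          # (1) probability that the failure occurs in a year
damage_if_failure = 5e9    # (2) expected damage if it does, in dollars
mitigation_fraction = 0.3  # (3) share of the damage we can realistically avert or fix

expected_annual_loss = p_failure * damage_if_failure * (1 - mitigation_fraction)
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")   # $3,500,000
```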

In this post, I will discuss the first part of the problem - estimating the probability of failures occurring.

There is a thriving industry, and a branch of mathematics, devoted to estimating these extremely low-probability events, known as disaster science. The techniques that disaster scientists or statisticians use are grounded in an understanding of the specific industry (nuclear reactors, oil drilling, aerospace, rocket launches, etc.) and are constantly refreshed by our increasing understanding of the physics or science underlying these endeavours. The nuclear-power industry's approach analyzes the engineering of the plant and tabulates every possible series of unfortunate events that could lead to the release of dangerous radioactive material, including equipment failure, operator error and extreme weather. Statisticians estimate the probability of each disastrous scenario and add them together. Other industries, such as aviation, use more probability-based models, given the hundreds of thousands of data points available on a weekly basis. Then there are approaches such as tail probability estimation or extreme event estimation, which use the mathematics of heavy-tailed distributions to estimate the probability of such events occurring. Michael Lewis, in his inimitable style, talked about this in an old New York Times article called In Nature's Casino.
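As an illustration of the heavy-tailed, extreme-event flavour of estimation, here is a minimal peaks-over-threshold sketch using simulated losses and a generalized Pareto fit from scipy; the data and the threshold choice are purely illustrative:

```python
import numpy as np
from scipy import stats

# Simulated loss history (in $ millions); purely illustrative numbers.
rng = np.random.default_rng(0)
losses = rng.pareto(a=2.5, size=5000) * 10

# Peaks-over-threshold: fit a generalized Pareto distribution to the
# exceedances above a high threshold, then estimate a far-tail probability.
threshold = np.quantile(losses, 0.95)
exceedances = losses[losses > threshold] - threshold
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

# P(loss > x) = P(loss > threshold) * P(exceedance > x - threshold)
x = 200.0
p_exceed_threshold = (losses > threshold).mean()
tail_prob = p_exceed_threshold * stats.genpareto.sf(x - threshold, shape, loc=loc, scale=scale)
print(f"Estimated P(loss > {x:.0f}) = {tail_prob:.6f}")
```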

One variable that is a factor, and often the contributing factor, in many such disasters is human error. Human error is extraordinarily difficult to model from past behaviour alone, because a number of factors can confound such a read. For instance, as humans encounter fewer failures, our nature is to become less vigilant and therefore more likely to fail. Both lack of experience and too much experience (especially without having encountered failures) are risky. The quality of the human agent is another variable with wide variability. At one time, NASA attracted the brightest engineers and scientists from our best universities. Now the brightest and the best go to Wall Street or other private firms, and it is often the rejects or the products of second-rung universities that make it to NASA. This variable of human quality is difficult to quantify, or sometimes difficult to measure in a way that does not offend people on grounds like race, national origin, age and gender. Let us suppose that the brightest and best who joined NASA previously came from colleges or universities whose admission standards required higher scores on standardized tests. We know that standardized test scores are correlated with the socio-economic levels of the test takers, and hence with variables such as income, race, etc. So if NASA now draws from lower-rung colleges, does that mean it was more exclusive and discriminatory before (by taking in people with higher average scores) and is more inclusive now? And can we conclude that any drop in quality is a direct function of becoming more inclusive on the admissions front? It is never easy to answer these questions, or even to tackle them without feeling queasy about what one is likely to find along the way.

Another variable, again related to the human factor, is the way we interact with technology. Is the human agent at ease with the technology confronting him, or does he feel pressured and unsure from a decision-making standpoint? I have driven stick-shift cars before, and I was more comfortable and at ease with the decision making around gear changes when the car-human interface was relatively simple and spartan. In my most recent car, as I interact with multiple technology features such as the nav system, the Bluetooth-enabled radio, the steering wheel, the paddle shifter and the engine-rev indicator, I find my attention diluted, and the decision making around gear changes is not as precise as it used to be.

Thursday, June 3, 2010

On Knightian Uncertainty

An interesting post appeared recently attempting to distinguish between risk and uncertainty. The distinction was originally proposed by the economist Frank Knight. Knight's theory is that risk is something where the outcome is unknown but whose odds can be estimated. When the odds become inestimable, risk turns into uncertainty. In other words, risk can be measured and uncertainty cannot.

There are economists who argue that Knight's distinction only applies in theory. In the world of the casino, where the probability of a 21 turning up or of the roulette ball landing on a certain number can be estimated, it is possible to have risk. But anything outside simple games of chance becomes uncertainty, because it is so difficult to measure. The real world is so complex that it is hard to make even reasonably short-term projections, let alone really long-term ones. So what is really the truth here? Does risk (as defined by Knight) even exist in the world today? Or, as recent world events (9/11, the Great Recession, the threatened collapse of Greece, the oil spill in the Gulf of Mexico, the unpronounceable Icelandic volcano) have revealed, is it a mirage to try to estimate the probability of something playing out at anything close to the odds we initially assumed?

I have a couple of reactions. First, my view is that risk can be measured and outcomes predicted more or less accurately under some conditions in the real world. When forces are more or less in equilibrium, it is possible to have some semblance of predictability about political and economic events, and therefore an ability to measure the probability of outcomes. When forces disrupt that equilibrium (and the disruptions may come from the most improbable and unexpected causes), all bets are off. Everything we learnt in the period when Knightian risk applied is no longer true, and Knightian uncertainty takes over.

Second, this points to the need for risk management philosophy (as applied in a business context) to consider not only the risks the system knows and can observe, but also the risks the system doesn't even know exist. That's where good management practices, such as constantly reviewing positions, eliminating extreme concentrations (even if they appear to be value-creating), and constantly questioning one's own thinking, can lead to a set of guardrails that a business can stay within. These guardrails may be frowned upon and may even invite derision from those interested in growing the business during good times, since their nature is always going to be to avoid too much of a good thing. However, it is important for practitioners of risk management to stand firm in their convictions and make sure the appropriate guardrails are implemented.

Saturday, February 20, 2010

Bank Regulation in the Canadian context - Part 2

To paraphrase from my previous post on the subject (link here), the stock prices of Canadian banks outperformed those of large American banks during two separate periods through the late 90s and the 2000s: a benign period from 1998 to 2005, and the period from 2002 to 2009 (which culminated in the Great Recession), i.e. a combined good and bad period. Yet Canadian banks faced tighter regulation than US banks all through this period. What worked in the Canadian example?

Per the FT article, there were three factors involved. And extrapolating from these factors, my belief is that they translated into one important difference in the operating philosophy of Canadian banks vis-a-vis US banks, or for that matter the ones in the UK and continental Europe.
- The first factor was a simple regulatory framework. The US famously had an alphabet soup of regulatory agencies competing for banks' business. Canada, by contrast, had a very simple set-up: one agency serving as the central bank, responsible for the stability of the overall system; one as banking supervisor (the OSFI); one agency for consumer protection; and the finance ministry, which set the broad rules on ownership of financial institutions and the design of financial products.
- The second factor was a set of really simple, easy-to-follow risk guardrails on individual institutions, with little to no room for flexibility (a toy numerical check of these guardrails appears after this list). The first such rule was a requirement that 7% of assets be maintained as Tangible Common Equity, or TCE. Now, 7% is quite a conservative number when compared with the 4.5-6% that US regulators have been comfortable with at different points in time. Additionally, the OSFI required that the capital maintained be of the highest quality: shareholder equity. The Canadian regulators require that 75% of TCE be comprised of common shareholder equity, leaving no room for quasi-equity products like preferred shares (which, incidentally, have not turned out to be very useful from a capital standpoint for US institutions). Finally, the third requirement was a leverage cap of 20:1. Compare this with US banks, which consistently maintained higher leverage ratios in an attempt to expand investments and improve returns to stakeholders in an environment supposedly insulated from risk.
- Finally, the third important factor was the way the Canadian bank regulator and the banks dealt with each other when it came to following rules. The Canadian system was based on principles rather than on narrowly following specific rules; it is about the spirit rather than the letter of the law. The head of the OSFI regularly met with the bank CEOs and was a frequent attendee at board meetings, especially those attended by the non-executive board members. The bank CEOs, for their part, took an interest in maintaining a stable system and paid serious attention to the regulators' advice.
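Here is the toy numerical check promised above. The thresholds are the ones cited in the post (7% TCE to assets, 75% of TCE as common equity, a 20:1 leverage cap); the bank figures are hypothetical:

```python
def check_guardrails(tangible_common_equity, common_equity, total_assets):
    """Toy check of the Canadian-style guardrails cited above.
    All inputs must be in the same units (say, $bn)."""
    return {
        "tce_ratio_ok": tangible_common_equity / total_assets >= 0.07,
        "equity_quality_ok": common_equity / tangible_common_equity >= 0.75,
        "leverage_ok": total_assets / tangible_common_equity <= 20.0,
    }

# Hypothetical bank: $700bn assets, $52bn TCE of which $45bn is common equity.
print(check_guardrails(tangible_common_equity=52, common_equity=45, total_assets=700))
```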

Now, I am attempting to fill in the blanks beyond this point. My hypothesis on the operating philosophy of Canadian banks is that these simple and non-negotiable guidelines did not leave much room for adventures such as optimizing around the edges or getting into illiquid and structurally untested asset classes (like synthetic ABSs and MBSs). Canadian banks realized that the one safe and reliable way of making money was to focus on consumer and business borrowing needs and meet them with simple lending products. The returns from a plain-vanilla banking business centered on taking deposits and lending them directly to consumers and businesses were secure and good enough to generate a healthy return on capital for these banks, which then got captured in a healthy stock price. The creativity and the management talent of the bankers went towards meeting customer needs, as against getting into ever more arcane areas of structured finance.

What does this all mean for risk management and its application? There is a myth out there that tighter regulation tends to dampen shareholder returns: the high-impact downside from tail events is prevented, but at the cost of profits during more normal times. That doesn't seem to have been the case for Canadian banks. They were more tightly regulated than US banks, i.e. risk management was tighter, but they clearly did not suffer as a result. Rather, a principles-based risk management practice resulted in greater co-operation between banks and regulators, allowed the banks to focus on the long-term drivers of value in banking, and ultimately delivered better returns to shareholders.

Tuesday, February 9, 2010

Bank Regulation in the Canadian context - Part 1

The fallout of the 2008-09 Great Recession, in terms of failed banks, lost jobs, shuttered plants and bankrupt companies, is familiar to all by now. What started off as a repayment crisis had an amplified impact on the overall economy, driven by reckless risk-taking by big banks, over-leveraging and ultimately pursuing a path that suggests they believed they were too big to fail. Which turned out to be the case: witness the bailout of AIG, the arranged marriage for Bear, the government takeovers of Fannie and Freddie, and so on.

The contagion has not been limited to US banks and institutions by any means. European banks (UBS, Deutsche and Societe Generale), British banks, Irish and Icelandic banks all showed similar behaviours and a similar disdain for any consideration of their long-term health, believing themselves to be too big to fail. One glorious exception in all of this has been the large Canadian banks. Compared with some of their US and European rivals, these banks have been the very paragon of well-managed and well-run financial institutions, hardly suffering a blip to their profitability or needing any government largesse to survive the Great Recession. In fact, Canada is the only G7 country to have come through the financial crisis without a state bail-out for its financial sector.

(The top 5 Canadian banks are Royal Bank of Canada, Scotiabank, Toronto-Dominion Bank, Bank of Montreal and the Canadian Imperial Bank of Commerce. Besides cornering nearly 90% of the Canadian market, these banks are in reality large international banks with operations in 40-50 countries, and stock listings on multiple exchanges. A quick primer on Canadian banks is here.)

What caused the Canadian banks to survive? An immediate reaction (which incidentally would be wrong) is that Canadians are somehow too nice to participate in the kind of no-holds-barred plundering practiced by the American banks; that they play a soft form of capitalism, one that protects the downside but also somehow limits the upside. Hmmmm, not entirely true. The net shareholder returns of Canadian banks have exceeded those of UK and US banks in the last 5 years, as evidenced in the graph below.


What about returns over a longer time period? How do the top Canadian banks compare to the top US banks in terms of stock price performance?

Looking at a 7 1/2 year period from mid-2002, the total return on a basket of large Canadian banks (the ones mentioned above) was 144%. In the same period, US large banks (Citi, Chase, BofA, Wells, Goldman, Morgan Stanley) returned a paltry 2%. OK, US bank returns were decimated by the recent credit crisis; maybe the market over-reacted. But if you look at returns from Jan 1998 to Dec 2005, when we were having a so-called 'Goldilocks' economy, the story isn't too different. US bank stock returns rise to a more respectable 69%, but the performance of Canadian bank stocks improves even more, to 183%.

Table of stock price performance for top Canadian banks, followed by US banks
(Boom and Bust Period)

Canadian banks    June 2002    Feb 2010
RBC               16.06        50.44
TD                23.19        59.56
CIBC              32.85        59.135
BofM              21.86        48.86
Scotia            17.41        42.87

US banks          June 2002    Feb 2010
Chase             22.25        38.39
Wells             25.20        26.71
BofA              35.18        14.47
Citi              28.68        3.18
Goldman           73.35        152.49
MS                35.62        27.13

Table of stock price performance for top Canadian banks, followed by US banks
(Boom Period only)

Canadian banks    Jan 1998    Dec 2005
RBC               12.15       39.27
TD                17.49       52.55
CIBC              24.02       65.80
BofM              20.22       55.94
Scotia            16.60       39.93

US banks          Jan 1998    Dec 2005
Chase             51.29       48.30
Wells             18.22       35.56
BofA              29.94       46.15
Citi              24.78       48.53
Goldman           73.72       133.26
MS                29.19       56.74

(Goldman Sachs and Bank of Montreal did not have full information over these periods. But having them in the numbers - or taking them out - doesn't change the story.)
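The basket figures quoted above look like simple equal-weighted averages of the individual price returns over each window (dividends excluded). A quick check against the boom-and-bust table:

```python
# Start and end prices from the boom-and-bust table above.
canadian = {"RBC": (16.06, 50.44), "TD": (23.19, 59.56), "CIBC": (32.85, 59.135),
            "BofM": (21.86, 48.86), "Scotia": (17.41, 42.87)}
us = {"Chase": (22.25, 38.39), "Wells": (25.20, 26.71), "BofA": (35.18, 14.47),
      "Citi": (28.68, 3.18), "Goldman": (73.35, 152.49), "MS": (35.62, 27.13)}

def basket_return(prices):
    """Equal-weighted average of simple price returns (end/start - 1)."""
    returns = [end / start - 1 for start, end in prices.values()]
    return sum(returns) / len(returns)

print(f"Canadian basket, mid-2002 to Feb 2010: {basket_return(canadian):.0%}")  # ~144%
print(f"US basket, mid-2002 to Feb 2010:       {basket_return(us):.0%}")        # ~2%
```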

So what can explain the better performance of Canadian banks? What allows them not only to perform better through the cycle but also to do so with minimal government handouts? The answer is superior risk management, and that will be the subject of the next post.

Chrystia Freeland of FT.com has a fascinating article on the subject and the link is here.

Tuesday, November 24, 2009

Too Big To Fail or Too Scared To Confront?

Back to the blog after a long break. I do need to find a way to update the blog more regularly and keep expressing my thoughts and ideas.

What spurred this latest post was a decent article I read on the Too-Big-To-Fail (TBTF) doctrine. Of course, one is talking about banks. The article goes into detail about the cost of propping up the banks, and some of the estimates are truly mind-boggling.

According to the Bank of England, governments and central banks in the US, Britain, and Europe have spent or committed more than $14 trillion—the equivalent to roughly 25 per cent of the world’s economic output—to prop up financial institutions. Combined with a global recession, this bailout has undermined the public finances of the developed world.

Another related set of articles is here, from the Free Exchange blog of The Economist. Raghuram Rajan of the University of Chicago's business school has contributed some good ideas, and a robust discussion of the pros and cons of various options has been presented. As always, there is little effort on the part of the various contributors to synthesize a viewpoint; rather, the tendency is to point out why a specific solution presented may not work.

To a large extent, I think these ideas miss an important point. They consistently treat financial institutions as rational entities that operate largely on the principles of rational economics, capital theory and other textbook ideas. The fact of the matter is that management matters. And management is a function of the human beings who take important decisions within these organizations, their incentive structures, and, more broadly, the set of values and identity that motivates these human players.

And unless we all as stakeholders begin to take notice that the destiny of corporations is driven by the individuals managing them, and that the laws of economics are (unlike the laws of nature) created by their human players, we will continue to argue around the margins, window-dressing the regulatory system and making no significant progress towards a more stable financial system. A stable system by definition is going to provide fewer opportunities to pursue supernormal profits. A stable system requires the suppression of that oldest of human sins, greed. Do we have the courage to confront ourselves and seriously consider a slew of workable solutions to fix a broken system? Only time will tell, but I am not holding my breath.

Sunday, September 27, 2009

The Fed's failure in end-to-end risk management

Another one in a series of risk management write-ups. (I guess this is becoming more and more common, as this is my full-time job right now.) I came across a recent article in the Washington Post about the malpractice in lending by subprime affiliates of large banks and the reluctance of the Federal Reserve to play an effective regulatory role. The article is here.

The article talks about how the Fed gradually withdrew from its regulatory responsibility over consumer finance companies because they were not "banks". The Fed reduced its oversight of these companies because it believed it did not have the jurisdiction to regulate them. This was despite a considerable amount of evidence from individuals and other watchdog bodies reporting egregious practices by these institutions. Another big factor at play at the time was the good old "markets self-regulate" belief (I was going to say theory and corrected myself; maybe I should say myth), but I am not going to spend too much time on that in this post.

Why did the Fed turn its head away from the problem? One of my hypotheses is too much reliance on "literalism". The Fed chose to interpret its mandate of regulating banks literally and decided to look no further, even though there were other institutions whose practices were exactly the same as any bank's. Literalism is a particular problem I have observed in the US: the strong objection to interpreting a piece of policy or law developed years ago in line with the world of today. The problem is most commonly seen with respect to the US Constitution and its various amendments. But literalism is a problem when it creates blind spots in end-to-end risk management and ends up threatening the viability of the corporation or, as in this case, the entire financial system. An effective risk manager is expected to be proactive in identifying gaps in end-to-end risk management and to be open to taking on more responsibility and proposing changes to the system as needed.

The other problem was that the Fed tended to be influenced more by grand economic theories and conceptual or philosophical frameworks and chose to discount the data coming up from the ground. According to the article, the Fed tended to discount these pieces of anecdotal evidence because their place within a broader framework, and their systemic impact, was not well understood. This is a common problem with smart people. The thinking goes: I think and talk in concepts, abstractions and theories, therefore I will only listen when other people talk the same way. This is a problem that afflicts many of us, and might even be borderline acceptable in everyday life. But it is fatal in risk management, where your job is to anticipate the different ways in which the system can be at risk. An effective risk manager is expected to constantly keep her radar up for pieces of information that might be contrary to a pre-existing framework and to have an efficient means of investigating whether the anecdotal evidence points to any material threat.

Finally, one important lesson worth taking away is that when it comes to human-created systems, there is no one overarching framework or "truth". Because interactions between humans, and the institutions created by humans, are not governed by the laws of physics, there are often no absolutes. Many theories or frameworks could be simultaneously true, or may each apply in portions of the world we are trying to understand. Depending on the prevailing conditions, one set of rules may hold; as conditions change, or as the previous framework pushes the environment to one extreme, a competing framework often becomes more relevant and appropriate to apply. It is practical to keep one's mind open to other theories and frameworks. Ideological alignment or obsession with one "truth" system only closes one off to other explanations and possibilities.

Wednesday, September 16, 2009

A case study in risk management

The credit crisis of 2008, or the Great Recession as it is now known, has had many, many books written about it. Writers from across the ideological spectrum have written about why the crisis occurred and how their brand of ideology could have prevented it. Which is why I was skeptical when I came across this piece, which seemed to rehash the story of the collapse of Lehman. I was pleasantly surprised that the article was about one element that has been whispered about off and on, but not very convincingly: risk management based on common sense. (The reader needs to get past the title and the opening blurb, though. The title seems to suggest the credit crisis would never have taken place if Goldman Sachs hadn't spotted the game early enough. That is plain ridiculous: the leveraging of the economy plus the decline in lending standards created a ticking time-bomb. But I digress.)

The article is not about having fancy risk management metrics, or why our models are wrong, or why we should not trust a Ph.D. who offers to build a model for you. (Of course, all of these elements contributed to why the crisis was ignored for all these years.) Instead, the article recounts a real-life meeting that took place at Goldman Sachs at the end of 2006. The meeting was convened by Goldman's CFO, David Viniar, based on some seemingly innocuous happenings: the company had been losing money on its mortgage-backed securities for 10 days in a row. The resulting deep-dive into the details of the trades pointed to a sense of unease about the mortgage market, which then caused Goldman to famously back off from the market.

I'll leave the reading to you to get more details of what happened. But here are some thoughts on what contributes to effective risk management practices.

- A real-life feel for the business. You can't be just into the models; you need to be savvy enough to understand how the models you build interact with the real world outside. And it is an appreciation of this interaction that causes the hairs on the back of your neck to stand up when you encounter something that just doesn't seem right.
- Proper placement of risk management in the decision-making hierarchy. Effective risk management takes place when risk governance has the authority to put the brakes on risk takers (i.e., the traders, in this case). At Goldman, there were a number of enablers for this type of interaction to take place effectively. Most importantly, risk management reported to the CFO, i.e. high enough in the corporate hierarchy. Second, investment decisions needed a go-ahead from both the risk takers and risk governance.
- Mutual respect between risk governance and risk takers. Goldman encourages a collaborative style of decision making. This allows multiple conflicting opinions to be present at the table; minority opinions are encouraged and appreciated. Over time, this fosters a culture that genuinely tolerates dissonance of opinions. It also allows the CFO to be influenced by the comptroller group as much as he typically would be by the trading group.
- Finally, a certain intellectual probity: the willingness to acknowledge what you do not know or understand. During the meeting, the Goldman team was not able to pinpoint the source of their unease. But they were able to honestly admit that they didn't really understand what was going on, and that it was therefore most appropriate to bring the ship to harbour, given their blind spot about what they didn't know. It takes courage to back off from an investing decision, saying "I don't understand this well enough", in the alpha-male investment banking culture.

All in all, a really interesting read.

Tuesday, September 1, 2009

Knowledge-worker roles in the 21st century - 2/2

I am now going to talk about the second kind of job that is going to become increasingly attractive for knowledge workers. For the first type, I talked about the advances in computing and communication capabilities and technology that make it extremely attractive for jobs hitherto performed by humans to be transferred to machines. Does this mean that we are all headed into the world depicted in the Matrix or the Terminator movies?

I think not. As these jobs get outsourced, I anticipate a blowback in which society discovers that there are certain types of jobs that cannot be handled by computers at all. These are tasks where highly interrelated decisions need to be made, and where the decisions themselves have second-, third- and fourth-order implications. Also, these implications cannot be 'hard-coded'; they keep evolving at a rate that makes it necessary for the decision maker not only to follow rules but also to exercise judgment. These are places where a 'human touch' is required even in a knowledge role. (I say 'even' because knowledge roles by definition should be easier to codify and outsource to computers.)

One such area that is certainly a judgment-based role is risk management. Risk management is anticipating and mitigating the different ways in which downside losses can impact a system. Risks can be of two types. First, there are standard 'known' risks whose frequency, pattern of occurrence and downside loss impact are comparatively well understood and therefore easier to plan for and mitigate. Second, there are unknown risks whose occurrence and intensity cannot be predicted. Any system needs to be set up (if it wants to survive for the long term, that is) to handle both these types of risk. But as you make the system more mechanized to handle the first type of known and predictable risks, it has less ability and flexibility to handle the second, 'unknown' type.

This is where the role of an experienced risk manager comes in. A risk manager typically has a fair amount of experience in his space. Additionally, he has the ability to maintain mental models of systems with multiple interactions, whose impacts span multiple time periods. The role of the risk manager is to devise a system that works equally effectively against both known and unknown risks. The system needs to be such that standard breakdowns are handled without intervention. At the same time, a dashboard of metrics is created for the system, giving visibility into the fundamental relationships underlying it. When the metrics point to the underlying fundamentals being stretched to breaking point, the occurrence of the unexpected risks becomes imminent, and the risk manager steers the system away from the downside implications that can result.
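One way to picture the "dashboard of metrics" idea is as a set of operating bands, with any reading outside its band flagging that the fundamentals are being stretched. A minimal sketch, with metric names and bands invented for illustration:

```python
# Invented example metrics and their normal operating bands.
OPERATING_BANDS = {
    "leverage_ratio": (5.0, 20.0),      # assets / equity
    "liquidity_coverage": (1.0, 10.0),  # liquid assets / short-term outflows
    "loss_rate": (0.0, 0.04),           # annual credit losses / loans
}

def stretched_metrics(readings):
    """Return the metrics whose current reading falls outside its band."""
    return [(name, value)
            for name, value in readings.items()
            if not (OPERATING_BANDS[name][0] <= value <= OPERATING_BANDS[name][1])]

print(stretched_metrics({"leverage_ratio": 24.0,
                         "liquidity_coverage": 1.3,
                         "loss_rate": 0.02}))
# -> [('leverage_ratio', 24.0)]  the system is stretched on leverage
```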

My role in my industry is a risk management role, and the role has given me the chance to think deeply about risk and failure modes. And it certainly seems clear to me that there will always be room for human judgment and skills in this domain.

Sunday, June 7, 2009

Stress testing your model - Part 2/3

Continuing on the topic of risk management for models. After building a model, how do you make sure it remains robust under working conditions? More importantly, how do you make sure it works well under extreme conditions? We discussed the importance of independent validation for empirical models in a previous post. In my experience, model failures have been frequent when the validation process has been superficial.

Now, I want to move on to sensitivity analysis. Sensitivity analysis involves understanding the variability of the model output as the inputs to the model are varied. The inputs are changed by plus or minus 10 to 50% and the output is recorded. The range of outputs gives a sense of the various outcomes that can be anticipated and need to be prepared for. Sensitivity analysis can also be used to stress test the other components of the system which the model drives. For example, let's say the output of the model is a financial forecast that feeds a system used to drive deposit generation. The sensitivity analysis output gives an opportunity to check the robustness of the downstream system: by knowing that one might occasionally be required to generate deposits at 4-5 times the usual monthly volumes, one can prepare accordingly.
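Here is a minimal sketch of this kind of sensitivity analysis, using a toy deposit-forecast model and invented baseline inputs, perturbing each input by -50% to +50% and recording the output range:

```python
# A toy model and baseline inputs, invented purely for illustration.
def deposit_forecast(rate_paid, market_size, share):
    """Toy model: monthly deposit volume driven by three inputs."""
    return market_size * share * (1 + 5 * rate_paid)

baseline = {"rate_paid": 0.02, "market_size": 1_000_000, "share": 0.15}
shocks = [-0.5, -0.25, -0.1, 0.1, 0.25, 0.5]

for name in baseline:
    outputs = []
    for shock in shocks:
        inputs = dict(baseline)
        inputs[name] = baseline[name] * (1 + shock)   # perturb one input at a time
        outputs.append(deposit_forecast(**inputs))
    print(f"{name}: output range {min(outputs):,.0f} to {max(outputs):,.0f}")
```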

Now, sensitivity analysis is one piece of stress testing that has usually been misdirected and incomplete. Good sensitivity analysis looks at both the structural components of the model and the inputs to the model. Most sensitivity analyses I have encountered stress only the structural components. What is the difference between the two?

Let's say you have a model to project the performance of a bank's balance sheet. One of the typical stresses one would apply is to the expected level of losses on the bank's loan portfolio. A stress of a 20-50%, and sometimes even 100%, increase in losses is applied and the model outputs are assessed. When this is done consistently for all the other components of the balance sheet, you get a sense of the sensitivity of the model to the various components.

But that's not the same as sensitivity to inputs. Because inputs are rooted in real-world phenomena, their impact is usually spread across multiple components of the model. For example, if the 100% increase in losses were driven by a recession in the economy, there would be other impacts to worry about. A recession is usually accompanied by a flight to quality from investors, so in a recessionary outlook the value of equity holdings could crash as well, as investors move out of equities (selling) and into more stable instruments. A third impact could be that of higher capital requirements on the value of traded securities: as other banks face the same recessionary environment, their losses could increase to such an extent that a call to raise capital becomes inevitable. How does one raise capital? The easiest route is to liquidate existing holdings, driving a further fall in the market prices of traded securities and putting further stress on the balance sheet.

So the scenario of running a 50% increase in loan losses in isolation is a purely illusory one. When loan losses increase, one has to contend with what the fundamental driver could be and how that fundamental driver can impact other portions of the balance sheet.
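To make the distinction concrete, here is a toy contrast between a single-component stress and an input-driven recession scenario that hits several components at once; all figures and sensitivities are invented for illustration:

```python
def net_income(interest_income, loan_losses, equity_gains, trading_gains):
    """Toy annual P&L (in $bn) for a bank balance sheet."""
    return interest_income - loan_losses + equity_gains + trading_gains

baseline = net_income(interest_income=30, loan_losses=10, equity_gains=5, trading_gains=3)

# Component stress: only loan losses move (+100%).
component = net_income(interest_income=30, loan_losses=20, equity_gains=5, trading_gains=3)

# Scenario stress: a recession doubles loan losses AND marks down equity
# holdings and traded securities as other banks deleverage into the same market.
scenario = net_income(interest_income=30, loan_losses=20, equity_gains=-4, trading_gains=-6)

print("Baseline net income:      ", baseline)   # 28
print("Component-only stress:    ", component)  # 18
print("Recession scenario stress:", scenario)   # 0
```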

The other place where sensitivity analysis is often incomplete is in not looking at the impact on upstream and downstream processes and strategies. A model is never a stand-alone entity: it has upstream sources of data and downstream uses of its output. So if the model has to face situations with extreme input values, what could the implications be for upstream and downstream strategies? These are the questions any serious model builder should be asking.

This discussion on sensitivity analysis has hopefully been eye-opening for modeling practitioners. We will go on to a third technique, Monte Carlo simulation, in another post. But before we go there, what other examples of sensitivity analysis have you seen in your work? How has this analysis been used effectively (or otherwise)? What are good graphical ways of sharing the output?
