Wednesday, December 28, 2011

Car Insurance savings and too-clever marketing

A quick rant post.

I have been reflecting a bit on GEICO, Progressive and others claiming that you can save a lot of money (15%, or so many dollars) by switching to their company. This is a highly deceptive form of advertising, and here's why.

First, the marketing message taken at face value implies causation: switch to company X and you will save money. In reality, the sequence of events is the opposite. People typically shop for a quote, and only when the quote saves them money over what they currently pay do they switch over. So for every person who switches, there are likely one or more people who don't switch, because they would save nothing or too little for it to be worth the hassle. Saying "switch and you will save money" is therefore somewhat disingenuous. Only some people save money with Company X, and they are the ones who switch.
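A minimal simulation makes the selection effect concrete. All the numbers here (the spread of quotes, the "hassle threshold") are invented for illustration; the point is that even when the new company is on average no cheaper, the subset of shoppers who choose to switch still shows healthy average savings.

```python
import random

random.seed(42)

N = 100_000       # shoppers who request a quote (assumed)
HASSLE = 50       # minimum annual savings worth the switching hassle (assumed)

# Savings = current premium minus the new quote, centered on zero:
# on average, the new company is NO cheaper than the old one.
savings = [random.gauss(0, 200) for _ in range(N)]

# Only shoppers whose quoted savings clear the hassle threshold switch.
switchers = [s for s in savings if s > HASSLE]

print(f"Average savings, all shoppers:   ${sum(savings) / len(savings):7.2f}")
print(f"Average savings, switchers only: ${sum(switchers) / len(switchers):7.2f}")
print(f"Share of shoppers who switch:     {len(switchers) / N:.1%}")
```

The average savings across everyone who shopped is roughly zero, yet the switchers alone average close to $200, purely by self-selection.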

The second part of the deception lies in the dollar amount of the savings. This information is typically gathered by surveying customers who have switched. Why is that deceptive? Because a number of behavioral economics studies have shown that we human beings tend to rationalize. We give ourselves more credit than is justifiable, and this manifests itself in a number of ways: most people think they are above-average drivers, people over-estimate the investment returns they make, and so on. So when a customer has made what they consider an extremely smart decision to switch, they are likely to also over-estimate the savings they have realized, out of pride in the decision they just took. It is therefore very likely that the reported savings number is inflated to some extent.

So "save X% by switching to GEICO" is really a smart ploy to get people to ask for a GEICO quote. In an extremely crowded market-place, it doesn't hurt at all to get one. But promising savings in the language these companies use doesn't seem very above board.

Monday, December 12, 2011

Great Recession - A new theory linked to productivity improvement

I wrote a couple of years back about what has come to be known as the Great Recession of the twenty-first century. I remarked that the recession appeared to show no signs of abating, and recent events seem to have borne that out. While GDP growth in the US is in positive territory, it barely is. And the problems in Europe, along with a couple of natural disasters in Asia (the earthquake in Japan and the flooding in Thailand), have put the brakes on the emerging-markets engine that was pulling the world economy along for the last four years.

In the meantime, a number of well-argued articles and books have been written about the genesis of the crisis, largely focused on the financial sector, the US mortgage market and the excesses there. The Nobel Prize-winning economist Joseph Stiglitz approaches the issue from a slightly different angle in a recent write-up in Vanity Fair. Stiglitz argues that the Great Recession has its roots in something more benign than mortgages gone toxic: the productivity increases of the last two decades, which caused job categories employing very large portions of the labor force to become essentially redundant in the economy. What is interesting about this theory is that, Stiglitz argues, this is exactly what happened leading up to the Great Depression. The productivity improvements now are in manufacturing and services; the productivity improvement then was in agriculture. To quote: "In 1900, it took a large portion of the U.S. population to produce enough food for the country as a whole. Then came a revolution in agriculture that would gain pace throughout the century—better seeds, better fertilizer, better farming practices, along with widespread mechanization. Today, 2 percent of Americans produce more food than we can consume."

An extremely interesting article, and a forcefully made argument about the cause of the crisis and what could be done to solve it.

Saturday, December 10, 2011

Computing based on the human brain - the answer to Big Data?

A slight detour from my usual subjects around predictive analytics. I came across this recent article that anticipates the direction of modeling and predictive analytics in general: a move away from the current model of computer design, based on the famous von Neumann architecture, toward something much more similar to what computing, modeling and decision making are ultimately designed to emulate, viz. the human brain.

IBM Watson - Super-computer or energy hog?
First, some background. Computer architecture has consistently followed the classic von Neumann design. Without getting into too many details, what the architecture boils down to is a separate processing unit (known variously as the CPU, ALU or microprocessor) and a separate memory unit, connected by a communication channel called a bus. This architecture has served computing well over the past 50 years, and has now brought the computer within reach of every single human being on Earth. The fact that 2-year-old toddlers are extremely adept with the Apple iPad is testimony to the success of the von Neumann model. After all, nothing succeeds like success. Even as processor chips have become more advanced and started incorporating their own internal memory modules (called cache memory), the von Neumann architecture has been faithfully replicated.

But successful doesn't mean ideal, or optimal, or even efficient. The burn of the laptop on my thigh as I type this post is an indication that the current computing model, while successful, is also an extremely power-hungry one. The IBM Watson machine, famous for playing and beating human opponents in Jeopardy, is also famous for consuming more than 4,000 times the power of its human competitors: the human brain functions on about 20 watts, while Watson consumes more than 85,000 watts. And all that Watson can do is play Jeopardy. The human brain can do a lot more, like writing, recognizing patterns, expressing and feeling emotion, negotiating traffic, even designing computers!

So what might a more efficient model look like? Well, it looks a little more like the human brain. In the brain, logical problem solving, thinking and memory are all handled by one element of computing infrastructure, so to speak: the neuron, interconnected with other neurons through synapses. And that is the model being pursued by IBM in collaboration with Cornell, Columbia, the University of Wisconsin and the University of California, Merced. The project is also funded by DARPA, and more details can be found at the link at the start of the page. The big a-ha moment, according to the project director and IBM computer scientist Dharmendra Modha (arriving in the middle of a vacation, no less), was to pursue brain-inspired computing through the fundamental design of the processor chip, the hardware, rather than through software. To quote some details from the New York Times article by Steve Lohr,
The prototype chip has 256 neuron-like nodes, surrounded by more than 262,000 synaptic memory modules. That is impressive, until one considers that the human brain is estimated to house up to 100 billion neurons. In the Almaden research lab, a computer running the chip has learned to play the primitive video game Pong, correctly moving an on-screen paddle to hit a bouncing cursor. It can also recognize numbers 1 through 10 written by a person on a digital pad — most of the time.
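To make the contrast with the von Neumann split concrete, here is a toy "leaky integrate-and-fire" neuron, a standard textbook abstraction and emphatically not IBM's actual chip design. Note how the synaptic weights (the memory) live in the same object as the integration logic (the compute), instead of sitting in a separate memory unit across a bus.

```python
# A toy leaky integrate-and-fire neuron. Purely illustrative: the point is
# that memory (synaptic weights) and compute (integration) are co-located,
# unlike the von Neumann separation of CPU and RAM.

class LIFNeuron:
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # synaptic memory, stored at the neuron
        self.threshold = threshold  # membrane potential needed to fire
        self.leak = leak            # fraction of potential retained per step
        self.potential = 0.0

    def step(self, spikes):
        """spikes: list of 0/1 inputs, one per synapse. Returns 1 on firing."""
        self.potential = self.leak * self.potential + sum(
            w * s for w, s in zip(self.weights, spikes)
        )
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0

neuron = LIFNeuron(weights=[0.3, 0.5, 0.4])
for t, spikes in enumerate([[1, 0, 1], [0, 1, 0], [1, 1, 1]]):
    print(f"t={t}: input={spikes} -> fired={neuron.step(spikes)}")
```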

Why is this relevant to predictive analytics?
What is a mention of this project doing in a predictive analytics blog? It has to do with Big Data. Online, mobile, geo-spatial and RFID technologies are creating streams of data in amounts that would have been impossible to conceptualize even a decade ago. As the availability of data increases and conventional computing and storage infrastructure gets overwhelmed, we will have to rely on a distributed memory-and-computing set-up that is more similar to the human brain. A space worth watching.

Thursday, December 8, 2011

Tesco Clubcard - Metrics and Success Factors

Getting back to this topic after a really long break. In the first part, we reviewed Tesco’s loyalty program and the types of business decisions aided by the Clubcard. The Tesco Crucible maintains information about:
1. Customer demographics
2. Detailed shopping history
3. Purchase tastes, frequency, habits and behaviours
4. Other individual level indicators obtained from public sources

Tesco then uses this information for a number of business benefits such as:
1. Loyalty
2. Cross Sells
3. More optimal inventory and store network planning
4. Optimal targeting and marketing of manufacturer’s promotions
5. Generating customer insights and marketing those insights

The previous article, which details these points, is linked here

So what else goes into making this program successful?

Metrics
One important factor is the set of metrics Tesco uses to measure success; there are primarily two. The first is the change in share of wallet. From the demographic information collected, Tesco estimates the total spend of each household, and from that estimate a share of wallet can be computed using Tesco sales. This is of course an estimated measure, but given the right kinds of assumptions, not a particularly ambitious estimate to make.

(The key here is to make sure the estimates are generated in an unbiased manner. An estimated metric is always prone to manipulation. For instance, a small increase in unit sales can be projected as a larger increase in share of wallet by manipulating the projected overall spend. This problem can be avoided if the estimation is done by an independent group that is incentivised to get its estimates right rather than to grow the reported volume of sales. This is the role of the Decision Sciences groups found in many organizations.)
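As a concrete sketch, with entirely hypothetical figures, share of wallet is just Tesco sales divided by the estimated total household spend; the last lines show how a biased spend estimate can flatter the metric, which is exactly the manipulation risk described above.

```python
# Share of wallet = Tesco sales / estimated total household spend.
# All figures below are hypothetical, purely for illustration.

def share_of_wallet(tesco_sales: float, estimated_total_spend: float) -> float:
    return tesco_sales / estimated_total_spend

# Year-over-year change for one household
last_year = share_of_wallet(tesco_sales=2_400, estimated_total_spend=8_000)
this_year = share_of_wallet(tesco_sales=2_520, estimated_total_spend=8_000)
print(f"Honest estimate:       {last_year:.1%} -> {this_year:.1%}")

# The manipulation risk: the 'change in share of wallet' can be inflated
# simply by revising the household spend estimate downward.
flattered = share_of_wallet(tesco_sales=2_520, estimated_total_spend=7_200)
print(f"Flattered denominator: {last_year:.1%} -> {flattered:.1%}")
```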

A related measure of share of wallet is the number of purchase categories into which Tesco has penetrated. Remember that Tesco is present in many categories: groceries, apparel, durables, banking products, vacation packages, insurance, auto sales, pharmacy products, gas and so on. The effectiveness of the Tesco brand is realized when the customer begins to use Tesco for multiple product categories. So that is a useful metric to track, both as an indication of overall profitability and as a measure of marketing and cross-sell effectiveness.
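The computation itself is simple: count distinct categories per customer. A minimal sketch, with made-up transactions:

```python
from collections import defaultdict

# (customer, category) purchase records; all hypothetical.
transactions = [
    ("cust_1", "groceries"), ("cust_1", "groceries"), ("cust_1", "pharmacy"),
    ("cust_2", "groceries"), ("cust_2", "apparel"),
    ("cust_2", "banking"),   ("cust_2", "insurance"),
]

# Category penetration = number of distinct categories a customer buys from.
categories = defaultdict(set)
for customer, category in transactions:
    categories[customer].add(category)

for customer, cats in sorted(categories.items()):
    print(f"{customer}: {len(cats)} categories -> {sorted(cats)}")
```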

The second main metric is pure customer behaviour from a frequency standpoint: how is the company changing customers’ frequency of visit, and what sorts of visits is it getting from them? With the wide use of smartphones and the tracking capabilities inherent in these phones, it is possible to gather a lot of spatial and temporal information: Which store? How long a visit? At what time of the day or week?
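A small sketch of the frequency metric from timestamped visits (dates invented for illustration):

```python
from collections import Counter
from datetime import datetime

# Timestamped store visits for one customer; all hypothetical.
visits = [
    "2011-11-05 09:30", "2011-11-08 18:10", "2011-11-12 10:05",
    "2011-11-19 18:45", "2011-11-26 09:55", "2011-12-03 19:20",
]
stamps = [datetime.strptime(v, "%Y-%m-%d %H:%M") for v in visits]

# Visits per week over the observed span.
weeks = (stamps[-1] - stamps[0]).days / 7
print(f"Visits per week: {len(stamps) / weeks:.2f}")

# A crude temporal profile: morning vs evening shopper?
daypart = Counter("morning" if s.hour < 12 else "evening" for s in stamps)
print(f"Day-part mix: {dict(daypart)}")
```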

Other Success factors
No company can maintain sustained growth and profitability purely on the strength of analytics, without addressing the human face of the analytics - in other words, the customer service aspect. Tesco management was careful to convey to store staff that the Clubcard program was an important value-add for customers, and hence an inherent part of customer service rather than something fundamentally manipulative. This was done through a communication program that was rolled out across all stores and involved every store employee of Tesco.

The other critical success factor was management vision. Many organizations tend to see these programs as cost drivers and strive to minimize cost while maximizing customer satisfaction, often conflicting goals. But Tesco management was clear about the ultimate goal of the Clubcard, which was to drive loyalty. What also helped was a breadth of vision that allowed for multiple revenue streams from the Clubcard program that were not directly related to the core idea of giving back to customers and loyalty benefits.

Another philosophy that the Tesco management employed fairly successfully was test-and-learn. Most of the major improvements and enhancements were first piloted in smaller stores. Extremely rigorous measurement mechanisms were then employed to make sure that the right inferences were drawn from the test.
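For the measurement piece, the workhorse can be something as simple as a two-proportion z-test comparing pilot stores against controls. The figures below are hypothetical, and this is only a sketch of the idea; the post doesn't describe Tesco's actual test methodology.

```python
from math import sqrt

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> float:
    """z-statistic for the difference between two proportions (pooled SE)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# e.g. share of customers redeeming a Clubcard offer: pilot vs control stores
z = two_proportion_z(successes_a=1_180, n_a=10_000,   # pilot:   11.8%
                     successes_b=1_020, n_b=10_000)   # control: 10.2%
print(f"z = {z:.2f}  (|z| > 1.96 -> significant at the 5% level)")
```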

Overall, the key realization was that the Clubcard program is not just an electronic sales promotion; rather, the entire business has to be re-engineered to be led by customer insight.

In my final piece, I will touch on the impact on the overall bottom line - and the top-line benefits that came from the Clubcard program.
