Saturday, October 18, 2014

Learnings from the Strata 2014 Conference

After three hectic days at the Strata conference trying to appreciate the poetry, I am on my way back on the Acela from New York to DC. There was a ton to learn from the conference and words can only do so much justice, but there is a set of learnings I want to share from my perspective. Caveat: these are all colored by my knowledge, my personal context, and my organizational context, but many of these learnings are, I am sure, going to resonate with a lot of people. Another caveat: there is no neat structure to what I am going to share, so treat it as such. So here goes:

1. Map Reduce as we know it is already behind us
MapReduce as a specific set of technologies written in Java (not as an overall philosophy – indeed, MapReduce has become a philosophy in much the same way Agile has) is already behind us. MapReduce 2.0 came out late last year and has definitely been an improvement on MapReduce 1.0. But when it comes to large-scale ingestion of data and making it usable, the mainstream has shifted to Apache Spark. What is surprising is that Spark as a technology is fairly new and not very stable. But the pace of technology evolution is such that people are finding uses for Spark in a number of really relevant and creative ways. And within three years, technologies like Spark will have replaced MapReduce almost entirely. (Even though some people are going to argue there is a place for both.)
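To make the distinction concrete, here is a toy word count in plain Python that mimics the map → shuffle → reduce phases of the MapReduce philosophy. This is an illustrative sketch with made-up input documents, not actual Hadoop or Spark code:

```python
from itertools import groupby

docs = ["spark replaces mapreduce", "mapreduce is a philosophy"]

# Map phase: emit a (word, 1) pair for every word in every document
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle phase: sort and group the pairs by key (the word)
shuffled = groupby(sorted(mapped), key=lambda pair: pair[0])

# Reduce phase: sum the counts for each word
counts = {word: sum(count for _, count in pairs) for word, pairs in shuffled}

print(counts["mapreduce"])  # appears in both documents, so 2
```

In Spark, the same computation collapses into a chain of transformations on a distributed dataset – roughly `textFile(...).flatMap(split).map(word -> (word, 1)).reduceByKey(+)` – and that concision, together with in-memory execution, is a large part of its appeal.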

2. Using BigData tools vs investing in custom development on agile technologies is an important decision
With the emergence of the open source software movement, and the ability to easily share software, learnings, and approaches over a number of internet-based platforms, it is no surprise that a lot of startups see open source as an easy way to bootstrap their product development. Over the years, open source software has become the norm for driving product development and data infrastructure creation within almost all tech and digital industry leaders.

With the Cambrian explosion of product development in the data space, a lot of the products being released are tools or building blocks that enable efficiencies in data processing and data pipelines. So an organization that needs to harness and use BigData for its day-to-day needs has a very important decision in front of it. Should it do custom development on the generic open source technologies, and thereby allow its solutions to evolve along with the underlying technology, or should it bring in third-party tools for important parts of its data processing? (This is a variant of the classic Build vs Buy question, but it has some nuances because of the open source explosion.)

Each decision comes with its pros and cons. Working with tools improves speed to market, but forces the buying organization to accept whatever constraints the tool imposes. Working on generic technologies removes this dependency and allows for natural product evolution, but at the cost of development time, lower speed to market, and potentially higher costs. My specific observation here was around how my organization has chosen to ingest data into its HDFS environment. Should we be doing custom development using open source data ingestion frameworks such as Apache Flume or Storm, or should we use a product like Informatica that comes with a number of desirable features out of the box? These are not easy decisions, and I think the whole Build vs Buy question on BigData needs its own blogpost.

3. Open source is here to stay

I think I might have said this before, but open source is here to stay and is going through a Cambrian explosion. Enough said on that!

4. Adoption of new and dynamic technologies needs to be multi-threaded

As relative late adopters of the BigData platform, my organization has been following a linear and established path to BigData adoption. The goal has been to capture the low-hanging fruit of BigData around cost savings – by taking spend away from RDBMS platforms. That is a perfectly legitimate goal, and I think we are going about it in a very structured manner. But in a world of fast-evolving technologies, this focus creates the risk of a blind spot within the overall ecosystem around other use-cases of the technology. In our case, real-time data use-cases and streaming analytics are a big blind spot from my vantage point. The risk is that by the time we capture the low-hanging fruit by being systematic and focused, we will have lost a lot of ground in other areas and will be similarly behind when the next technology wave happens.

So my view here is that we need to be multi-threaded in our technology adoption. We need to have specific goals and be focused on them to make these new technologies mainstream – but at the same time, we need to be aware of other applications of the technology and make sure there are investments in place to build our capabilities in the areas that are not an immediate focus. We also need a SWAT team working on even newer technologies and ideas that are likely to become mainstream 12 months from now.

Just a smattering of my immediate thoughts from Strata. As promised, not very organized, but I did want to share some of my unvarnished opinions.

Wednesday, October 15, 2014

Third time at Strata and why that's like reading poetry

I am heading up to New York to the 2014 Strata conference. It has become a ritual of sorts to go to Strata in October as I have been going for the last three years.

My first trip was in 2012 where, admittedly, I was going as a BigData skeptic. Or put more accurately, I was going in with an open mind about the possibilities of BigData but definitely under-exposed in terms of its capabilities and what organizations and professionals stood to gain. I walked away with some appreciation of the BigData case studies, but many of the examples seemed like applying technology for technology's sake.

When I went in 2013, my perspective was certainly more informed. I went to understand the big data ecosystem in more detail, having spent a considerable amount of time both reading up on the technologies and working closely with practitioners in the area. I left the conference with a much wider understanding of the entire HDFS and BigData ecosystem. Props here to Silicon Valley Data Sciences, whose presentation was detailed in both its breadth and depth, a difficult thing to accomplish.

Also, I broadened my own perspective in terms of the kinds of problems that can be solved by BigData. Previously, my field of vision was narrowly focused on business problems. The problem with that lens is that business problems are a reflection of the past - how the business operated yesterday and the challenges that were created in a pure economic sense. What this approach is blind to is the huge world of consumer and human problems that need exploring and solving using BigData. Opening my mind to a whole host of consumer/human challenges made me aware of the need to harness and harmonize different types of data, the mash-ups and insights that could be created, and the different world one could envision for customers. I also had the opportunity within my org to work closely on a classic BigData problem - building a holistic view of the customer across functional and product silos. Working with and talking to people with vastly greater experience and hands-on knowledge made me more informed, allowed me to appreciate even finer nuances in the space, and helped me form even bolder customer and business value propositions.

So as I head up to New York to attend Strata for the third year, my mind goes to poetry. I have always felt that you need to get past the hump of understanding a language before you can appreciate the intricacies and beauty within it - the rhythm and the poetry of the language. And that requires deep study. Countless hours of reading, multiple hundreds of tweets. And hard, hard hours of whiteboard sketching, debating ideas, and learning from the experts.

And so here's listening to some good poetry over the next three days at Strata. Should be fun.

Monday, October 6, 2014

A/B Testing Part 3 - Building a culture of experimentation

In my previous posts, I have talked about the organizational readiness and the technical preparedness needed to do online A/B testing effectively. These baseline elements are foundational and need to be in place for any testing and experimentation approach to be successful.

The next stage is actually building a culture of experimentation and testing amongst product creators. There are a number of mental barriers to overcome. I have talked about the need for product managers to start appreciating "testing in the wild" as a useful addition to any prototype or usability testing. Another mental barrier is the fear that a test might amount to nothing - and the resulting beliefs that one shouldn't waste valuable dev cycles testing minor improvements, and that testing should be "reserved" only for really big changes.

This is where it is important to have a schooling in developing product hypotheses and in the ways those hypotheses can be proved (or disproved). In my organization, we spent (and continue to invest) a considerable amount of time discussing the principles of testing and experimentation, and making sure that product managers walk away with a pretty good understanding of the overall "scientific method" - i.e. the need to develop and validate hypotheses through a systematic process.

We spent a good amount of time on the following questions:

- What KPI or metric are you trying to influence?
Specifically, which important customer-related or business-related metric are you trying to impact? This is an important step to focus the experimentation effort on the things that really matter from a product standpoint. To walk through an example, let us say one of the metrics we want to impact is the "bounce rate" on a website.
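Pinning the metric down precisely matters: everyone running the test should compute it the same way. As an illustration, here is a minimal sketch (with made-up session data) of how bounce rate might be calculated, assuming a bounce is defined as a single-page session:

```python
# Hypothetical session log: (visitor_id, pages_viewed_in_session)
sessions = [("v1", 1), ("v2", 3), ("v3", 1), ("v4", 5), ("v5", 1), ("v6", 2)]

# A "bounce" is a session where the visitor viewed exactly one page
bounces = sum(1 for _, pages in sessions if pages == 1)
bounce_rate = bounces / len(sessions)

print(f"bounce rate = {bounce_rate:.1%}")  # 3 of 6 sessions bounced → 50.0%
```

The exact definition (one page view? no interaction? a time threshold?) is a choice the team has to agree on before the experiment starts, or the results will not be comparable across tests.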

- What are your hypotheses on how you can influence the metric?
What are the different ideas that can be employed to lower the bounce rate? What underlying consumer behavior are we trying to change here? By the way, a lot of these hypotheses need to be generated either from data analysis (so, something that shows that repeat visitors have high bounce rates) or from a detailed understanding of customer needs (through techniques like design thinking and empathy interviews). So one could hypothesize that one of the reasons why bounce rate is high is because our website does not effectively recognize repeat visitors on the site. Or that the reason why bounce rates are high is because of too much content on the page. Or that the call-to-action button needs to be of a different color and font to stand out from the page.
One other quick thing to point out: there might be situations where the purpose of the test is to generate new behavioral hypotheses, not necessarily to prove existing ones. Take a typical sales funnel with lots of form fields and a few call-to-action buttons. One could come up with variants of the font used, the color of the button, and the shape and size, and test all of them to see which combination of form field size + font + button shape + button color is optimal. The results of the test create a body of learning around what customers prefer, which in turn could influence other such funnels in future. This approach is also useful when some kind of new technology or feature is being introduced. Imagine doing a banking transaction on a wearable device like Google Glass. Given the newness of the technology, there isn't typically one proven answer, and we need to get to the proven answer through experimentation.
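The combinatorial nature of such a test is easy to sketch. Assuming hypothetical values for each design dimension, the full set of variants is just the cross product:

```python
from itertools import product

# Hypothetical design dimensions for the call-to-action test
fonts = ["Arial", "Georgia"]
colors = ["orange", "blue", "green"]
shapes = ["rounded", "square"]

# Every combination of font x color x shape is a candidate variant
variants = list(product(fonts, colors, shapes))

print(len(variants))  # 2 * 3 * 2 = 12 variants to split traffic across
```

The count grows multiplicatively with each new dimension, which is why multivariate testing tools often use fractional designs rather than sending traffic to every single cell.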

- Finally, what experiment could one run to test these hypotheses?
Specifically, what will the control experience look like, and what would be the treatment? And by offering the treatment, what metric are we expecting to move? So in the bounce rate example, the test could be to show repeat visitors a version of the page that recognizes them (the treatment) against the current generic page (the control), and to measure whether the bounce rate for the treatment group is lower.
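Once the control and treatment have run, deciding whether the metric truly moved calls for a significance check rather than eyeballing the two numbers. One common choice is a two-proportion z-test; the sketch below uses entirely made-up traffic numbers:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two proportions (pooled)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: control bounced 520 of 1000 visitors,
# treatment bounced 460 of 1000 visitors
z = two_proportion_z(520, 1000, 460, 1000)

# |z| > 1.96 corresponds to significance at the 95% confidence level
print(round(z, 2))
```

Here z works out to roughly 2.68, above the conventional 1.96 threshold, so in this made-up scenario we would conclude the treatment genuinely lowered the bounce rate rather than the difference being noise.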

It is typical to spend a few weeks or even months just on this part of the journey - inculcating a testing and experimentation culture within the organization. I do want to emphasize the need to get this culture piece right throughout the conversation. It is a known psychological quirk that human beings tend to be far too sure of things that are inherently uncertain. We are "sure" that a customer is just going to fall in love with a product we have built in a certain way, just because it was our idea. It is important to challenge this internal wiring, which can get in the way of true knowledge seeking.