
Data Goes Through Phases on the Way to Insights

Over the last few years I've been primarily building medium to large scale custom real time analytics platforms for clients. It's kept me pretty busy. I've done some for startups and even one for a big Fortune 50 client. This has been awesome in a variety of ways. Much of that work is finally about to see sunlight ('net light) and starting to hit the wires, which makes me happy of course.

Along the way I have seen patterns emerge in these types of systems. They are patterns that at a glance may seem obvious but are anything but when you are down in the weeds dealing with the various challenges associated with building these types of real time business analytics applications. For now, I've decided there are six phases in the life of a data object like a tweet, post, G+ post, email, support call, etc. on its way to becoming meaningful and ultimately measurable. This pipeline looks something like:

Capture -> Distill -> Index -> Compute -> Display -> Interact -> Measure

The measure phase can feed right back into capture so the snake can eat its own tail. Most of the applications I've architected and built with my clients and teams have ended up just like this eventually. They didn't all need each piece right out of the gate and of course, there are more items you can add on to augment this list. But, no matter which direction, tools, or applications we built, they all ended up looking a bit like this pipeline eventually as they matured.
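To make the shape of the pipeline concrete, here is a minimal sketch in Scala of the phases as plain function composition. The DataObject type and the phase functions are entirely made up for illustration; a real system would swap in databases, search indexes, and queues behind each step.

```scala
// A toy model of the pipeline, purely for illustration.
// The DataObject type and phase functions are hypothetical.
case class DataObject(raw: String, tags: List[String] = Nil, score: Double = 0.0)

def capture(raw: String): DataObject   = DataObject(raw)
def distill(d: DataObject): DataObject = d.copy(tags = d.raw.split("\\s+").toList.filter(_.startsWith("#")))
def index(d: DataObject): DataObject   = d // stand-in for writing to a search index
def compute(d: DataObject): DataObject = d.copy(score = d.tags.size.toDouble)

// Capture -> Distill -> Index -> Compute, composed as one function.
val pipeline: String => DataObject =
  (capture _).andThen(distill).andThen(index).andThen(compute)

// Display / Interact / Measure would consume the result downstream.
println(pipeline("loving this #realtime #analytics stuff"))
```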

Capture. This keeps getting easier and easier. Much of it can even be very successfully outsourced now by using tools like DataSift or Gnip. Aggregating and storing the data with things like Node.js, MongoDB, HBase, Cassandra, Redis, and others is making this a bit rote at this point. So, it is much easier now to capture and save arbitrary data streams than ever before.
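As a rough sketch of what the storage side of capture can look like, here is a minimal example that writes one raw item into MongoDB. It assumes a local MongoDB instance and the official MongoDB Java sync driver on the classpath; the database name, collection name, and document fields are invented for illustration.

```scala
import com.mongodb.client.MongoClients
import org.bson.Document

object CaptureSink {
  def main(args: Array[String]): Unit = {
    // Assumes a MongoDB instance running locally on the default port.
    val client = MongoClients.create("mongodb://localhost:27017")
    val items  = client.getDatabase("capture").getCollection("raw_items")

    // A captured item stored as-is; distillation happens in a later phase.
    val doc = new Document("source", "twitter")
      .append("text", "loving this #realtime #analytics stuff")
      .append("capturedAt", System.currentTimeMillis())

    items.insertOne(doc)
    client.close()
  }
}
```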

Distill. This is a combination of things like manually curated filters, NLP for categorization or sentiment, and various other possible "metrics" of a sort. This can be heavily automated using a variety of very useful open source tools/algorithms, services like OpenAmplify, and much more. This part is about taking that raw mess of data and filtering it down to something more meaningful while deriving a data set that can be used for later purposes in the indexing and compute phases.
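A manually curated filter can be as simple as a keyword map. Here is a toy sketch of that idea; the categories and keywords are invented, and a real system would lean on proper NLP and sentiment tooling rather than word matching.

```scala
object Distill {
  // Hypothetical hand-curated categories; a real system would use NLP as well.
  val categories: Map[String, Set[String]] = Map(
    "support"   -> Set("help", "broken", "refund"),
    "praise"    -> Set("love", "awesome", "great"),
    "analytics" -> Set("metrics", "dashboard", "report")
  )

  // Tag a raw piece of text with every category whose keywords it mentions.
  def categorize(text: String): Set[String] = {
    val words = text.toLowerCase.split("\\W+").toSet
    categories.collect { case (cat, kws) if (kws & words).nonEmpty => cat }.toSet
  }

  def main(args: Array[String]): Unit =
    println(categorize("I love the new metrics dashboard")) // praise and analytics
}
```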

Index. Take the data you have saved and distilled, then make it searchable. Doing this at low volume and high latency is dead easy. Doing it at scale in a low latency, high throughput, highly available, and scalable fashion is very non-trivial. You'll notice this is getting harder to do as you move through the pipeline. Solr, ElasticSearch, and other tools have proven helpful in this area in various ways.
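At low volume, pushing a distilled document into something like ElasticSearch is just an HTTP call. Here is a minimal sketch against its standard document API, assuming a local node on port 9200; the "items" index name and the document fields are made up for illustration.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object IndexSink {
  def main(args: Array[String]): Unit = {
    // Assumes an ElasticSearch node running locally on the default port.
    val doc =
      """{"source":"twitter","text":"loving this #realtime #analytics stuff","tags":["praise"]}"""

    val request = HttpRequest.newBuilder()
      .uri(URI.create("http://localhost:9200/items/_doc")) // "items" index is illustrative
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(doc))
      .build()

    val response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())

    println(response.statusCode()) // 201 when the document is indexed
    println(response.body())
  }
}
```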

Compute. All the rage at the moment is creating metrics and scores from the derived data that has been captured, distilled, and indexed. Even apparently simple and embarrassingly parallel algorithms can be an insane can of worms at this stage. When you hit the limits of scale-up you had better have made wise choices at the beginning or you'll be facing a big rewrite. Writing code that scales and creating algorithms that can be scaled is also not easy at all. Tools like Akka and Fabric Engine are ones I've been working with and exploring quite a lot, as well as Hadoop of course and various options for stream based processing. This is where a lot of the FUN is right now; it's a technically very exciting area.
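For a flavor of the kind of thing Akka makes easy, here is a minimal sketch using classic Akka actors that fans scoring work out across a small pool of workers. The Score message, the hashtag-count "metric", and the pool size are all invented for illustration, not a real scoring algorithm.

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.RoundRobinPool

// The "score" here is a made-up stand-in for a real metric.
case class Score(id: String, text: String)

class ScoreWorker extends Actor {
  def receive: Receive = {
    case Score(id, text) =>
      val value = text.split("\\s+").count(_.startsWith("#")) // toy metric: hashtag count
      println(s"$id -> $value")
  }
}

object Compute {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("compute")
    // Fan work out across a pool of workers; the sizing is illustrative.
    val workers = system.actorOf(RoundRobinPool(4).props(Props[ScoreWorker]()), "scorers")

    workers ! Score("t1", "loving this #realtime #analytics stuff")
    workers ! Score("t2", "no tags here")

    Thread.sleep(500) // crude wait so the example prints before shutdown
    system.terminate()
  }
}
```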

Display. Displaying information in a meaningful way to a user takes serious concerted effort. When millions and billions of things are being analyzed in near real time, the limitations of your user interface and your choices for how to display data will become evident very quickly. Be ready to pivot. This is extremely hard to get right the first time. Until you get solid user feedback it's nearly impossible to get it right. This is one of the reasons I'm a fan of Agile, Lean, Lean UX, etc. The dance between UX, IA, front end development, API design, backend development, systems engineering, and more to make these complex distributed high throughput systems all work in a performant, distributed, and scalable way while being a joy to use is definitely a massive challenge. Often, I've found, it is an exercise in keeping things simple and fighting to eliminate unnecessary complexity day after day. Complex systems definitely seem to trend toward entropy, not order.

I'm happy to say all of this is becoming easier quickly as frameworks mature, new tools come along, and knowledge amongst engineers, designers, and product teams continues to increase on average. We build on the foundation of what came before us for sure. Lastly, I'd be remiss not to point out that there is definitely no silver bullet for any of these phases of the pipeline: there is no one programming language that makes it all better, no secret handshake, no super secret message queue or database that solves all your ills. But, this certainly is a fun time to be working in the 'net space.

When is Big Data Actually Big?

There is a quandary for anyone trying to wrap their head around what “BigData” means: when is big data really big? I had a good conversation with a friend of mine, @ckenton, today as we discussed some impactful things he and his company have done for clients over the last couple of years with careful and meaningful data analytics of what would mostly be considered social media data. I was struck by the fact that there was significant impact using what, by data volume measure, was actually not all that much data; perhaps a few gigabytes in aggregate in each case we discussed.

On another front, I have two active projects for two very different clients right now where I and my teams have architected and built systems from scratch that crunch from the 10’s of 1000’s to the millions of pieces of data per day in near real time. We’ve created code, frameworks, and modules and used off the shelf kit whenever we could. There have been moments of bliss and moments of solid wall to forehead pounding frustration. We’re using tools like MongoDB, Riak, node.js, PHP, Redis, Scala, Java, Akka, AWS, Capistrano, Jenkins, Chef, Git and more. We’re using flexible agile workflow models with business agreements and contracts that match. We are doing all this to analyze data. With all of this we are crunching what some would call big data and it’s definitely growing very, very quickly by volume. But, it is not big data because of the ever growing volume. It’s big data because impactful meaning is extracted from it, and the end users of these insights from otherwise chaotic looking data streams can make impactful business decisions quickly for their contextual needs.

In summary, I’ve come to think that Big Data is Big when the insights derived from it are truly meaningful and potentially significantly impactful. It doesn’t matter if it’s a few Gigabytes or a few Petabytes of data. The technical challenges will vary of course depending on data volume, but what really matters is what you learn from the data you have and then, most importantly, what you do with that newfound knowledge once you have it in your grasp.