analytics

QSNB Meetup v1.3 - Feb 26!

The Quantified Self North Bay Meetup Group http://bit.ly/11fe6vb will be meeting on February 26th, 2013 for its 3rd scheduled meetup.

If you are interested in what Quantified Self is up to, this could be an interesting meeting to attend. So, if you find yourself in or near San Rafael, CA after work on 26 Feb 2013, come and join some curious folks who are all interested in what we can learn from ourselves and each other with smart tracking and analysis of the data we generate.

QS is a large and growing community, so if you aren't near here on the 26th, be sure to check out the upcoming meetings around the world on the main site.

http://quantifiedself.com/2013/01/the-quantified-self-community/

I hope to see you on the 26th!

http://www.meetup.com/quantified-self-north-bay/events/99809072/

Moving at the Speed of Cloud

The majority of my work in the last three years or so has been all about receiving, pushing, pulling, and generally wrangling streams of data (mostly social data) for the purposes of analytics, comparison, or storage across a broad range of products and services for startups (one of my own) and Fortune 500 companies. It's been keeping me busy. All of this for the ultimate purpose of helping businesses make better, more well-informed decisions about products, services, and more.

During this time my colleagues and I have developed the relationships, partnerships, technology stacks, and processes necessary to deliver these types of applications very quickly and at a high level of quality. This has been fun all in all, and it's something for which demand seems to be growing quickly.

To give a sense of the technology "stack" I've mostly settled on for solving these types of problems, here is what we are using:

Languages: Scala, Java, Node.js, PHP, Ruby

Frameworks: Symfony2, Play 2.0, Express.js, Twitter Bootstrap

Data Stores: MySQL, MongoDB, Riak, Redis

Infrastructure: Amazon Web Services

Orchestration: Chef, custom scripting, AWS CloudFormation

That's just a high-level snapshot, of course; there are a lot of details down inside each of those items, from favored libraries to DB clients and configuration management frameworks.

The best part for me is that, for the first time in a long time, many businesses seem to understand and believe in the value of applying technology to business problems as a first-order task.

The drive for big data aggregation and analytics is a natural evolution of the maturation of cloud computing as both a technology and a service/process. The continued evolution of programming languages, application frameworks, and even the general understanding of distributed service-oriented architectures and how to program REST APIs is all improving at such an incredible rate that it's just an awesome time to be creating software.

So much of what we are doing now has been "around" in one form or another for a long time. The science in computer science laid the foundations quite some time ago. It's only now that so much is becoming so accessible and the information on how to use all these tools is readily available.

I read a recent article/survey posted to Forbes.com that said the cloud is still three years away from its full impact. The first CloudCamp, where I did a session on developing for the cloud, was in 2008. That's only four years ago, and look how much has changed! Awesome.

From where I sit, this is an exciting time with nearly unlimited possibilities. Ideas are critical. Execution is just as important. If you want to talk about any of these things, I'm usually found either in San Francisco or San Rafael, so let's chat! Good times!!

Data Goes Through Phases on the Way to Insights

Over the last few years I've been primarily building medium to large scale custom real-time analytics platforms for clients. It's kept me pretty busy. I've done some for startups and even one for a big Fortune 50 client. This has been awesome in a variety of ways. Much of that work is finally about to see sunlight ('net light) and is starting to hit the wires, which makes me happy, of course.

Along the way I have seen patterns emerge in these types of systems. They are patterns that at a glance may seem obvious, but are anything but when you are down in the weeds dealing with the various challenges associated with building these types of real-time business analytics applications. For now, I've decided there are six phases in the life of a data object (a tweet, a post, a G+ post, an email, a support call, etc.) on the way to becoming meaningful and ultimately measurable. The pipeline looks something like:

Capture -> Distill -> Index -> Compute -> Display -> Interact -> Measure

The measure phase can feed right back into capture, so the snake can eat its own tail. Most of the applications I've architected and built with my clients and teams have ended up just like this eventually. They didn't all need each piece right out of the gate, and of course there are more items you can add on to augment this list. But no matter which direction, tools, or applications we built, they all ended up looking a bit like this pipeline eventually as they matured.
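
To make that pipeline a little more concrete, here is a minimal sketch in Scala (one of the languages in the stack above) that just models the phases as values, including the feedback from measure back into capture. This is purely illustrative and not lifted from any of the systems I've built.

    // Purely illustrative: the pipeline phases as plain Scala values,
    // with Measure wrapping back around to Capture.
    sealed trait Phase
    case object Capture  extends Phase
    case object Distill  extends Phase
    case object Index    extends Phase
    case object Compute  extends Phase
    case object Display  extends Phase
    case object Interact extends Phase
    case object Measure  extends Phase

    object Pipeline {
      val order: List[Phase] =
        List(Capture, Distill, Index, Compute, Display, Interact, Measure)

      // The phase after p, wrapping around so Measure feeds Capture again.
      def next(p: Phase): Phase = {
        val i = order.indexOf(p)
        order((i + 1) % order.length)
      }
    }

    object PipelineDemo extends App {
      println(Pipeline.next(Measure)) // Capture -- the snake eating its tail
    }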

Capture. This keeps getting easier and easier. Much of it can even be very successfully outsourced now by using tools like DataSift or Gnip. Aggregating and storing the data with things like Node.js, MongoDB, HBase, Cassandra, Redis, and others is making this a bit rote at this point. So, it is much easier now to capture and save arbitrary data streams than ever before.
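
As a toy illustration of the capture phase only, here is a sketch in Scala that reads a newline-delimited stream of raw records from stdin and appends them to a local file. The input format and the file name are assumptions for the example; a real capture tier would be reading from something like Gnip or DataSift and writing to one of the stores above.

    import java.io.{BufferedWriter, FileWriter}
    import scala.io.Source

    // Toy capture loop (assumed input: one raw JSON record per line on stdin).
    // Appends the raw lines, untouched, to a local file for the later phases.
    object CaptureSketch extends App {
      val out = new BufferedWriter(new FileWriter("captured.jsonl", true)) // append mode
      try {
        for (line <- Source.stdin.getLines() if line.trim.nonEmpty) {
          out.write(line)
          out.newLine()
        }
      } finally {
        out.close()
      }
    }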

Distill. This is a combination of things like manually curated filters, NLP for categorization or sentiment, and various other possible "metrics" of a sort. This can be heavily automated using a variety of very useful open source tools/algorithms, services like OpenAmplify, and much more. This part is about taking that raw mess of data and filtering it down to something more meaningful while deriving a data set that can be used for later purposes in the indexing and compute phases.
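
Here is a deliberately naive sketch of the distill phase in Scala: a curated keyword filter plus a crude word-list sentiment score. The word lists and record shape are invented for the example; in practice this is where real NLP tooling or a service like OpenAmplify earns its keep.

    // Naive distill step: keep records matching curated keywords and attach a
    // crude sentiment score. All word lists here are made up for illustration.
    object DistillSketch {
      case class Raw(id: String, text: String)
      case class Distilled(id: String, text: String, sentiment: Int)

      val keywords = Set("cloud", "analytics", "data")
      val positive = Set("great", "awesome", "love")
      val negative = Set("slow", "broken", "hate")

      def tokens(text: String): Seq[String] =
        text.toLowerCase.split("\\W+").toSeq.filter(_.nonEmpty)

      def distill(records: Seq[Raw]): Seq[Distilled] =
        records
          .filter(r => tokens(r.text).exists(keywords.contains))
          .map { r =>
            val ts = tokens(r.text)
            Distilled(r.id, r.text, ts.count(positive.contains) - ts.count(negative.contains))
          }
    }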

Index. Take the data you have saved and distilled, then make it searchable. Doing this at low volumes and high latency is dead easy. Doing this at scale in a low-latency, high-throughput, highly available and scalable fashion is very non-trivial. You'll notice this is getting harder to do as you move through the pipeline. Solr, ElasticSearch, and other tools have proven helpful in this area in various ways.
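
To show what "make it searchable" means at its simplest, here is a toy in-memory inverted index in Scala. Solr and ElasticSearch do this (and vastly more) at scale; this sketch only captures the core idea and none of the hard parts called out above.

    // Toy inverted index: term -> ids of the documents containing that term.
    object IndexSketch {
      def tokens(text: String): Seq[String] =
        text.toLowerCase.split("\\W+").toSeq.filter(_.nonEmpty)

      def build(docs: Map[String, String]): Map[String, Set[String]] =
        docs.toSeq
          .flatMap { case (id, text) => tokens(text).map(term => (term, id)) }
          .groupBy(_._1)
          .map { case (term, pairs) => term -> pairs.map(_._2).toSet }

      // Documents containing every term in the query.
      def search(index: Map[String, Set[String]], query: String): Set[String] = {
        val terms = tokens(query)
        if (terms.isEmpty) Set.empty
        else terms.map(t => index.getOrElse(t, Set.empty[String])).reduce(_ intersect _)
      }
    }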

Compute. All the rage at the moment is creating metrics and scores from the derived data that has been captured, distilled, and indexed. Even apparently simple and embarrassingly parallel algorithms can be an insane can of worms at this stage. When you hit the limits of scaling up, you'd better have made wise choices at the beginning or you'll be facing a big rewrite. Writing code that scales and creating algorithms that can be scaled is not easy at all. Tools like Akka and Fabric Engine are ones I've been working with and exploring quite a lot, as well as Hadoop of course and various options for stream-based processing. This is where a lot of the FUN FUN is right now, and it's a technically very exciting area.
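
For a feel of the compute phase at toy scale, here is an embarrassingly parallel metric (mean sentiment per author) computed with plain Scala Futures. The data is invented for the example; at real volumes this is exactly where Akka, Hadoop, or a streaming framework takes over.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    // Toy compute step: one Future per author partition, each computing its
    // own mean sentiment independently. Input data is invented for the example.
    object ComputeSketch extends App {
      case class Scored(author: String, sentiment: Int)

      val data = Seq(
        Scored("alice", 2), Scored("alice", -1),
        Scored("bob", 1),   Scored("bob", 3)
      )

      val perAuthor: Seq[Future[(String, Double)]] =
        data.groupBy(_.author).toSeq.map { case (author, rows) =>
          Future(author -> rows.map(_.sentiment).sum.toDouble / rows.size)
        }

      val result = Await.result(Future.sequence(perAuthor), 10.seconds).toMap
      println(result) // Map(alice -> 0.5, bob -> 2.0)
    }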

Display. Displaying information in a meaningful way to a user takes serious, concerted effort. When millions and billions of things are being analyzed in near real time, the limitations of your user interface and your choices for displaying data will become evident very quickly. Be ready to pivot. This is extremely hard to get right the first time; until you get solid user feedback it's nearly impossible. This is one of the reasons I'm a fan of Agile, Lean, Lean UX, etc. The dance between UX, IA, front-end development, API design, back-end development, systems engineering, and more to make these complex, distributed, high-throughput systems all work in a performant and scalable way while being a joy to use is definitely a massive challenge. Often, I've found, it is an exercise in keeping things simple and fighting to eliminate unnecessary complexity day after day. Complex systems definitely seem to trend toward entropy, not order.

I'm happy to say all of this is becoming easier quickly as frameworks mature, new tools come along, and knowledge amongst engineers, designers, and product teams continues to increase on average. We build on the foundation of what came before us, for sure. Lastly, I'd be remiss not to point out that there is definitely no silver bullet for any of these phases of the pipeline: there is no one programming language that makes it all better, no super-secret message queue with a secret handshake, and no database that solves all your ills. But this certainly is a fun time to be working in the 'net space.