Big Data

What Is There to Show for Five Years of Cloud Computing?

Just a few short years ago, launching a virtual machine in the cloud was simple and basic. With a couple of API calls and maybe a button click or two, you were up and running in just a few minutes. The choices were limited, but it was nice. You could even get a little storage to go along with the instance. Just before that, we were leasing, renting, and co-locating dedicated physical hardware in data centers, and it took weeks to order, provision, deploy, and set up the gear. Fast forward to today, and we are full-on in a cloud computing revolution that is redefining how technology is deployed. There are so many choices, and so many of them good, that it can be completely overwhelming to those trying to make sense of it all. On top of that, every day I meet people who have never deployed anything in “the cloud.” It’s just as easy as ever to launch a machine, but there is so much more available to the would-be client of on-demand computing services today. I was trying to think of what really is different or better today than it was five years ago. What’s really new to show for 5+ years of cloud computing innovation and effort?

First and foremost, people figured out that cloud computing is good for something really important (meaning ordinary people and companies, not just Google). They figured out that the cloud in its various forms is phenomenal for capturing and processing what has come to be known as big data. This is a really important point. It’s never been easy, and still isn’t, to aggregate and process voluminous, high-speed, or wildly unstructured data. In fact, prior to cloud computing coming of age, it was downright impossible, fiscally and technically. Now, it’s all there at the click of a few buttons, as pretty as you like. You can spin up a supercomputer for just a few dollars an hour to crunch even your most gnarly data sets.

A second fairly dramatic improvement is in the category of orchestration of resources. There are far more resources available to orchestrate, for an almost infinite number of purposes, and doing so has never been easier (note, I did not say easy). Thanks to the proliferation in understanding of how to create and consume APIs, you can now quite literally, with a single set of tools, launch servers at several different cloud providers, in different geographic locations, and even with different operating systems if you so wanted. And if you’re clever with tools like Puppet, Chef, CloudFormation, Cloud Foundry, or others, you can do it all from the comfort of your very own laptop in just a few minutes. You can quickly and, historically speaking, relatively easily compose masses of servers into useful services for nearly anything you can dream up!
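
To make that concrete, here is a minimal sketch, in Scala against the AWS SDK for Java, of what "launch a server with an API call" boils down to. The credentials, AMI id, and instance type are placeholders, and in practice tools like Chef or CloudFormation sit on top of calls such as this rather than you hand-rolling them.

    import com.amazonaws.auth.BasicAWSCredentials
    import com.amazonaws.services.ec2.AmazonEC2Client
    import com.amazonaws.services.ec2.model.RunInstancesRequest

    object LaunchInstance {
      def main(args: Array[String]): Unit = {
        // Placeholder credentials -- substitute your own keys.
        val credentials = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")
        val ec2 = new AmazonEC2Client(credentials)

        // Ask EC2 for a single small instance built from a (hypothetical) machine image.
        val request = new RunInstancesRequest()
          .withImageId("ami-12345678")
          .withInstanceType("m1.small")
          .withMinCount(1)
          .withMaxCount(1)

        val result = ec2.runInstances(request)
        val instanceId = result.getReservation.getInstances.get(0).getInstanceId
        println(s"Launched instance: $instanceId")
      }
    }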

A third thing that’s changed is the raw power available via a command line or cloud console, and in the newer implementations of older software architectures. You can now, in just a few moments, provision a server with 244 GiB of memory and high-speed 10 Gigabit Ethernet. And that is just a building block to the real power. The real power comes as a result of massive improvements and capabilities in the arena of distributed computation, storage, and software-defined networking. This allows you to provision dozens to thousands of these types of machines more or less on a whim. Frankly, not many people can even figure out what to do with all this power, even if they do know how to provision it today. This has forced software architects and engineers to push forward much faster, and with zeal, to learn how to write distributed applications, and in many cases the occasion is being met. So, raw power, in both virtualized hardware and the software that can be deployed on it, has come a very long way.

In summary, cloud computing had already been brewing for decades, with its roots reaching far back in time. Grids, clusters, and more were all precursors. However, it is striking how far things have come in just about five years. There has been unprecedented improvement, at what feels like an ever-increasing pace. Good times indeed.

Moving at the Speed of Cloud

The majority of my work in the last three years or so has been all about receiving, pushing, pulling, and generally wrangling streams of data (mostly social data) for the purposes of analytics, comparison, or storage, across a broad range of products and services for startups (one of my own) and Fortune 500 companies. It's been keeping me busy. All of this for the ultimate reason of helping businesses make better and more well-informed decisions about products, services, and more.

During this time I and my colleagues have developed the relationships, partnerships, technology stacks, and processes necessary to deliver these types of applications very quickly and at a high quality level. This has been fun all in all and something for which demand seems to be growing quickly.

To give a sense of the technology "stack" I've mostly settled on for solving these types of problems, here is what we are using:

Languages: Scala, Java, Node.js, PHP, Ruby

Frameworks: Symfony2, Play 2.0, Express.js, Twitter Bootstrap

Data Store: MySQL, MongoDB, Riak, Redis

Infrastructure: Amazon Web Services

Orchestration: Chef, custom scripting, AWS CloudFormation

That's just a high-level snapshot, of course; there are a lot of details down inside each of those items, from favored libraries to DB clients and configuration management frameworks.

The best part for me is that, for the first time in a long time, many businesses seem to understand and believe in the value of applying technology to solve business problems as a first-order task.

The drive for big data aggregation and analytics is a natural evolution of the maturation of cloud computing as both a technology and a service/process. The continued evolution of programming languages, application frameworks, and even the general understanding of distributed, service-oriented architectures and how to program REST APIs is all improving at such an incredible rate that it's just an awesome time to be creating software.

So much of what we are doing now has been "around" in one form or another for a long time. The science in computer science laid the foundations quite some time ago. It's only now that so much is becoming so accessible and the information on how to use all these tools is readily available.

I read a recent article/survey posted to Forbes.com that said the cloud is still three years away from its full impact. The first CloudCamp, where I did a session on developing for the cloud, was in 2008. That's only four years ago, and look how much has changed! Awesome.

From where I sit, this is an exciting time with nearly unlimited possibilities. Ideas are critical. Execution is just as important. If you want to talk about any of these things, I'm usually found either in San Francisco or San Rafael, so let's chat! Good times!!

Data Goes Through Phases on the Way to Insights

Over the last few years I've been primarily building medium to large scale custom real-time analytics platforms for clients. It's kept me pretty busy. I've done some for startups and even one for a big Fortune 50 client. This has been awesome in a variety of ways. Much of that work is finally about to see sunlight ('net light) and is starting to hit the wires, which makes me happy of course.

Along the way I have seen patterns emerge in these types of systems. They are patterns that at a glance may seem obvious, but are anything but when you are down in the weeds dealing with the various challenges associated with building these types of real-time business analytics applications. For now, I've decided there are six phases in the life of a data object like a tweet, post, G+ post, email, support call, etc. on its way to becoming meaningful and ultimately measurable. This pipeline looks something like:

Capture -> Distill -> Index -> Compute -> Display -> Interact -> Measure

The measure phase can feed right back into capture, so the snake can eat its own tail. Most of the applications I've architected and built with my clients and teams have ended up just like this eventually. They didn't all need each piece right out of the gate, and of course there are more items you can add on to augment this list. But no matter which direction, tools, or applications we built, they all ended up looking a bit like this pipeline eventually as they matured.
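
To make the shape of that pipeline a little more tangible, here is a toy sketch in Scala that strings the early phases together as plain function composition. The case classes and stage bodies are invented purely for illustration; real systems replace each stage with the kinds of tools discussed below.

    object PipelineSketch {
      case class Raw(source: String, body: String)            // e.g. a tweet, post, or email
      case class Distilled(body: String, terms: Set[String])  // filtered and tagged
      case class Indexed(id: String, doc: Distilled)          // made searchable
      case class Metric(name: String, value: Double)          // computed result

      def capture(feed: Seq[String]): Seq[Raw] =
        feed.map(line => Raw("stream", line))

      def distill(raw: Seq[Raw]): Seq[Distilled] =
        raw.filter(_.body.nonEmpty)
           .map(r => Distilled(r.body, r.body.toLowerCase.split("\\s+").toSet))

      def index(docs: Seq[Distilled]): Seq[Indexed] =
        docs.zipWithIndex.map { case (d, i) => Indexed(i.toString, d) }

      def compute(indexed: Seq[Indexed]): Seq[Metric] =
        Seq(Metric("documents", indexed.size.toDouble))

      def main(args: Array[String]): Unit = {
        val feed = Seq("big data is big", "cloud computing is handy")
        // Display, interact, and measure would hang off the end of this chain.
        compute(index(distill(capture(feed)))).foreach(println)
      }
    }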

Capture. This keeps getting easier and easier. Much of it can even be very successfully outsourced now by using tools like DataSift or Gnip. Aggregating and storing the data with things like Node.js and MongoDB, HBase, Cassandra, Redis, and others is making this a bit rote at this point. So, it is much easier now to capture and save arbitrary data streams than ever before.
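
As a small illustration of how rote the storage side has become, here is a sketch in Scala using the MongoDB Java driver to persist raw items as they arrive. The connection string, database, and collection names are placeholders, and the hard part in real systems is the upstream firehose, not these few lines.

    import com.mongodb.client.MongoClients
    import org.bson.Document

    object CaptureToMongo {
      def main(args: Array[String]): Unit = {
        // Placeholder connection string, database, and collection names.
        val client = MongoClients.create("mongodb://localhost:27017")
        val items  = client.getDatabase("capture").getCollection("raw_items")

        // Stand-ins for items arriving from an upstream provider such as DataSift or Gnip.
        val incoming = Seq("first raw item", "second raw item")

        incoming.foreach { body =>
          // Store each item as-is, along with a capture timestamp.
          val doc = new Document("body", body)
            .append("capturedAt", java.lang.Long.valueOf(System.currentTimeMillis()))
          items.insertOne(doc)
        }

        client.close()
      }
    }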

Distill. This is a combination of things like manually curated filters, NLP for categorization or sentiment, and various other possible "metrics" of a sort. This can be heavily automated using a variety of very useful open source tools and algorithms, services like OpenAmplify, and much more. This part is about taking that raw mess of data and filtering it down to something more meaningful, while deriving a data set that can be used for later purposes in the indexing and compute phases.
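
A stripped-down sketch of the distill idea, in Scala: a hand-curated relevance filter plus a naive keyword sentiment score. The word lists and scoring are invented for illustration; real systems would lean on proper NLP libraries or services like OpenAmplify.

    object DistillSketch {
      // Hand-curated word lists -- purely illustrative placeholders.
      val relevant = Set("cloud", "data", "analytics")
      val positive = Set("love", "great", "awesome")
      val negative = Set("hate", "broken", "awful")

      case class DistilledItem(body: String, matched: Set[String], sentiment: Int)

      def distill(body: String): Option[DistilledItem] = {
        val words = body.toLowerCase.split("\\W+").toSet
        val hits  = words.intersect(relevant)
        if (hits.isEmpty) None   // drop items that match no curated filter
        else Some(DistilledItem(body, hits, words.count(positive) - words.count(negative)))
      }

      def main(args: Array[String]): Unit = {
        val stream = Seq("I love cloud computing", "lunch was great", "this data pipeline is broken")
        stream.flatMap(distill).foreach(println)
      }
    }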

Index. Take the data you have saved and distilled, then make it searchable. Doing this at low volumes and high latency is dead easy. Doing this at scale, in a low latency, high throughput, highly available, and scalable fashion, is very non-trivial. You'll notice this is getting harder to do as you move through the pipeline. Solr, ElasticSearch, and other tools have proven helpful in this area in various ways.
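
For a sense of what "make it searchable" looks like at the toy end of the spectrum, here is a sketch that indexes a single JSON document into ElasticSearch over its HTTP API using plain Scala and java.net. The host, index name, document type, and id are assumptions for the sketch; none of the hard scaling problems mentioned above show up at this size.

    import java.net.{HttpURLConnection, URL}
    import java.nio.charset.StandardCharsets

    object IndexSketch {
      // PUT a JSON document to a (placeholder) local ElasticSearch node.
      def indexDoc(id: String, json: String): Int = {
        val url  = new URL(s"http://localhost:9200/items/item/$id")
        val conn = url.openConnection().asInstanceOf[HttpURLConnection]
        conn.setRequestMethod("PUT")
        conn.setDoOutput(true)
        conn.setRequestProperty("Content-Type", "application/json")

        val out = conn.getOutputStream
        out.write(json.getBytes(StandardCharsets.UTF_8))
        out.close()

        val status = conn.getResponseCode   // expect 200 or 201 on success
        conn.disconnect()
        status
      }

      def main(args: Array[String]): Unit = {
        val status = indexDoc("1", """{"body": "I love cloud computing", "sentiment": 1}""")
        println(s"indexed with HTTP status $status")
      }
    }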

Compute. All the rage at the moment is creating metrics and scores from the derived data that has been captured, distilled, and indexed. Even apparently simple and embarrassingly parallel algorithms can be an insane can of worms at this stage. When you hit the limits of scaling up, you had better have made wise choices at the beginning or you'll be facing a big rewrite. Writing code that scales, and creating algorithms that can be scaled, is also not easy at all. Tools like Akka and Fabric Engine are ones I've been working with and exploring quite a lot, as well as Hadoop of course and various options for stream-based processing. This is where a lot of the fun is right now, and it's technically very exciting in this area.
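
As a tiny taste of the actor style that tools like Akka encourage for this phase, here is a sketch of an actor that keeps running per-term counts as observations stream in. The message types and counting logic are invented for illustration; a production compute layer would shard this kind of work across many actors or machines.

    import akka.actor.{Actor, ActorSystem, Props}

    // Messages for the (hypothetical) metric actor.
    case class Observation(term: String)
    case object Report

    // An actor that keeps a running count per term as data streams through.
    class TermCounter extends Actor {
      private var counts = Map.empty[String, Long].withDefaultValue(0L)

      def receive: Receive = {
        case Observation(term) => counts = counts.updated(term, counts(term) + 1)
        case Report            => counts.foreach { case (t, n) => println(s"$t -> $n") }
      }
    }

    object ComputeSketch {
      def main(args: Array[String]): Unit = {
        val system  = ActorSystem("compute")
        val counter = system.actorOf(Props[TermCounter](), "termCounter")

        Seq("cloud", "data", "cloud").foreach(t => counter ! Observation(t))
        counter ! Report

        Thread.sleep(500)   // crude: let the mailbox drain before shutting down
        system.terminate()
      }
    }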

Display. Displaying information in a meaningful way to a user takes serious, concerted effort. When millions and billions of things are being analyzed in near real time, the limitations of your user interface and of your choices for displaying data will become evident very quickly. Be ready to pivot. This is extremely hard to get right the first time; until you get solid user feedback it's nearly impossible. This is one of the reasons I'm a fan of Agile, Lean, Lean UX, etc. The dance between UX, IA, front-end development, API design, back-end development, systems engineering, and more, to make these complex distributed high-throughput systems work in a performant, distributed, and scalable way while being a joy to use, is definitely a massive challenge. Often, I've found, it is an exercise in keeping things simple and fighting to eliminate unnecessary complexity day after day. Complex systems definitely seem to trend toward entropy, not order.

I'm happy to say all of this is becoming easier quickly as frameworks mature, new tools come along, and knowledge amongst engineers, designers, and product teams continues to increase on average. We build on the foundation of what came before us, for sure. Lastly, I'd be remiss not to point out that there is definitely no silver bullet for any of these phases of the pipeline: there is no one programming language that makes it all better, no secret-handshake super-secret message queue, and no database that solves all your ills. But this certainly is a fun time to be working in the 'net space.

When is Big Data Actually Big?

There is a quandary for anyone trying to wrap their head around what “BigData” means. When is big data really big? I had a good conversation today with a friend of mine, @ckenton, as we were discussing some impactful things he and his company have done for clients over the last couple of years with careful and meaningful analytics of what would mostly be considered social media data. I was struck by the fact that there was significant impact using what, by data volume, was actually not all that much data; perhaps a few gigabytes in aggregate in each case we discussed.

On another front, I have two active projects for two very different clients right now where I and my teams have architected and built systems from scratch that crunch from the tens of thousands to the millions of pieces of data per day in near real time. We’ve created code, frameworks, and modules, and used off-the-shelf kit whenever we could. There have been moments of bliss and moments of solid wall-to-forehead-pounding frustration. We’re using tools like MongoDB, Riak, Node.js, PHP, Redis, Scala, Java, Akka, AWS, Capistrano, Jenkins, Chef, Git, and more. We’re using flexible agile workflow models with business agreements and contracts that match. We are doing all this to analyze data. With all of this we are crunching what some would call big data, and it’s definitely growing very, very quickly by volume. But it is not big data because of the ever-growing volume. It’s big data because impactful meaning is extracted from it, and the end users of these insights from otherwise chaotic-looking data streams can make impactful business decisions quickly for their contextual needs.

In summary, I’ve come to think that big data is big when the insights derived from it are truly meaningful and potentially significantly impactful. It doesn’t matter whether it’s a few gigabytes or a few petabytes of data. The technical challenges will vary, of course, depending on data volume, but what really matters is what you learn from the data you have and then, most importantly, what you do with that newfound knowledge once you have it in your grasp.