Cloud Computing

What is to Show for Five Years of Cloud Computing?

Just a few short years ago, launching a virtual machine in the cloud was simple and basic. With a couple of API calls and maybe a button click or two, you were up and running in just a few minutes. The choices were limited, but it was nice. You could even get a little storage to go along with the instance. Just before that, we were leasing, renting, and co-locating dedicated physical hardware in data centers, and it took weeks to order, provision, deploy, and set up the gear. Fast forward to today, and we are now full-on in a cloud computing revolution that is redefining how technology is deployed. There are so many choices, and so many of them good, that it can be completely overwhelming to those trying to make sense of it all. On top of it all, every day I meet people who have never deployed anything in “the cloud.” It’s just as easy as ever to launch a machine, but there is so much more available to the would-be client of on-demand computing as a service today. I was trying to think of what really is different or better today than it was five years ago. What is really new to show for 5+ years of cloud computing innovation and effort?

First and foremost, people figured out that cloud computing is good for something really important (meaning people, not Google). They figured out that the cloud, in its various forms, is phenomenal for capturing and processing what has come to be known as big data. This is a really important point. It has never been easy, and still isn’t, to aggregate and process voluminous, high-speed, or wildly unstructured data. In fact, prior to cloud computing coming of age, it was downright impossible, both fiscally and technically. Now, it’s all there at the click of a few buttons, as pretty as you like. You can spin up a supercomputer for just a few dollars an hour to crunch even your most gnarly data sets.
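
To put a rough, purely hypothetical number on that: at, say, ten cents per instance-hour, a hundred machines crunching for an hour is on the order of ten dollars. Exact prices vary widely by provider and instance type, but the order of magnitude is the point.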

A second, fairly dramatic improvement is in the category of orchestration of resources. There are far more resources available to orchestrate, for an almost infinite number of purposes, yet doing so has never been easier (note, I did not say easy). Thanks to the proliferation in understanding of how to create and consume APIs, you can now, quite literally, with a single set of tools, launch servers at several different cloud providers, in different geographic locations, and even across operating system varieties if you so wanted. If you’re clever with tools like Puppet, Chef, CloudFormation, Cloud Foundry, or others, you can do it all from the comfort of your very own laptop in just a few minutes. You can quickly and, historically speaking, relatively easily compose masses of servers into useful services for nearly anything you can dream up!
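
To make the idea concrete, here is a minimal Scala sketch of that “single set of tools” notion. The providers, regions, and launch calls are all placeholders rather than any real SDK; in practice each stub would wrap a vendor API or a tool like Chef or CloudFormation.

```scala
// Hypothetical sketch: one small toolchain that can launch servers at more
// than one provider. The provider names and launch details are placeholders,
// not real SDK calls.
case class Server(provider: String, region: String, id: String)

trait CloudProvider {
  def name: String
  def launch(image: String, size: String, region: String): Server
}

// Stubbed providers; in practice these would wrap each vendor's API or a
// tool like Chef, Puppet, or CloudFormation.
object ProviderA extends CloudProvider {
  val name = "provider-a"
  def launch(image: String, size: String, region: String) =
    Server(name, region, s"a-${util.Random.nextInt(9999)}")
}

object ProviderB extends CloudProvider {
  val name = "provider-b"
  def launch(image: String, size: String, region: String) =
    Server(name, region, s"b-${util.Random.nextInt(9999)}")
}

object Orchestrate extends App {
  // One loop, several providers and regions, same tooling.
  val fleet = for {
    provider <- Seq(ProviderA, ProviderB)
    region   <- Seq("us-east", "eu-west")
  } yield provider.launch(image = "base-linux", size = "small", region = region)

  fleet.foreach(s => println(s"launched ${s.id} at ${s.provider}/${s.region}"))
}
```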

A third thing that has changed is the raw power available via a command line or cloud console, and in the newer implementations of older software architectures. You can now, in just a few moments, provision a server with 244 GiB of memory and high-speed 10 Gigabit Ethernet. And that is just a building block to the real power. The real power comes as a result of massive improvements and new capabilities in the arena of distributed computation, storage, and software-defined networking, which allow you to provision dozens to thousands of these machines practically on a whim. Frankly, not many people can even figure out what to do with all this power, even if they do know how to provision it today. This has pushed software architects and engineers to move forward much faster and learn how to write distributed applications, and in many cases the occasion is being met. So, raw power, in both virtualized hardware and the software that can be deployed on it, has come a very long way.
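
As a tiny illustration of what “provisioning on a whim” forces on your software, here is a hedged Scala sketch that just spreads a batch of work shards across a set of imaginary provisioned hosts; the hosts are only names here, and actually distributing the work is the job of whatever framework you pick.

```scala
// Hypothetical sketch: once you can provision dozens of large machines on a
// whim, the hard part is writing software that actually spreads work across
// them. The "hosts" are just names; running real work on them is left to
// whatever distributed framework you choose.
object FanOut extends App {
  val hosts  = (1 to 40).map(i => f"worker-$i%02d")   // 40 provisioned nodes
  val shards = (1 to 1000).toVector                   // 1,000 units of work

  // Round-robin the shards across the hosts.
  val assignments = shards.groupBy(shard => hosts(shard % hosts.size))

  assignments.toSeq.sortBy(_._1).take(3).foreach { case (host, work) =>
    println(s"$host gets ${work.size} shards")
  }
}
```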

In summary, cloud computing had already been brewing for decades, with roots reaching far back in time. Grids, clusters, and more were all precursors. Still, it is striking how far things have come in just about five years. There has been unprecedented improvement, at what feels like an ever-increasing pace. Good times indeed.

It's 2013! Things Break, Services Falter. Move Forward.

It's a New Year, I have the cloud, but I still have many of the same old Single Points of Failure.

It's known that a single point of failure (SPOF) is a risk, an Achilles' heel so to speak. That goes for people, companies, planets, AMIs, AZs, regions, countries, or beers in the fridge. Whatever processes you have for your general day-to-day work should be able to deal with known SPOFs and be flexible enough to assimilate and adjust to newly found failure modes. But, and this is important, there is a substantial cost associated with eliminating certain SPOFs. Let's say you decided that you are no longer willing to accept Earth as a SPOF for your awesome blog. In that case, you need a space program and an interplanetary network, which puts the desire out of reach unless you are NASA, Elon Musk, or Richard Branson. Admittedly, that is an extreme example, but my point is that your tolerance for risk and downtime must be considered carefully for any technology for which you have implicit or assumed service level agreements with your users. Let's think about Netflix for a moment.

Netflix's service was severely impacted this last Christmas Eve by an outage affecting AWS ELBs in the US East region. Based on my arm's-length knowledge of Netflix operations, drawn from what they have made public, it is my opinion that Netflix, far more than most organizations, understands the cost/benefit of utilizing AWS. They say so themselves in a recent post:

"Our strategy so far has been to isolate regions, so that outages in the US or Europe do not impact each other."

"Netflix is designed to handle failure of all or part of a single availability zone in a region as we run across three zones and operate with no loss of functionality on two." Source: http://techblog.netflix.com/2012/12/a-closer-look-at-christmas-eve-outage.html

Netflix clearly understands the risks and has chosen to take them anyway. They were completely at the mercy of AWS in this last outage, since the failure was regional in nature and their systems do not YET allow multi-regional failover within a country for a single user account or group of accounts; but they are working on it.

As an AWS customer, they have a reasonable expectation that the underlying primitives they use from AWS to compose their services will work reliably. In this case, that primitive was the Elastic Load Balancer. Much as an AMI is a virtual server, an ELB is something of a virtual load balancer. In VPCs, ELBs can span AZs, but then the ELB itself is a SPOF, unless your service is capable of re-initializing an ELB dynamically when it ceases to serve its purpose and can then re-route traffic accordingly. This is non-trivial, but it can likely be dealt with if you understand the various intricacies of geo-aware, anycast-backed DNS services.
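
For illustration only, here is a hedged Scala sketch of that failover idea: poll a health endpoint on the load balancer and, when it stops answering, repoint DNS at a standby. The hostnames are made up and updateDns is a stub standing in for whatever geo-aware DNS API you actually use; a production version needs far more care around flapping, TTLs, and partial failures.

```scala
// Hedged sketch of the idea only: poll the load balancer's health endpoint
// and, if it stops answering, point DNS at a standby. The hostnames are
// made up and updateDns is a stub for your real DNS provider's API.
import java.net.{HttpURLConnection, URL}

object ElbWatchdog extends App {
  val primary = "primary-elb.example.com"
  val standby = "standby-elb.example.com"

  def healthy(host: String): Boolean =
    try {
      val conn = new URL(s"http://$host/healthcheck")
        .openConnection().asInstanceOf[HttpURLConnection]
      conn.setConnectTimeout(2000)
      conn.setReadTimeout(2000)
      conn.getResponseCode == 200
    } catch { case _: Exception => false }

  def updateDns(target: String): Unit =
    println(s"(stub) repointing service record to $target")

  // Check every 30 seconds; fail over when the primary stops responding.
  while (true) {
    if (!healthy(primary)) updateDns(standby)
    Thread.sleep(30000)
  }
}
```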

Someone asked me whether the AWS outages of 2012 would make me re-think my plans for cloud computing in 2013. They do not change my cloud plans for 2013 in any way. But, to be clear, even though I really like AWS, AWS is not the cloud and the cloud is not AWS. AWS is a big and deeply important part of the cloud ecosystem, and I'm quite thankful for all they've done to further the understanding of cloud computing around the world. They are likely to stay on top, from my point of view, for a long while. My teams and I deployed a large amount of AWS infrastructure in 2012 supporting the services of numerous clients.

I don't think these outages will cause any meaningful pause in cloud plans for 2013. Anyone who takes the time to understand these sorts of situations, doesn't just fall prey to FUD (Fear, Uncertainty, Doubt), and is really serious about moving to a cloud computing model will keep marching forward. It's not perfect, but the benefits to business and technical agility far outweigh the risks and the knowledge ramp-up investment necessary to make full use of cloud computing.

Things break and outages happen. There are very few systems where this is not true, and those systems have been designed specifically to deal with an extreme need for continuous availability. Complex systems, especially those deployed at large scale like AWS, can break in interesting ways. It's not so much that things break that is so bad; it is what is done next that matters, to keep the same things from breaking again and again. AWS does a pretty good job on this front, in my opinion. It performs, communicates, and adjusts far better than most hosting providers I have worked with over the last 15 years or so. They have raised the bar substantially.

Regarding AWS's IaaS services: it is AWS's job to provide a reasonable SLA and maintain it. It is up to users of those services to provide their own users with services that have a reasonable SLA, and to maintain it. Decoupling the service from the server is at the heart of the accelerating innovation in hosting of internet-connected services that began quite some time ago and now marches under the banner of cloud computing. Now, if you use their PaaS services, it's a bit of a different situation, but that's the subject of a whole different discussion, I suspect.

Supporting information from ProductionScale's blog past is contained in the following older posts of mine (in no particular order):

The Traits of a Modern IT Organization, 8/2008
Thoughts on the Business Case for Cloud Computing, 4/2009
Get Your Head in the Clouds, 4/2008
Why Should Businesses Bother with Cloud Computing, 3/2009

Moving at the Speed of Cloud

The majority of my work in the last three years or so has been about receiving, pushing, pulling, and generally wrangling streams of data (mostly social data) for the purposes of analytics, comparison, or storage, across a broad range of products and services for startups (one of my own) and Fortune 500 companies. It's been keeping me busy. All of this for the ultimate reason of helping businesses make better and more well-informed decisions about products, services, and more.

During this time I and my colleagues have developed the relationships, partnerships, technology stacks, and processes necessary to deliver these types of applications very quickly and at a high quality level. This has been fun all in all and something for which demand seems to be growing quickly.

To give a sense of the technology "stack" I've mostly settled on for solving these types of problems, we are using:

Languages: Scala, Java, Node.js, PHP, Ruby

Frameworks: Symfony2, Play 2.0, Express.js, Twitter Bootstrap

Data Stores: MySQL, MongoDB, Riak, Redis

Infrastructure: Amazon Web Services

Orchestration: Chef, custom scripting, AWS CloudFormation

That's just a high-level snapshot, of course; there are a lot of details down inside each of those items, from favored libraries to DB clients and configuration management frameworks.

The best part for me is that, for the first time in a long time, many businesses seem to understand and believe in the value of applying technology to solving business problems as a first-order task.

The drive for big data aggregation and analytics is a natural evolution of the maturation of cloud computing as both a technology and a service/process. The continued evolution of programming languages, application frameworks, and even the general understanding of distributed, service-oriented architectures and how to program REST APIs is all improving at such an incredible rate that it's just an awesome time to be creating software.

So much of what we are doing now has been "around" in one form or another for a long time. The science in computer science laid the foundations quite some time ago. It's only now that so much is becoming so accessible, and the information on how to use all these tools is readily available.

I read a recent article/survey posted to Forbes.com that said the cloud is still three years away from its full impact. The first CloudCamp, where I did a session on developing for the cloud, was in 2008. That's only four years ago, and look how much has changed! Awesome.

From where I sit, this is an exciting time with nearly unlimited possibilities. Ideas are critical. Execution is just as important. If you want to talk about any of these things, I'm usually found either in San Francisco or San Rafael, so let's chat! Good times!!

Data Goes Through Phases on the Way to Insights

Over the last few years I've been primarily building medium- to large-scale custom real-time analytics platforms for clients. It's kept me pretty busy. I've done some for startups and even one for a big Fortune 50 client. This has been awesome in a variety of ways. Much of that work is finally about to see sunlight ('net light) and is starting to hit the wires, which makes me happy, of course.

Along the way I have seen patterns emerge in these types of systems. They are patterns that at a glance may seem obvious, but are anything but when you are down in the weeds dealing with the various challenges of building these types of real-time business analytics applications. For now, I've decided there are six phases in the life of a data object, like a tweet, post, G+ post, email, or support call, on its way to becoming meaningful and ultimately measurable. The pipeline looks something like:

Capture -> Distill -> Index -> Compute -> Display -> Interact -> Measure

The measure phase can feed right back into capture, so the snake can eat its own tail. Most of the applications I've architected and built with my clients and teams have ended up just like this eventually. They didn't all need each piece right out of the gate and, of course, there are more items you can add to augment this list. But no matter which direction, tools, or applications we built, they all ended up looking a bit like this pipeline eventually as they matured.
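
To show the shape of it, here is a minimal Scala sketch of the pipeline with every phase reduced to a trivial stand-in function; the point is only the composition and the feedback from measure back into capture, not any of the machinery a real system hangs off each arrow.

```scala
// A minimal sketch of the pipeline as plain functions, with the measure
// phase feeding back into capture. Every stage is a trivial stand-in;
// real systems hang queues, indexes, and clusters off each arrow.
object Pipeline extends App {
  type Doc = String

  def capture(): Seq[Doc]                  = Seq("raw post 1", "raw post 2")
  def distill(docs: Seq[Doc]): Seq[Doc]    = docs.filter(_.nonEmpty)
  def index(docs: Seq[Doc]): Map[Doc, Int] = docs.zipWithIndex.toMap
  def compute(ix: Map[Doc, Int]): Int      = ix.size          // a toy "metric"
  def display(metric: Int): Unit           = println(s"metric: $metric")
  def measure(metric: Int): Int            = metric           // would inform the next capture

  val metric = compute(index(distill(capture())))
  display(metric)
  measure(metric)
}
```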

Capture. This keeps getting easier and easier. Much of it can even be very successfully outsourced now by using tools like DataSift or Gnip. Aggregating and storing the data with things like Node.js, MongoDB, HBase, Cassandra, Redis, and others is making this a bit rote at this point. So, it is much easier now to capture and save arbitrary data streams than ever before.
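
A minimal Scala sketch of the capture loop, with the stream source and the document store reduced to stubs (stand-ins for a DataSift/Gnip feed on one side and MongoDB or similar on the other):

```scala
// Hypothetical capture sketch: drain a stream source and persist each item.
// StreamSource and DocumentStore are stand-ins for whatever you actually use.
object Capture extends App {
  trait StreamSource  { def poll(): Seq[String] }
  trait DocumentStore { def save(doc: String): Unit }

  val source: StreamSource = new StreamSource {
    def poll() = Seq("""{"text":"a tweet"}""", """{"text":"another tweet"}""")
  }
  val store: DocumentStore = new DocumentStore {
    def save(doc: String) = println(s"saved: $doc")
  }

  // The capture loop itself is dull on purpose: pull, save, repeat.
  source.poll().foreach(store.save)
}
```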

Distill. This is a combination of things like manually curated filters, NLP for categorization or sentiment, and various other possible "metrics" of a sort. This can be heavily automated using a variety of very useful open source tools and algorithms, services like OpenAmplify, and much more. This part is about taking that raw mess of data and filtering it down to something more meaningful, while deriving a data set that can be used later in the indexing and compute phases.
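
Here is a toy Scala sketch of the distill idea, using a hand-curated topic filter and a naive word-count sentiment score; the word lists are placeholders, and a real pipeline would swap in proper NLP or a service like OpenAmplify.

```scala
// Toy distill sketch: a manually curated keyword filter plus a naive
// sentiment score. The word lists are placeholders.
object Distill extends App {
  val topicWords    = Set("cloud", "aws", "outage")
  val positiveWords = Set("love", "great", "awesome")
  val negativeWords = Set("hate", "down", "broken")

  def words(text: String): Seq[String] = text.toLowerCase.split("\\s+").toSeq

  // Curated topic filter: keep only items that mention something we track.
  def relevant(text: String): Boolean = words(text).exists(w => topicWords.contains(w))

  // Naive sentiment: positive word count minus negative word count.
  def sentiment(text: String): Int =
    words(text).count(w => positiveWords.contains(w)) -
      words(text).count(w => negativeWords.contains(w))

  val raw = Seq("I love the cloud", "my dinner was great", "aws is down again")
  raw.filter(relevant).map(t => (t, sentiment(t))).foreach(println)
}
```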

Index. Take the data you have saved and distilled, then make it searchable. Doing this at low volume and high latency is dead easy. Doing it at scale in a low-latency, high-throughput, highly available, and scalable fashion is very non-trivial. You'll notice things are getting harder as you move through the pipeline. Solr, ElasticSearch, and other tools have proven helpful in this area in various ways.
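
Purely to show the shape of the problem, here is a toy Scala sketch of an in-memory inverted index; at any real scale you reach for Solr, ElasticSearch, or similar rather than anything like this.

```scala
// Toy indexing sketch: a tiny in-memory inverted index.
object Index extends App {
  val docs = Map(
    1 -> "the cloud is down",
    2 -> "the cloud is awesome",
    3 -> "lunch was awesome"
  )

  // word -> ids of the documents containing that word
  val inverted: Map[String, Set[Int]] = docs.toSeq
    .flatMap { case (id, text) => text.split("\\s+").map(word => (word, id)) }
    .groupBy { case (word, _) => word }
    .map { case (word, pairs) => word -> pairs.map(_._2).toSet }

  def search(word: String): Set[Int] = inverted.getOrElse(word, Set.empty)

  println(search("awesome"))   // which documents mention "awesome"
  println(search("cloud"))     // which documents mention "cloud"
}
```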

Compute. All the rage at the moment is creating metrics and scores from the derived data that has been captured, distilled, and indexed. Even apparently simple and embarrassingly parallel algorithms can be an insane can of worms at this stage. When you hit the limits of scale-up, you had better have made wise choices at the beginning or you'll be facing a big rewrite. Writing code that scales, and creating algorithms that can be scaled, is not easy at all. Tools like Akka and Fabric Engine are ones I've been working with and exploring quite a lot, as well as Hadoop, of course, and various options for stream-based processing. This is where a lot of the fun is right now, and it's technically very exciting.
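
As a small example of the embarrassingly parallel case, here is a hedged Scala sketch that splits a placeholder data set into chunks and sums a sentiment metric concurrently with Futures; past a single machine, this is the shape of work you hand to Akka, Hadoop, or a stream processor.

```scala
// Sketch of the compute phase for an embarrassingly parallel metric.
// The scores are random placeholders standing in for distilled data.
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object Compute extends App {
  case class Scored(id: Int, sentiment: Int)

  val scored = (1 to 1000000).map(i => Scored(i, util.Random.nextInt(5) - 2))

  // Split the data into chunks and sum each chunk concurrently; this is the
  // shape that distributed tools take to many machines instead of many cores.
  val chunkSums: Seq[Future[Long]] =
    scored.grouped(100000).toSeq.map(chunk => Future(chunk.map(_.sentiment.toLong).sum))

  val total = Await.result(Future.sequence(chunkSums), 30.seconds).sum
  println(f"average sentiment: ${total.toDouble / scored.size}%.4f")
}
```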

Display. Displaying information in a meaningful way to a user takes serious, concerted effort. When millions and billions of things are being analyzed in near real time, the limitations of your user interface and your choices for displaying data become evident very quickly. Be ready to pivot. This is extremely hard to get right the first time; until you get solid user feedback it's nearly impossible. This is one of the reasons I'm a fan of Agile, Lean, Lean UX, etc. The dance between UX, IA, front-end development, API design, back-end development, systems engineering, and more, to make these complex, distributed, high-throughput systems work in a performant and scalable way while being a joy to use, is definitely a massive challenge. Often, I've found, it is an exercise in keeping things simple and fighting to eliminate unnecessary complexity day after day. Complex systems definitely seem to trend toward entropy, not order.

I'm happy to say all of this is becoming easier quickly as frameworks mature, new tools come along, and knowledge amongst engineers, designers, and product teams continues to increase on average. We build on the foundation of what came before us, for sure. Lastly, I'd be remiss not to point out that there is definitely no silver bullet for any of these phases of the pipeline; there is no one programming language that makes it all better, no secret-handshake super-secret message queue, and no database that solves all your ills. But this certainly is a fun time to be working in the 'net space.

The SaaS Aggregation Benefit Mirage

In this service-oriented, on-demand world, I’ve been running into something again and again lately that I find interesting and a bit annoying.

To start, imagine I’m going to build an application that uses two third-party on-demand services. We’ll just call them Service A and Service B and say each has two features. For this example it does not really matter what the services do.

Service A
   Feature A-1
   Feature A-2
Service B
   Feature B-1
   Feature B-2

So, I create my application, and it first uses Service A to do something, using Features A-1 and A-2. Then, with the output of that, it uses Service B to do something else, using Feature B-2.

Now, a few months down the line, when things are going great, I get a call from my account manager at Service A telling me I can now get all the features of Service B directly from them, included. So, what they are telling me is that my service structure now looks like this:

Service A
   Feature A-1
   Feature A-2
   Feature B-1
   Feature B-2
Service B
   Feature B-1
   Feature B-2

On the surface this looks really good. It’s the same thing with less hassle, right? Maybe not.

This is where my annoyance surfaces. Dig in, and dig in well. What I find again and again is that it’s simply not true, because of what I’ll just call the filter effect. What you really get with this new and improved Service A is more like:

Service A
   Feature A-1
   Feature A-2
   Feature B-1

Notice that Feature B-2 is missing, and that probably nobody mentioned it. Or, it’s more like:

Service A
   Feature A-1
   Feature A-2
   Feature C-1
   Feature C-2
   Feature C-3
   Feature C-n-OMG
Service B
   Feature B-1
   Feature B-2

And you don’t care, because C isn’t B, and all you need is A-1, A-2, and B-2. While they say it’s equivalent, it is not, and the app uses Feature B-2, if you’ll recall. How much time did you just spend figuring that out?
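
This is the quick check I wish I had run up front, sketched in Scala with the feature names from the example above: compare the features you actually depend on against what the aggregated service really exposes, before believing the pitch.

```scala
// A small sketch of the "filter effect" check: what you need versus what
// the new, aggregated Service A actually offers. Feature names are taken
// from the example above.
object FilterEffect extends App {
  val needed      = Set("A-1", "A-2", "B-2")
  val newServiceA = Set("A-1", "A-2", "B-1", "C-1", "C-2", "C-3")

  val missing = needed -- newServiceA
  if (missing.isEmpty) println("the aggregated service really does cover you")
  else println(s"not equivalent; still missing: ${missing.mkString(", ")}")
}
```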

So, by the time you get through all this and figure out that the new, improved Service A + B is pretty useless, and that all you really want is what you already have, you will have wasted a lot of time. There are fewer features, more complexity, less control, and likely much worse service and support for the aggregated features, since you no longer have a direct relationship with the endpoint provider.

So, rambling aside, the point is that these service-provider mashup aggregators are often not what they seem on the surface. I’m frequently finding that the best deal is going right to the source, and that any “savings” on the surface likely gets eaten up later in a variety of ways that are difficult to predict. In most cases, it’s best to go to the source to get what you want.