
When is Big Data Actually Big?

There is a quandary for anyone trying to wrap their head around what "Big Data" means: when is big data really big? I had a good conversation with a friend of mine, @ckenton, today. We were discussing some impactful things he and his company have done for clients over the last couple of years through careful, meaningful analytics of what would mostly be considered social media data. I was struck by the fact that there was significant impact using what, by data volume, was actually not all that much data; perhaps a few gigabytes in aggregate in each case we discussed.

On another front, I have two active projects for two very different clients right now where I and my teams have architected and built systems from scratch that crunch from tens of thousands to millions of pieces of data per day in near real time. We've created code, frameworks, and modules, and used off-the-shelf kit whenever we could. There have been moments of bliss and moments of forehead-to-wall pounding frustration. We're using tools like MongoDB, Riak, node.js, PHP, Redis, Scala, Java, Akka, AWS, Capistrano, Jenkins, Chef, Git, and more. We're using flexible agile workflow models with business agreements and contracts to match. We are doing all of this to analyze data. With all of this we are crunching what some would call big data, and it's definitely growing very, very quickly by volume. But it is not big data because of that ever-growing volume. It's big data because impactful meaning is extracted from it, and the end users of the insights drawn from these otherwise chaotic-looking data streams can quickly make impactful business decisions for their contextual needs.

In summary, I've come to think that Big Data is big when the insights derived from it are truly meaningful and potentially significantly impactful. It doesn't matter whether it's a few gigabytes or a few petabytes of data. The technical challenges will vary depending on data volume, of course, but what really matters is what you learn from the data you have and then, most importantly, what you do with that newfound knowledge once you have it in your grasp.

Building an Application upon Riak - Part 1

For the past few months some of my colleagues and I have been developing an application with Riak as the primary persistent data store. It has been a very interesting journey from beginning to now. I wanted to take a few minutes and write a quick "off the top of my head" post about some of the things we learned along the way. In writing this I realized that our journey breaks down into a handful of categories:
  • Making the Decision
  • Learning
  • Operating
  • Scaling
  • Mistakes
We made the decision to use Riak for our application around January of 2011. We looked at HBase, Cassandra, Riak, MySQL, Postgres, MongoDB, Oracle, and a few others. There were a lot of things we didn't know about our application back then; that is a very important point.

In any event, I'll not bore you with all the details, but we chose Riak. We originally chose it because we felt it would be easy to manage as our data volume grew, because published benchmarks looked very promising, and because we wanted something based on the Dynamo model. Adjustable CAP properties per "bucket", speed, our "schema", our data volume capacity plan, our data model, and a few other things factored in as well.

Some of the Stack Details

The primary programming language for our project is Scala. There is no reasonable Scala client for Riak that is kept up to date at the moment, so we use the Java client.
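
To give a flavor of what that looks like in practice, here is a minimal sketch of calling the 1.x Java client from Scala. The bucket and key names are made up for illustration, and the exact API surface may vary by client version:

    import com.basho.riak.client.RiakFactory

    object RiakQuickstart {
      def main(args: Array[String]): Unit = {
        // Connect over protocol buffers (host and port are assumptions)
        val client = RiakFactory.pbcClient("127.0.0.1", 8087)

        // Simple store/fetch round trip against a hypothetical bucket
        val bucket = client.fetchBucket("events").execute()
        bucket.store("event-42", """{"type":"click"}""").execute()

        val fetched = bucket.fetch("event-42").execute()
        println(Option(fetched).map(_.getValueAsString).getOrElse("not found"))

        client.shutdown()
      }
    }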

We are running our application (a rather interesting business analytics platform if I do say so myself) on AWS using Ubuntu images.

We do all of our configuration management, EC2 instance management, monitoring harnesses, maintenance, and much more with Opscode Chef. But that's a whole other story.

We are currently running Riak 1.0.1 and will get to 1.0.2 soon. We started on 0.12.0, I think... maybe 0.13.0. I'll have to go back and check.

On to some of the learning (and mistakes)

Up and Running - Getting started with Riak is very easy, very affordable, and covered well in the documentation.  Honestly, it couldn't be much easier.  But then... things get a bit more interesting.

REST ye not - Riak allows you to use a REST API over HTTP to interact with the data store. This is really nice for getting started, but it's really slow for actually building your applications. This was one of the first easy buttons we decommissioned. We had to move to the protocol buffers interface for everything. In hindsight this makes sense, but we really did originally expect to get more out of the REST interface. It was simply not usable in our case.
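
For what it's worth, with the 1.x Java client the cutover is essentially a constructor swap; something like this, REPL-style (the URL, host, and port here are assumptions):

    import com.basho.riak.client.RiakFactory

    // The HTTP/REST client: fine for poking around, too slow for us in production
    val httpClient = RiakFactory.httpClient("http://127.0.0.1:8098/riak")

    // The protocol buffers client: what we moved everything to
    val pbClient = RiakFactory.pbcClient("127.0.0.1", 8087)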

Balancing the Load - Riak doesn't do much for you when it comes to load balancing your various types of requests. Courtesy of our crafty operations team, we settled on an HAProxy instance on each application node to shuttle requests to and from the various Riak nodes. Let me warn you: this has worked for us, but there be demons here! The configuration details of running HAProxy in front of Riak are about as clear as mud, and there isn't much help to be found at the moment. This is one of those areas where I have really wished for the client to be a bit smarter.

Now, when nodes start dying, getting too busy, or whatever else might come up, you'll be relying on your proxy (HAProxy or otherwise) to handle it for you. We don't consider ourselves at all done on this point, but we'll get there.
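
For the curious, here is a rough sketch of the sort of HAProxy stanza we mean; the addresses, ports, and options are illustrative assumptions rather than our production configuration:

    # TCP load balancing of protocol buffers traffic across Riak nodes
    listen riak_pb
        bind 127.0.0.1:8087
        mode tcp
        balance leastconn
        option tcplog
        server riak1 10.0.1.1:8087 check
        server riak2 10.0.1.2:8087 check
        server riak3 10.0.1.3:8087 check

The check directives are what let the proxy route around a dead or busy node.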

Link Walking (err... Ambling) - We modeled much of our early data relationships using link walking. The learning? S-L-O-W. We had to remove it completely. Play with it, but don't plan on using it in production out of the gate. I think there is much potential here, and we'll perhaps be returning to this feature for some less latency-sensitive work. Time will tell...

Watchoo Lookin' for?! Riak Search - When we started, search was a separate project. But we knew we would have a use for search in our application, so we did everything we could to plan ahead for that fact. By the time we were really getting all hot and heavy (post-1.0.0 deployment), we were finding out a few very interesting things about search. It's VERY slow when you have a large result set; that's just the nature of the way it's implemented. If you think your search result set will return more than 2,000 items, think long and hard about using Riak's search functions for your primary search. This is, again, one of those things we've pulled back on quite a bit. The most important bits of learning were to:
  • Keep Results Sets small
  • Use Inline fields (this helped us a lot)
  • Realize that searches run on ONE physical node and one vnode and WILL block (we didn't really feel this until our data started growing from hundreds of thousands of "facets" to millions)
At this point, we are doing everything we can to minimize the use of search in our application, and where we do use it we're limiting the result sets in various ways and using inline fields pretty successfully. In any event, just remember that Riak Search (stand-alone or bundled post-1.0.0) is NOT a high-performance search engine. This seems obvious now, but we designed around it a bit and had higher hopes.
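
As one concrete example of keeping result sets small: pre-2.0 Riak Search exposes a Solr-like HTTP query interface, and capping the rows parameter is the bluntest lever available. The index and field names below are hypothetical:

    # Cap the result set at 100 rows rather than letting it grow unbounded
    curl "http://127.0.0.1:8098/solr/events/select?q=type:click&start=0&rows=100&wt=json"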
 
OMG it's broken, what's wrong? - The error codes in the early versions of Riak we used were useless to us, and because we did not start with an enterprise support contract it was sometimes difficult to get help. Thankfully, this has improved a lot over time.

Mailing List / IRC dosey-do - Dust off your IRC client and sub to the mailing list.  They are great and the Basho Team takes responding there very seriously.  We got help countless times this way.  Thanks team Basho!

I/O - It's not easy to run Riak on AWS; it loves I/O. To be fair, Basho says this loud and clear, so that's my problem. We originally tried a fancy EBS setup to speed it up and make it persistent. In the end we ditched all that and went ephemeral. It was dramatically more stable for us overall.

Search Indexes (aka Pain) - Want to re-index?  Dump your data and reload.  Ouch.  Enough said.  We are working around this in a variety of ways but I have to believe this will change.

Basho Enterprise Support - Awesome.  These guys know their shit.  Once you become an enterprise customer they work very hard to help you.  For a real world production application you want Enterprise support via the licensing model.  Thanks again Basho!

The learning curve - It is a significant change for people to think in terms of an eventually consistent, distributed key-value store or a distributed async application. Having Riak under the hood means you NEED to think this way. It requires a shifted mindset that, frankly, not a lot of people have today. Build this fact into your dev cycle time or prepare to spend a lot of late nights.
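
To make that mindset shift concrete: with Riak your code has to decide what happens when concurrent writes leave siblings behind. Here is a minimal sketch using the 1.x Java client's conflict resolution hook from Scala; the domain type and merge rule are hypothetical:

    import com.basho.riak.client.cap.ConflictResolver
    import scala.collection.JavaConverters._

    // A hypothetical record where "last write wins" would lose data,
    // so siblings must be merged explicitly.
    case class ClickCount(count: Long)

    class ClickCountResolver extends ConflictResolver[ClickCount] {
      def resolve(siblings: java.util.Collection[ClickCount]): ClickCount =
        if (siblings.isEmpty) null
        // Merge strategy: take the max, assuming counts only ever grow
        else ClickCount(siblings.asScala.map(_.count).max)
    }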

Epiphany - One of the developers at work recently had an epiphany (or maybe we all had a group epiphany). Riak is a distributed key-value data store, and it is a VERY good one. It's not a search engine. It's not a relational database. It's not a graph database. Etc., etc. Let me repeat: Riak is an EXCELLENT distributed key-value data store. Use it as such. Since we all had this revelation and adjusted things to take advantage of it, life has been increasingly nice day by day. Performance is up. Throughput is up. Things are scaling as expected.

In Summary - Reading back through this, I felt it came off a bit negative. That's not really fair, though; we're talking about nearly a year of learning. I love Riak overall and I would definitely use it again. It's not easy, and you really need to make sure the context is correct (as with any database). I think team Basho is just getting started, but they are off to a very strong start indeed. I still believe Riak will really show its stripes as we scale the application further. We have an excellent foundation upon which to build, and our application is currently humming along and growing nicely.

I could not have come close to getting where we are right now with the app we are working on without a good team, either. You need a good devops-like team to build complex distributed web applications.

Lastly, and this is the real summary: Riak is a very good key-value data store. The rest it can do is neat, but for now I'd recommend using it as a KV data store.

I'm pretty open to the fact that even with several months of intense development and a near-ready product under our belts, we are still only scratching the surface.

What I'll talk about next is the stack, the choices we've made for developing a distributed Scala-based app, and how those choices have played out.

Stop Staring at my Polyglot!

I received an interesting comment/question via my blog recently.  It went a bit like this... 

I’m developing a distributed cloud application but my developers are pushing back on me for having a polyglot database strategy.  What should I do?

I won't get into exactly what they are doing, since that would take several more pages. This is something of a stream-of-thought post, so apologies if it's a little rough around the edges. The easiest way to answer is in the context of an application I've been working on for a while that has some similarities to what this person wants to build. Everything I'm describing is part of an app I've been building with a client since earlier this year.

Typically you'll need a few layers of "data storage" for any distributed batch or real-time application (a cloud native application), which is what I understand you are trying to build.

I consider anything that holds data for presentation, computation, or transformation to be part of the data storage architecture, and I like to break it down by time in storage, from the least amount of time to the most.

Short or Very Short Term: single-node caches (like memcached) or volatile compute node memory
Mid Term: queues and IMDGs (in-memory data grids)
Long Term (Durable Storage): Dynamo and BigTable derivatives abound

There are numerous database products that live in, or even between, those tiers these days; more than ever before. By no means is what follows even close to an exhaustive list. A quick list of the ones I have personally worked with in the last few months looks like:

Short Term: Memcached, Redis, RabbitMQ, ZeroMQ, DRAM, APC, MongoDB
Mid Term: Redis, GridGain, RabbitMQ, ZeroMQ, MongoDB
Long Term: Riak, MongoDB, S3, Ceph, Swift, CloudFiles, EBS, HBase

Short-term storage is ALL in memory, not persisted to disk, and not intended to be used for long periods of time. Your application also has to be able to deal with the fact that this type of storage is essentially ephemeral. If the node gets a KILL signal from one source or another, your app needs to know how to deal with it gracefully. In other words, storage here is not durable at all.

Mid-term storage is used for longer-running processes. It benefits greatly from being distributed and having a higher degree of durability. This is generally still where most of the work is done in main system memory (no disk I/O), but it is also where you might do complex calculations or data transformations on your way to your goal. You do things here because it's fast and because the work can be shared amongst lots of workers (like queue subscribers).

Long-term storage is used for exactly that: long-term, durable storage of important data, with sufficient and reasonable interfaces for retrieving that data when needed. Preferably it also supports things like map-reduce jobs, so you can iterate and retrieve what is necessary and then operate on it at one of the higher levels up this stack.

You'll see that I've put some products in multiple categories, which might seem odd until you understand how they work and match the technology to whatever you are trying to achieve from a business perspective.
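
To make the tiering concrete, here is a hypothetical sketch of a read-through/write-through pattern across a short-term and a long-term tier. The KeyValueStore trait stands in for whichever concrete products you pick; none of this comes from a specific client library:

    // Hypothetical minimal interface each storage tier implements
    trait KeyValueStore {
      def get(key: String): Option[String]
      def put(key: String, value: String): Unit
    }

    // Read-through: try the fast ephemeral tier first, fall back to the
    // durable tier, and re-warm the cache on a miss.
    class TieredStore(shortTerm: KeyValueStore, longTerm: KeyValueStore)
        extends KeyValueStore {

      def get(key: String): Option[String] =
        shortTerm.get(key).orElse {
          val fromDurable = longTerm.get(key)
          fromDurable.foreach(shortTerm.put(key, _)) // re-warm the cache
          fromDurable
        }

      // Write-through: durable tier first, then the cache
      def put(key: String, value: String): Unit = {
        longTerm.put(key, value)
        shortTerm.put(key, value)
      }
    }

The point isn't the specific code; it's that keeping the tier boundaries explicit lets you swap products per tier without rewriting the application.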

I have a tendency to avoid things that require overly complex operational management when starting up projects, because I like to get my TCO (Total Cost of Ownership) over time (3-5 years) as low as possible while achieving the project goals and SLAs. There are a couple of exceptions on the list above that do have more operational overhead (MongoDB and HBase), but they are good enough in the right context that you might want to learn and use them anyway.

Now, back to the question at hand: should you use one type of database or many? In this case, I told them they should use as few as possible, possibly only one. The reason is that at this early stage of their project they will value speed, consistency, and lower cost of operations. They are developing an interesting distributed system for cool reasons. I recommended a choice that I think will help them get to their goals quickly and cost-effectively while allowing them to break off pieces of the application later, as and if needed.

Parting words: over time it will be nearly unavoidable that this application (and most distributed applications) ends up being database polyglotoumous. However, polyglot persistence does add a lot of complexity and overhead, and in the early stages of a project it's usually not necessary, unless what you are doing is of great complexity, in which case you might want to break it down into something more manageable anyway.

The NIST Definition of Cloud Computing (Draft)


I thought I'd start the week with a reminder of an oldie but goodie. This document came out after the initial barrage of "what is cloud computing" and "cloud computing defined" posts from a few years ago. But I've always felt that NIST did a great job with it overall.

In my opinion it's still one of the best and most complete definitions of Cloud Computing out there. So, on the off chance that you have not seen this definition of cloud computing, it is definitely worth the time to read through.

One of my earliest definition articles from April 2008, Get Your Head in the Clouds, is still my most trafficked article on this site most weeks.

"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud  model  promotes availability and is composed of five essential characteristics, three service models, and four deployment models..."  --NIST Cloud Definition

In particular I like the way they break down the characteristics, service models, and deployment models.

 

Cloud Operating Systems: Do They Exist Yet?

I was asked an interesting question by a friend of mine a few days ago. He asked, simply, "Are there any Cloud Operating Systems?"

I proceeded to argue that no, there really aren't any, because the current crop of software claiming to be a cloud operating system doesn't meet the necessary prerequisites: being cloud native while still providing the basic services an operating system should, upon which I could build cloud native applications.

Let's think about what an operating system gives you. An OS is essentially an abstraction layer over hardware that provides the necessary interfaces (drivers) to get done the work needed by computer programs. At its most basic, an operating system provides storage, networking, and compute, and manages those things in a fairly transparent way. It also provides a tremendous amount of flexibility to its user.

Wikipedia defines an Operating System as, “software, consisting of programs and data, that runs on computers, manages computer hardware resources, and provides common services for execution of various application software. The operating system is the most important type of system software in a computer system. Without an operating system, a user cannot run an application program on their computer, unless the application program is self booting.”  -wikipedia

If I were to write the definition for Wikipedia, using the definition of an operating system as the base, I would say: "A cloud operating system (COS) is software, consisting of programs and data, that runs on IaaS providers, manages IaaS resources, and provides common services for execution of various application software and SaaS. The cloud operating system is the most important type of system software in a cloud computing system. Without a cloud operating system, a user cannot run a cloud native application program on the cloud computer, unless the application program is self booting."

To me, this sounds most like what we currently often call Platform as a Service (PaaS).

The problem I have with current PaaS systems is that they dictate heavily and take far too much of the underlying services away from the programmer. This is a compromise, of course, but a true operating system is far more flexible than the existing crop of PaaS services. Just imagine if your newest operating system upgrade wouldn't let you access half the CPU cores or the networking services for some reason, and your manufacturer just said tough cookies, that's just how it is. You'd probably return it if you could.

Where this got interesting for me, thinking about it later, was the concept of "self-booting" mentioned in the Wikipedia definition above. If you build the necessary intelligence into your PaaS such that it can bootstrap an application, plus the storage, networking, and compute necessary to run it, and then manage all of that moving forward, then you do in fact have something more closely resembling a cloud operating system as defined above.
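
Purely to illustrate that idea, here is a hypothetical sketch of what a self-booting application might declare to such a cloud operating system. Every type and method below is invented for illustration and does not correspond to any real API:

    // Hypothetical: what an app could declare so the platform can
    // bootstrap storage, networking, and compute on its behalf.
    case class ResourceSpec(
      computeNodes: Int,       // instances to provision
      storageGb: Int,          // durable storage to attach
      exposedPorts: Seq[Int]   // networking to wire up
    )

    trait CloudOs {
      def provision(spec: ResourceSpec): Unit
      def run(appImage: String): Unit
    }

    object SelfBootExample {
      // A "self-booting" app carries its own resource requirements and
      // asks the platform to satisfy them before starting.
      def selfBoot(os: CloudOs): Unit = {
        os.provision(ResourceSpec(computeNodes = 3, storageGb = 100, exposedPorts = Seq(80, 443)))
        os.run("analytics-app:1.0")
      }
    }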