
Practical Matters When Building Servers in the Cloud - Configuration Management

For some time now I’ve been thinking and reading about tools like Chef and Puppet.  A couple of years ago I got a couple of small projects off the ground with Puppet for a job I was working as well.  But the way the cloud is developing has reinforced my general belief that if you are really deploying a cloud computing application and you find yourself logging into a server command prompt for some reason during normal operations, then something has either gone wrong or you are doing something wrong.

The issue of scripted server build and configuration management is hardly new.  There are numerous other resources I’m sure you can search to learn the history of configuration management tools both commercial and open source.  For my part, I’ve been doing a number of experiments and have chosen to work with the Opscode Chef platform.  What follows in this article are a few of the things I’ve learned along the way.

Knowing some Ruby helps a LOT!  Opscode Chef is going to be challenging to get the hang of if you do not know the first thing about the Ruby programming language.  If you are in that camp, you might just want to invest a bit of time w/ a good basic Ruby tutorial first.  A great deal of the flexibility and power of the Chef tools comes from being able to use Ruby effectively.  This is not a major barrier because you do not need to be a l33t hax0r by any means.  But it will help a great deal if you know how variables work, how to write a loop, case statements, and other basic constructs of the language.
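To give a sense of the level I mean, this little sketch covers most of the Ruby you need to get started reading recipes (the package and role names here are just made up for illustration):

```ruby
# Variables and a loop: iterate over a list of package names
packages = ["nginx", "ntp", "git"]

packages.each do |pkg|
  puts "install #{pkg}"   # string interpolation, used everywhere in Chef
end

# A case statement: pick a value based on another value
role = "web"
stack = case role
        when "web" then "nginx"
        when "db"  then "postgresql"
        else            "base"
        end
# stack is now "nginx"
```

If that code reads naturally to you, the Chef DSL will feel familiar very quickly.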

I have been used to building and deploying things with push methods a lot in the past.  With a tool like Chef, things are turned around the other way.  You need to put yourself in the shoes of the server you are trying to configure and think more about pulling data to yourself.  This is essentially what happens after you bootstrap a Chef-configured server.  It registers itself with a Chef server somewhere and then pulls a lot of data down.  It then uses this data (recipes and roles) to turn itself into the server it is told it should be, more or less.
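A recipe, in turn, is just Ruby that declares the state the node should pull itself into.  A minimal hypothetical sketch (the package, service, and template names here are made up for illustration, not from a real cookbook):

```ruby
# recipes/default.rb -- a minimal Chef recipe sketch.
# Each resource declares desired state; the node converges itself to it.

package "nginx" do
  action :install
end

service "nginx" do
  action [:enable, :start]
end

template "/etc/nginx/nginx.conf" do
  source "nginx.conf.erb"                # rendered from the cookbook's templates/ dir
  notifies :reload, "service[nginx]"     # reload only when the file actually changes
end
```

Notice there is no "ssh in and run these commands" anywhere; the node pulls this description and makes itself match it.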

Why would I bother with all this, you might be thinking!?  Well, assuming I have set up my environment properly, defined my roles, and created the proper recipes assigned to those roles, then with a command that looks a bit like the following:

knife rackspace server create 'role[serverrole]' --server-name myawesomeserver --image 49 --flavor 1

NOTE: This example uses the knife tools to interact w/ chef.  Knife is a command-line utility used to interact with a Chef server directly through the RESTful API.

I can start a server (a node) running the software I want running on the rackspace cloud with the code I want on it in about 4-5 minutes.  That’s 4-5 minutes from the time I hit the <enter> key on my keyboard!  Race ya!
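The 'role[serverrole]' in that command maps to a role definition stored on the Chef server.  A hypothetical sketch of such a role in the Ruby DSL (the recipe names and attributes are invented for illustration):

```ruby
# roles/serverrole.rb -- hypothetical role definition
name "serverrole"
description "A web node built the same way on any cloud"

# Recipes applied, in order, when a node takes on this role
run_list "recipe[nginx]", "recipe[myapp]"

# Attributes this role sets on matching nodes
default_attributes "myapp" => { "port" => 8080 }
```

Because the role lives on the Chef server and not on any one machine, every node that claims it converges to the same configuration.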

Now, if I’m building one server, this might not seem very worthwhile.  But, if I am building 100 or 1000... or if I’m going to be building them and tearing them down constantly by the dozens or hundreds per day then yes, this makes ALL THE SENSE IN THE WORLD!  But WAIT! It gets better.

With this command I can launch THE SAME server on a different cloud in 4-5 minutes (AWS us-east-1c in this case):

knife ec2 server create -G WebServers,default --flavor m1.small -i ami-2d4aa444 -I /Users/me/Downloads/mykey.pem -S servername -x ubuntu 'role[serverrole]' -Z us-east-1c

Just think about this for a moment.  From my laptop (a MacBook Pro in this case) I can launch a server that is exactly the same on two different cloud computing providers in well under 10 minutes without ever touching a cloud GUI like the AWS console or the Rackspace Cloud console (which would only slow me down).

Now, it wasn’t exactly trivial to set all this up so that it works.  But, the fact is, it wasn’t that bad either and I learned a TON along the way.

So, this was just a little intro article.  There are LOADS of great resources about Chef out there.  I will warn you about one other thing I learned.  It’s a bit hard to search for information about this software because it’s called “chef” and it has “recipes,” which means a lot of the time you’ll end up with search results from the FoodTV network.  I like food, so I don’t mind sometimes, but it can be annoying.

I've worked with Puppet in the past and love it.  I'm working with Chef now and love it.  I'll almost certainly be using both in the future, for the projects where each is the best fit.

Happy configurating, and I'm sure I'll be writing more about this in the near future.

 

Dynamic DNS Rocks, More Sites Should Use It!

 

I was doing some thinking about DNS today, and in particular Dynamic DNS.  I'm still surprised more people haven't heard of, and do not use, this type of service.  DNS is one of those things that, in my opinion and if at all possible, you should outsource to people who can and will do it better than you.  Yes, that includes internal and external DNS.
In short, dynamic DNS services allow you to provide and programmatically control things like multiple load-balanced A records or CNAMEs for a single domain or web service.  This can be especially important in the context of an elastic cloud-hosted service, where certain things can be ephemeral or come and go very quickly (like an IP address or a compute node).  Just like every other part of your infrastructure, your DNS needs to be elastic and programmable too.
Some of the reasons you might use Dynamic DNS:
  • Load Balancing - A Smarter version of round robin more or less
  • CDN Management
  • Site Migrations
  • Disaster Recovery
  • It'll make you all the rage at parties
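To give a flavor of how simple the programmable side can be, here is a sketch of the classic DynDNS v2 update protocol, which many providers emulate: a single authenticated HTTP GET points a hostname's A record at a new IP.  The host, credentials, and addresses below are made up for illustration.

```ruby
require "uri"
require "net/http"

# Build a DynDNS-v2-style update request: one HTTP GET sets the
# A record for `hostname` to `ip`. Many dynamic DNS providers
# emulate this same protocol.
def build_update_uri(host, user, pass, hostname, ip)
  uri = URI("https://#{host}/nic/update")
  uri.query = URI.encode_www_form("hostname" => hostname, "myip" => ip)
  uri.userinfo = "#{user}:#{pass}"   # HTTP basic auth in the URL
  uri
end

# A successful response body looks like "good 203.0.113.10";
# "nochg" means the record already pointed at that address.
def update_ok?(body)
  body.start_with?("good", "nochg")
end

uri = build_update_uri("members.example.com", "me", "secret",
                       "www.example.com", "203.0.113.10")
# To actually fire it off: Net::HTTP.get_response(uri).body
```

That is the whole trick: because record changes are just HTTP calls, your deploy and failover scripts can reshape DNS as easily as they launch servers.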
I made a short list of some of the Dynamic DNS services I know about. Here they are:
  • Used this one extensively over the years and have met the team.  It's a great service run by an excellent team. I highly recommend it.
  • Used this one a couple of times and it worked out well.  Their interface was a bit odd, but I haven't used it for a couple of years.
  • Have not personally used this one, so I can't provide much more information at the moment.  Will update in the future if that changes.
For further reading, Wikipedia says...
"Dynamic DNS providers provide a software client program that automates the discovery and registration of the client's public IP addresses. The client program is executed on a computer or device in the private network. It connects to the service provider's systems and causes those systems to link the discovered public IP address of the home network with a hostname in the domain name system. Depending on the provider, the hostname is registered within a domain owned by the provider or the customer's own domain name. These services can function by a number of mechanisms. Often they use an HTTP service request since even restrictive environments usually allow HTTP service. This group of services is commonly also referred to by the term Dynamic DNS, although it is not the standards-based DNS Update method. However, the latter might be involved in the provider's systems."
So, while you are thinking about DNS I'll leave you with the following related tip...
Your DNS registrar is not necessarily the same as your Dynamic DNS provider, and it most definitely should NEVER be your ISP/hosting provider.  (Although I have used www.dyndns.com and Dynect together for various reasons.)  This is serious business if things go south w/ your hosting provider.  I have actually seen companies held hostage, pending litigation over trivial matters, when the wrong provider had registrar control.  Your domains are an asset.  Control them yourself, and delegate control of them securely to someone you trust when you need help doing the work.

 

ZeroMQ Musings and Server Build


I just read an excellent writeup about ZeroMQ (ØMQ/ZMQ) yesterday on igvita.com.  This software appears to have been around a while, but I hadn't seen it before.  It's really quite impressive, and I found myself quite curious to play around with it a bit this weekend.  So I built a little rig that would let me do that, based on Ubuntu 10.04 LTS.

I wanted to use the Ruby bindings for my playing around, with Ruby 1.9.2p0.  I quickly found that most of the easy-to-find examples out there are in C or Python, but there is still some good stuff.  I'll add some of the things I found as links at the bottom of this post.

The server build instructions are here in case anyone else is interested.  The following steps will yield a basic build with which you can test ZMQ by writing Ruby code.

If anyone has thoughts, ideas or improvements on this setup by all means please do let me know!  Comments have been off for a while on my blog but I'll be turning them back on after this post.

Server Build - Ruby 1.9.2p0 + ZMQ + Ruby Bindings

While playing around a bit this weekend with ZeroMQ and wanting to mess w/ the Ruby bindings, I found I needed to build a server.  It wasn’t difficult, but these are the steps which might help you get going quickly on the Rackspace cloud.

Provision Your Server

I grabbed mine from the Rackspace cloud.  Your mileage may vary, but I know that a Rackspace 10.04 image is a well-built, no-frills Ubuntu server.  I really like using their templates as the basis for my builds.  Once you have your server up and you are logged in:
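The build boils down to installing the toolchain, building Ruby 1.9.2 and ZeroMQ from source, and installing the Ruby bindings.  A sketch of those steps follows; the package names, tarball URLs, and ZeroMQ version are from memory of that era and may need adjusting for your environment.

```shell
# Build toolchain and libraries the Ruby build needs
apt-get update
apt-get install -y build-essential zlib1g-dev libssl-dev libreadline-dev uuid-dev wget

# Ruby 1.9.2p0 from source
wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.2-p0.tar.gz
tar xzf ruby-1.9.2-p0.tar.gz
cd ruby-1.9.2-p0 && ./configure && make && make install && cd ..

# ZeroMQ 2.x from source (installs into /usr/local)
wget http://download.zeromq.org/zeromq-2.0.10.tar.gz
tar xzf zeromq-2.0.10.tar.gz
cd zeromq-2.0.10 && ./configure && make && make install && ldconfig && cd ..

# The Ruby bindings, pointed at the freshly built library
gem install zmq -- --with-zmq-dir=/usr/local
```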

You are now all set with a ruby 1.9.2p0 and zeroMQ enabled server on Ubuntu 10.04 in the Rackspace cloud.  If this was helpful then let me know what you do with it as this is a very exciting combination.

Note:  This will work well with any Ubuntu 10.04 server. It doesn’t have to be a Rackspace Cloud Server.

For next steps take a look at the basic zeroMQ example published by Will’s Web Miscellany.
Also of notable interest is the Mongrel2 project, which incorporates ZMQ.  The Mongrel2 manual is very good reading as well.
Other helpful links I found have been tagged on my Delicious account here.  I'll be adding more as I find them.

Some Cloud Thoughts on a Clear and Sunny Day

Cloud Computing is a deployment model and cloud computing is a business model.  Cloud computing is not some silver bullet magical thing.  It's not even easy *gasp* sometimes.

As a deployment model, cloud computing can be summed up simply: on-demand, self-service, reliable services with low to no capital costs for the consumer.

As a business model it is summed up as, again, low to no long-term capital costs (and the associated depreciation) and pay-as-you-go service provider pricing models.  In reality these are mountains of micro-transactions aggregated into monthly and yearly billing cycles.  For example, I spent $0.015 for a small compute instance w/ a cloud infrastructure provider because I just needed an hour of an Ubuntu 10.04 Linux machine to test a quick software install combination and update a piece of documentation.  I'll get a bill for that at the end of the month.  Get this...

An hour of compute time costs me 3.3 times LESS than a piece of hubba bubba chewing gum cost me at $0.05 (one time use only) over 30 years ago. #cloud

Enterprises and service providers are learning very quickly from the early public cloud vendors how to do things differently and often more efficiently.  It was well summed up in the Federal CTO's announcement of the government application cloud: basically, we saw that consumers could get IT services for orders of magnitude less than we could, so we're fixing that by emulating what the companies that serve those consumers are doing. Smart.  Bechtel did the exact same thing years ago when it observed that Amazon's cost per GB of storage was orders of magnitude less than its own, asked the very important question of why, and then answered it very well.
A couple of years ago now I helped found a company called nScaled.  nScaled does business continuity as a service.  It has only been possible, with the resources we have had, at the price points we offer, and at the speed we have moved, because we followed cloud computing deployment and business models.  It would not have been possible for us to build this business when we did, and the way we have, without these models.
In March 2008 I called cloud computing a renaissance.

It is my opinion that Cloud Computing is a technology architecture evolution that, when properly applied to business problems, can enable a business revolution. I've been saying this for a while but in recent weeks I have actually come to prefer the term renaissance over revolution.

Today, two years into a startup that uses the raw power of cloud computing deployment and business models across the board to enable new ways for companies to consume disaster recovery and business continuity solutions, I can say without a doubt that I believe cloud computing is a renaissance more than ever before!

 

Excellent RailsEnvy List in Ep.101

RailsEnvy Episode #101

The list of links from this podcast episode was particularly intriguing for me this week.

Of particular interest to me are:

TorqueBox - JRuby-backed Rails application platform (and more) built on JBoss AS.  Very intriguing, and I'll be experimenting right away with this for an application I've just pushed out into production.

ShardTheLove - An ActiveRecord horizontal sharding solution with built-in support for migrations, testing, and more.  I will also be evaluating this for inclusion in a new application I am just launching.

Jammit - A static asset packaging solution for Rails applications. Finding a good solution for this can sometimes be challenging; however, doing it in any modern web application is pretty much mandatory.  I look forward to testing this library.

Great stuff, and worth a look if you're pumping out Rails applications that you want to be scalable on-demand.