Instead of an Article, A Response to a Scalability Article

Nati Shalom has once again posted a thoughtful and excellent blog post:

Twitter as a scalability case study


So, I just put my comment right there on his blog, and it's probably easier to read it there in-line. Here is a re-post of my response, though, since it was practically a blog post anyway. It doesn't really stand totally on its own, so you should check out Nati's post first.

A wise article. Thank you. Here are my thoughts after reading it, some directly related and some tangentially related...

Slowly but surely, people are catching on that loosely coupled, asynchronous-capable systems architecture stacks and software architectures are critical to building truly scalable systems. Twitter even knew this early on and tried to adjust many times, but for reasons unknown to me they still made, and make, some rather odd choices about their systems and software architecture; their MySQL usage, for example. The funny thing is, the body of work needed to build these sites already exists. People just keep focusing on the wrong things, like language wars, or putting individual problems into overly broad problem domains, or applying the wrong solutions altogether.
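
To make the loose-coupling point a little more concrete, here is a minimal sketch in Python of what I mean by separating the request path from the heavy lifting. This is a toy, not Twitter's actual design: the names (post_status, fan_out_to_followers) are mine, and a real system would use a durable message broker and a pool of workers rather than an in-process queue and a single thread.

import queue
import threading

work_queue = queue.Queue()

def post_status(user, text):
    # Web request path: enqueue the work and return immediately.
    work_queue.put((user, text))
    return "accepted"

def fan_out_to_followers():
    # Background worker: decoupled from the request path, scaled separately.
    while True:
        user, text = work_queue.get()
        if user is None:  # sentinel value tells the worker to stop
            work_queue.task_done()
            break
        # Stand-in for the slow part: writing to follower timelines, etc.
        print(f"delivering {user}'s update to followers: {text}")
        work_queue.task_done()

worker = threading.Thread(target=fan_out_to_followers)
worker.start()

post_status("alice", "hello world")  # returns right away
work_queue.put((None, None))         # ask the worker to shut down
worker.join()

The point is not the queue itself but the seam it creates: the front end and the fan-out can fail, scale, and be rewritten independently of each other.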

Today's languages and frameworks can take some of the sting out of developing and scaling an application up to a point, but once an application moves beyond any significant traffic level, problems inevitably arrive that lay bare all the bad choices that came before. The people who really know how to address and fix those problems are few and far between. The people who know, from experience, how to avoid those problems from the very beginning are even more rare. The companies who have those rare people on staff and actually listen to them almost don't exist at all.

I think, in business, the definition of insanity is doing the same thing over and over and expecting a different result. If so, we're seeing industry-wide insanity around designing web-based applications that will scale. People just keep making the same mistakes again and again.

I've been thinking about all this in terms of site traffic for the average systems and software architecture underlying a web application that might grow on today's terms. I think the mapping from page views per month to site category looks loosely like this (with a quick sketch in code after the list):

0 - 100,000 page views/month = micro site
100,000 - 1,000,000 page views/month = small site - troubles start here
1,000,000 - 100,000,000 page views/month = large site - troubles magnify dramatically here. OMG Rewrite!
100,000,000 - 1 billion page views/month = very large site - nothing that used to work still works, because it simply can't; your system died a tragic death
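
If you prefer it in code form, the same tiers read roughly like this; the thresholds are just the ones from the list above, and the function name is mine:

def classify_site(page_views_per_month):
    # Thresholds taken from the tiers listed above.
    if page_views_per_month < 100_000:
        return "micro site"
    elif page_views_per_month < 1_000_000:
        return "small site"
    elif page_views_per_month < 100_000_000:
        return "large site"
    elif page_views_per_month <= 1_000_000_000:
        return "very large site"
    else:
        return "beyond the scope of this post"

print(classify_site(2_500_000))  # large site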

Each of these requires certain skills and knowledge to build for, but all of them can be handled if planned for well up front. It's commonly held that premature scaling is the root of all evil (and cost overruns). It's just not true. Designing your site's systems and software architecture to handle very large traffic doesn't require the CAPEX it once did. It's just a technology architecture problem, and it requires a broad range of skills to solve.

Anyway, I've officially rambled on and I want ice cream so bye!

Thanks for reading ProductionScale!