I just got back from a quick trip to LinuxWorld, where I discovered that the vendors are still *gasp* completely clueless. They’re stuck in this “The Server is Precious” mindset, which prevents them from understanding the real value in Linux. Linux isn’t an operating system, it’s a framework. Datacenter computing isn’t a collection of random boxes, it’s a compute grid. Of course, for this to all make sense, software must be built with the datacenter in mind. Good IT shops understand this, and bad ones go to LinuxWorld and suck up Avocent bullsh!t about remote server management.
“The Server is Precious”:
This old-school mentality basically entails a staff of administrators who lovingly hand-craft every server for its new purpose. The server performs this purpose for some period of time, then fails spectacularly. The admins profusely apologize for the failure, then sell their management on a cluster (-f_ck) at five times the price of the original server. The cluster, being less reliable than the original server, proceeds to fail in a more subtle, less recoverable way, taking the service down again. The admins, who probably understand something about grid computing, refuse to bring it up with their management, since they’re afraid of change, and Server TLC is their only modus operandi. So, the company continues to hunt around for a better set of crutches for its poorly designed, highly persistent application. They’ll probably go to Windows Server, sucking up the Microsoft crap about five-nines.
Linux is a Framework:
Okay, so now that it’s clear that you shouldn’t think of servers as these little hand-crafted beasties, how do we move on? The key is to think of Linux as an application framework rather than a monolithic operating system. A monolithic operating system is generally hand-installed, comes with every imaginable feature, and must be upgraded by hand every few years. A framework, however, is simply the collection of open source software that surrounds the business application. This framework is the minimum set of libraries and tools required to operate and maintain the application in production. Each application should have its own framework. Keep it simple.
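To make “minimum set of libraries and tools” concrete, here’s a rough sketch in Python: treat the software stack as a dependency graph and ship only the transitive closure of what the application actually needs. All package names below are invented for illustration.

```python
# Sketch: compute the minimal framework (transitive dependency closure)
# for one application. The dependency graph is hypothetical.
from collections import deque

# Hypothetical graph: package -> packages it requires.
DEPS = {
    "webapp":       ["libhttpd", "libdb-client"],
    "libhttpd":     ["libssl", "libc"],
    "libdb-client": ["libc"],
    "libssl":       ["libc"],
    "libc":         [],
    "libgui":       ["libx11", "libc"],  # installed by "monolithic" distros,
    "libx11":       ["libc"],            # but NOT needed by webapp
}

def minimal_framework(app, deps=DEPS):
    """Return the smallest set of packages the app needs in production."""
    needed, queue = set(), deque([app])
    while queue:
        pkg = queue.popleft()
        if pkg in needed:
            continue
        needed.add(pkg)
        queue.extend(deps.get(pkg, []))
    return needed

print(sorted(minimal_framework("webapp")))
# -> ['libc', 'libdb-client', 'libhttpd', 'libssl', 'webapp']
```

Note that `libgui` and `libx11` never make it into the framework: if the application doesn’t pull it in, it doesn’t ship. That’s the whole point.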
Datacenter Grid Computing:
Every system in your datacenter should be viewed as a meaningless, stateless blob of compute capacity. Think of it as a big CPU hopper. The Infrastructure team is responsible for throwing the lowest-price, least stinky CPU manure into the top of the hopper. Their only job is to keep costs low, and the hopper full enough to satisfy demand. They need to understand (and be measured by) the cost-per-pound of said CPU, and they need to charge each service for the capacity it consumes. Think of it this way: you need to buy processors and DIMMs. Anything else is overhead. Some of it (like a mainboard) is probably necessary. But why do you need a discrete power supply? Why a discrete chassis? What the hell do you need remote floppy redirect over a Java applet for, again?
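The chargeback math above is trivial, which is exactly the point. A sketch, with all prices and capacities invented for illustration:

```python
# Sketch: charge each service for the capacity it pulls from the hopper.
# Every number here is made up; plug in your own cost-per-pound.

HOPPER_NODES = 100      # identical commodity nodes in the hopper
NODE_COST = 1200.0      # amortized dollars per node per month (invented)
NODE_CAPACITY = 8.0     # capacity units per node (e.g. cores)

# Infrastructure's metric: what does one unit of capacity cost?
cost_per_unit = (HOPPER_NODES * NODE_COST) / (HOPPER_NODES * NODE_CAPACITY)

def monthly_charge(units_consumed):
    """Bill a service for the capacity units it consumed this month."""
    return units_consumed * cost_per_unit

# A service that consumed 40 capacity-units:
print(f"${monthly_charge(40):.2f}")  # -> $6000.00
```

Notice what isn’t in the formula: no line item for remote floppy redirect, no per-server pampering. If Infrastructure lowers the cost-per-unit, every service’s bill drops, and that is the only number they should be measured on.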
Make the Software an Appliance:
Ah! Now we’re getting somewhere. Once you have a big hopper of CPU cycles, you can simply instantiate the software on this resource as needed. Need more? Take it from the hopper on a just-in-time basis. Done with your resources? Back in the hopper. Suddenly, things like a complete, mangled (ahem, managed) operating system become superfluous. Since you built a framework of your minimum set of requirements around your application, you can just toss that blob into the compute farm and execute it to your heart’s content.
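The take-it, use-it, return-it lifecycle can be sketched as a simple resource pool. This is an illustrative toy, not anyone’s actual provisioning system; the class and node names are made up.

```python
# Sketch: the datacenter as a hopper of interchangeable, stateless nodes.
class Hopper:
    def __init__(self, nodes):
        self.free = list(nodes)  # every node is a meaningless blob of capacity
        self.in_use = {}         # node -> service currently running on it

    def take(self, service, count):
        """Pull nodes from the hopper just-in-time and run the appliance."""
        if count > len(self.free):
            raise RuntimeError("hopper is empty -- Infrastructure's problem")
        taken = [self.free.pop() for _ in range(count)]
        for node in taken:
            self.in_use[node] = service
        return taken

    def release(self, nodes):
        """Done with your resources? Back in the hopper. Nothing to mourn."""
        for node in nodes:
            self.in_use.pop(node, None)
            self.free.append(node)

hopper = Hopper([f"node{i}" for i in range(10)])
web = hopper.take("webapp", 3)   # instantiate the appliance on 3 nodes
hopper.release(web)              # demand dropped: capacity goes back
print(len(hopper.free))          # -> 10
```

Because the appliance carries its own framework, `take` and `release` don’t care which physical nodes they hand out, and no admin ever apologizes for a hand-crafted box dying.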
Rubber, meet Road:
Okay, this is all good to talk about. In fact, lots of people are talking this way. So, who gets it… and who is completely missing the boat?
On the Boat:
- Amazon Web Services
- RedHat’s Xen Platform
- Rackable Systems
- Dell Cloud Computing
- Silicon Mechanics
Still on the Dock:
Am I crazy? Post a comment, let me know. Who else is still on the dock? Who is the captain of the boat?