Virtualize Everything


I had a good visit to IBM's Executive Briefing Center in Raleigh yesterday. I was there to learn about data centers, power, blades and virtualization. I've put some of my notes on data center power requirements at Between the Lines.

When I first thought about virtualization as a tactic in the data center, I assumed that the point was saving hardware costs. That's not true for a couple of reasons.

First, the most sophisticated virtualization solutions, the ones like VMWare's ESX that can run Windows as well as Linux, aren't cheap. In fact, by my calculations, VMWare has figured out how to take just about all of the hardware savings for themselves in their software licensing. Competition is eating into that, but you're still not going to save much money on the initial purchase.

The real savings come in the form of decreased server management. Around 2000, labor costs associated with managing servers surpassed server capital costs for the first time, and they've been climbing steadily ever since.

Virtualization hides physical constraints, makes it easier to deploy, grow, and migrate applications, minimizes the impact of changes to physical resources, and enables hardware change-outs to be accomplished transparently, without "maintenance windows." These are the real advantages of moving to virtualization in the data center.

Organizations usually approach virtualization in two phases. The first phase is physical consolidation of multiple servers. Over the years, the trend has been to isolate applications on their own servers in order to survive DLL Hell, which often results in servers with very low utilization numbers. Consolidating those applications on virtual servers retains the advantages of separate servers, but with fewer physical machines to manage. Another advantage is improved resource flexibility. In this phase, the organization probably still uses disparate management tools for individual servers.
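
To make the consolidation math concrete, here's a minimal sketch of the back-of-the-envelope exercise a phase-one project starts with. The server names, utilization numbers, and thresholds are all made up for illustration: flag the under-utilized machines and estimate how many virtualization hosts their combined load would fit on.

```python
import math

# Hypothetical average CPU utilization per physical server, as a fraction
# of one machine. These numbers are invented for illustration.
utilization = {
    "web-01": 0.08, "web-02": 0.05, "app-01": 0.12,
    "app-02": 0.07, "db-01": 0.55, "mail-01": 0.10,
}

CANDIDATE_THRESHOLD = 0.20  # servers below this are consolidation candidates
TARGET_HOST_LOAD = 0.60     # keep some headroom on each consolidated host

# Flag the under-utilized servers and add up their combined load.
candidates = {name: u for name, u in utilization.items() if u < CANDIDATE_THRESHOLD}
total_load = sum(candidates.values())

# Estimate how many virtualization hosts that combined load would fill.
hosts_needed = max(1, math.ceil(total_load / TARGET_HOST_LOAD)) if candidates else 0

print(f"{len(candidates)} of {len(utilization)} servers are consolidation candidates")
print(f"Their combined load of {total_load:.2f} fits on about {hosts_needed} host(s)")
```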

The goal of the second phase is logical simplification. In this phase, the share of resources, including network and disk resources, being virtualized approaches 100%, giving maximum resource flexibility. This phase is usually marked by a move to unified management and automation tools.

Server virtualization performs better on big SMP boxes than on the equivalent number of processors in pizza boxes--at least for applications with unpredictable loads. The reason is pretty simple: the more applications that can be brought onto the same hardware, the more shared headroom there is to handle bursty loads. It's kind of an N+1 solution to the bursty-load problem instead of a 2N solution.
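
Here's a rough sketch of that argument with made-up numbers: ten bursty applications, each idling at a tenth of a CPU but occasionally spiking to a full one. Sizing each application's own box for its peak (the 2N-style answer) costs far more capacity than sizing one shared pool for the worst concurrent burst you're ever likely to see (the N+1-style answer).

```python
import random

random.seed(1)  # reproducible illustration

APPS = 10                 # number of bursty applications (made-up workload)
BASELINE = 0.10           # steady-state load per app, in CPUs
PEAK = 1.00               # load per app while bursting, in CPUs
BURST_PROBABILITY = 0.05  # chance an app is bursting in any given interval

# "2N-style" sizing: every application gets its own box sized for its own peak.
dedicated_capacity = APPS * PEAK

# "N+1-style" sizing: consolidate onto one pool and size it for the worst
# concurrent burst observed over many simulated intervals.
worst_concurrent = 0.0
for _ in range(100_000):
    load = sum(PEAK if random.random() < BURST_PROBABILITY else BASELINE
               for _ in range(APPS))
    worst_concurrent = max(worst_concurrent, load)

print(f"Dedicated capacity needed: {dedicated_capacity:.1f} CPUs")
print(f"Shared pool worst case:    {worst_concurrent:.1f} CPUs")
```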

Intel and AMD plan to make changes to their architectures to specifically support virtualization. This will have several positive effects:

  1. First, VMWare's proprietary advantage will be much smaller, and other virtualization technologies, including open source projects like Xen, will be able to more easily provide virtualization features such as support for Windows (a quick check for these processor extensions is sketched after this list).
  2. Second, with the architecture changes, the hypervisor layer could be as small as 35,000 lines of code. That's small enough to be embedded in system firmware, so that all servers just "know" how to virtualize the OS layer.
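
Once the hardware ships, on Linux these extensions show up as CPU flags: vmx for Intel's technology and svm for AMD's. Here's a minimal sketch of checking for them, assuming a /proc/cpuinfo layout like today's:

```python
def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the hardware-virtualization flags (vmx or svm) the CPU reports."""
    try:
        with open(cpuinfo_path) as f:
            lines = f.read().splitlines()
    except OSError:
        return set()  # not Linux, or /proc isn't available
    flags = set()
    for line in lines:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}


if __name__ == "__main__":
    found = hardware_virtualization_flags()
    if found:
        print("Hardware virtualization support:", ", ".join(sorted(found)))
    else:
        print("No vmx/svm flag; hypervisors must fall back on software techniques")
```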

In short, virtualization will become a commodity. VMWare and others will have to figure out how to make money on the tools for managing and automating the virtualization layer.

Update: Intel's virtualization technology is code named Vanderpool. AMD's is code named Pacifica. Tom Yager had a recent column on virtualization at InfoWorld.

