It’s a virtual case of déjà vu

Virtualization is the buzzword du jour. Everyone is touting its benefits across many areas of IT. But do they really know what virtualization is?

In a nutshell, server virtualization is a technique that lets you run multiple versions of an operating system, multiple operating systems or multiple copies of the same operating system simultaneously on a single machine, and have them appear to the world as separate systems. Storage virtualization, on the other hand, makes many physical devices look like a single device, or like several “devices” that may bear no resemblance to the actual equipment.

In servers, the technology is very much a case of “back to the future.” Mainframe veterans know it well; IBM mainframes have used it since the 1960s. And most Unix server vendors have marketed virtualization-capable systems for many years as well. Until recently, however, the Intel platform presented some technical challenges; Intel-based virtual systems are relatively new.

Virtualization, regardless of the platform, usually works like this: A program known as a hypervisor, or virtual machine monitor (VMM), runs either directly on the hardware or on a host operating system such as Windows.

The VMM creates and controls virtual machines – the multiple operating systems that appear as though they’re running on individual computers. When you log on to a virtual machine, it’s the VMM that passes the operating system instructions on to the physical hardware and passes the hardware’s feedback to the virtual machine. It knows how to keep the virtual machines running on a single computer from bumping heads and interfering with each other – in fact, virtual machines are often used to create test environments on production systems, since if the test “machine” crashes, it doesn’t affect the main system.

A virtual machine has access to all of the physical resources of its host hardware: memory, network interfaces, disk drives and so forth. It is configured, when created, to use some or all of them. For example, on a computer that has 32 GB of memory, the administrator may tell the VMM to allocate 4 GB to each virtual machine. To the users of that virtual machine, it will look as though the “computer” they’re running has only 4 GB of memory.
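The memory-allocation example above can be sketched in a few lines of code. This is a toy illustration, not a real hypervisor API: the function and VM names are invented, and it shows only the basic bookkeeping of carving a fixed host pool into per-VM shares.

```python
# Toy sketch (not a real hypervisor API): carving a 32 GB host's memory
# into fixed 4 GB shares for virtual machines, as in the example above.

HOST_MEMORY_GB = 32

def allocate_vms(vm_names, gb_per_vm, host_memory_gb=HOST_MEMORY_GB):
    """Assign each VM a fixed memory share, refusing to oversubscribe."""
    allocations = {}
    remaining = host_memory_gb
    for name in vm_names:
        if gb_per_vm > remaining:
            raise MemoryError(f"not enough host memory left for {name}")
        allocations[name] = gb_per_vm
        remaining -= gb_per_vm
    return allocations, remaining

vms, free = allocate_vms(["vm1", "vm2", "vm3", "vm4"], gb_per_vm=4)
# Each guest sees only its own 4 GB share; the rest stays with the host.
```

Real hypervisors are more flexible than this fixed-share toy; many can also overcommit memory and reclaim it from idle guests.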

You can actually have a virtual network running on a single computer. Each virtual computer can have its own IP address, even though all of the virtual NICs are talking through the single hardware NIC on the host.
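To make the single-NIC idea concrete, here is a toy illustration of what the VMM's virtual switch does: packets arrive on the one physical NIC and are delivered to the right virtual machine by destination IP address. The addresses and VM names are invented for illustration; this is not real networking code.

```python
# Toy illustration: several virtual NICs, each with its own IP address,
# all behind one physical NIC. The VMM's virtual switch delivers each
# inbound packet to the VM that owns the destination address.

virtual_nics = {
    "10.0.0.11": "vm1",
    "10.0.0.12": "vm2",
    "10.0.0.13": "vm3",
}

def deliver(packet):
    """Route a packet arriving on the host NIC to the right VM."""
    vm = virtual_nics.get(packet["dst_ip"])
    return vm if vm is not None else "dropped"

print(deliver({"dst_ip": "10.0.0.12", "payload": "hello"}))  # vm2
```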

Your virtual computers don’t need to all be running the same operating system, either. Hosting companies take advantage of this to satisfy their clients’ needs: On one server, they can host virtual machines running Windows and virtual machines running Linux, for example, and their customers won’t have any idea they’re not running on separate physical machines.

One form of virtualization, known as emulation, can even run operating systems meant for entirely different hardware. In these cases, the VMM has the additional task of taking the commands from the virtual machine and translating them into commands that the underlying hardware understands, then interpreting the hardware’s outputs for the virtual machine. This technique is useful if you have to retire an obsolete computer, but still need to run a virtual version of it on newer hardware that ordinarily would not be compatible.
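The translation step at the heart of emulation can be sketched as a lookup from guest instructions to host instructions. The opcode names below are invented for illustration; real emulators translate actual machine code, and modern ones often cache translated blocks for speed.

```python
# Toy emulation sketch: translating instructions for a retired "guest"
# architecture into equivalent "host" instructions. Opcode names are
# invented for illustration only.

GUEST_TO_HOST = {
    "OLD_ADD": "ADD",
    "OLD_LOAD": "MOV",
    "OLD_STORE": "MOV",
}

def translate(guest_program):
    """Map each guest opcode to its host equivalent, keeping operands."""
    host_program = []
    for opcode, *operands in guest_program:
        if opcode not in GUEST_TO_HOST:
            raise ValueError(f"unsupported guest opcode: {opcode}")
        host_program.append((GUEST_TO_HOST[opcode], *operands))
    return host_program

translated = translate([("OLD_LOAD", "r1", "0x10"), ("OLD_ADD", "r1", "r2")])
```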

Virtualization has a cost, however. The overhead of running the VMM affects performance. Consequently, chip vendors are developing hardware virtualization technologies. Intel, for example, has already introduced CPU virtualization technology and is now working on virtualized input/output (I/O); AMD has already introduced its own I/O virtualization technology.

This should reduce the VMM overhead by performing some tasks in hardware rather than software.
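On Linux, you can see whether a CPU offers these hardware extensions by looking at its flags in /proc/cpuinfo: Intel's show up as `vmx`, AMD's as `svm`. A minimal sketch (the function name is ours; it takes the file's text as input):

```python
# Check for hardware virtualization support on Linux: Intel's extensions
# appear as the "vmx" CPU flag, AMD's as "svm", in /proc/cpuinfo.

def virtualization_flag(cpuinfo_text):
    """Return 'vmx', 'svm', or None based on the CPU flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"   # Intel VT-x
            if "svm" in flags:
                return "svm"   # AMD-V
    return None

# On a real system:
# with open("/proc/cpuinfo") as f:
#     print(virtualization_flag(f.read()))
```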
