Virtualization 101: The Basics


What is Virtualization?

Traditionally, servers were large computers with their own chassis, motherboards, CPUs and other hardware. Each server required its own power supply, network connection and space in the data center. Virtualization allows companies to reduce these requirements. The basic concept of virtualization is that a large server, known as the host or hypervisor, runs a special operating system or application that allows it to host guest servers. Each guest shares the physical hardware of the host but is, in fact, unaware that it is a virtual server.

Some companies virtualize servers to save money, by not having to keep purchasing hardware and operating system software. Other companies consolidate their servers onto virtual hosts to save room in their data centers. Companies wishing to go green virtualize their servers to reduce the amount of electricity and heat generated by physical servers.

A Look at the Hypervisor

The core of virtualization is the hypervisor, or host system. The hypervisor runs a purpose-built operating system or application that hosts the guest machines. When a guest machine attempts to access a piece of hardware, such as a network card or video display, the hypervisor software intercepts the call. The hypervisor then retrieves the information or performs the action the guest requested and passes the results back to the guest operating system.
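The intercept-and-forward idea can be sketched in a few lines of Python. This is a toy model, not a real hypervisor: the class and method names are illustrative inventions, and the "hardware" is simulated. The point is only the flow of control, in which the guest believes it is talking to its own device while every request is actually serviced by the host.

```python
class PhysicalNic:
    """Stands in for the host's one real network card."""
    def send(self, guest_name, payload):
        return f"sent {len(payload)} bytes for {guest_name}"

class Hypervisor:
    def __init__(self):
        self.nic = PhysicalNic()  # a single shared piece of hardware

    def handle_io(self, guest_name, device, payload):
        # Every guest "hardware" call lands here and is serviced
        # by the host's physical devices on the guest's behalf.
        if device == "nic":
            return self.nic.send(guest_name, payload)
        raise ValueError(f"no such device: {device}")

class Guest:
    def __init__(self, name, hypervisor):
        self.name = name
        self._hv = hypervisor  # invisible to the guest's "operating system"

    def send_packet(self, payload):
        # What looks like direct hardware access is an intercepted call.
        return self._hv.handle_io(self.name, "nic", payload)

hv = Hypervisor()
web = Guest("web01", hv)
db = Guest("db01", hv)
print(web.send_packet(b"hello"))  # both guests share the same physical NIC
print(db.send_packet(b"query"))
```

Note that both guests hold a reference to the same `Hypervisor` object, mirroring how all guests on a host ultimately share one set of physical devices.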

Several companies produce hypervisor software. VMware is one of the largest providers of dedicated virtualization technology. Microsoft and Oracle have baked virtualization into their current operating system offerings. Microsoft calls its offering “Hyper-V.” Oracle, through its acquisition of Sun Microsystems, calls its technology Solaris “Zones.”

Open source versions of virtualization exist as well. A technology known as Xen runs on various Linux platforms.

The Guest is Always Right

In the simplest terms, a guest is a virtual machine running on a hypervisor. When a new guest server is required, the administrator first builds the virtual machine container. During this process the administrator configures all of the “hardware” settings the guest thinks it will have: how much memory, how much storage space, the number and speed of the CPUs and so on. Once these items are configured, the administrator can install the operating system and other applications onto the guest. As far as the virtual server is concerned, it is the only one running on that hardware; it has no awareness of other guest machines running on the host.

Virtual machines can even be turned into templates, or images preconfigured with all of the settings a company requires. These images can be spun up far faster than installing an entire operating system from scratch.
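A minimal sketch of the template idea: a template is just a fully specified machine definition, and new guests are copies of it with a few fields overridden. The field names and values below are illustrative assumptions, not any vendor's format; the point is that cloning skips the slow step (the OS install) because the template already carries it.

```python
import copy

# Illustrative VM template: "hardware" settings plus an already-installed OS.
TEMPLATE = {
    "name": "base-template",
    "memory_gb": 4,
    "storage_gb": 40,
    "vcpus": 2,
    "os_installed": True,  # the template already carries the OS image
}

def clone_from_template(template, name, **overrides):
    """Create a new guest definition by copying a template."""
    vm = copy.deepcopy(template)  # never mutate the shared template
    vm["name"] = name
    vm.update(overrides)
    return vm

web = clone_from_template(TEMPLATE, "web01")
db = clone_from_template(TEMPLATE, "db01", memory_gb=8)
print(web["name"], db["memory_gb"], TEMPLATE["memory_gb"])
```

The `deepcopy` matters: each clone must be independent of the template and of its sibling clones, just as each guest gets its own virtual disk copied from the image.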

Some virtualization technologies can run several different types of operating systems. VMware and Microsoft’s Hyper-V, for example, can run Windows, Linux and Solaris guests. Sun’s Zones support only the Solaris operating system.

Hardware Considerations

Modern operating systems require more and more resources with every new version. In order to run multiple guests on a hypervisor, the hardware must be able to support these requirements. What may seem like a lot of memory or storage for one physical server would not be enough for a hypervisor being tasked with running a dozen or more operating systems simultaneously.

A single hypervisor with over 64 GB of RAM, several terabytes of storage and multiple CPUs is not uncommon. Fast hardware is the order of the day, ensuring that requests made by guests (often at the same time) are handled quickly. On a hypervisor without sufficient hardware, or one running too many virtual machines, guest operating systems and their applications will lag and performance will suffer.
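The sizing math is worth making explicit. The figures below are illustrative assumptions, not vendor guidance: a 64 GB host, a reserve for the hypervisor itself, and a flat per-guest allocation.

```python
# Back-of-the-envelope capacity check for guest memory on one host.
host_ram_gb = 64
hypervisor_overhead_gb = 4  # assumed reservation for the hypervisor itself
guest_ram_gb = 4            # assumed flat allocation per guest

max_guests = (host_ram_gb - hypervisor_overhead_gb) // guest_ram_gb
print(max_guests)  # 15
```

Even a generously equipped host tops out quickly once every guest claims its share, which is why "a lot of memory for one physical server" is not a lot for a hypervisor.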

What to Virtualize?

Organizations often begin their virtualization efforts with non-critical servers or ones with minimal hardware requirements. File shares and print servers are common candidates. Other systems, such as domain controllers or authentication servers, are virtualized as the company becomes more confident in the technology and its staff’s ability to administer it properly. Small databases and other applications the company uses eventually get consolidated and virtualized.

Systems not commonly virtualized are large databases or high-transaction servers, where the sharing of resources might cause performance problems. Mission-critical applications or systems are generally not virtualized unless a well-tested failover or high-availability process is in place.

The Hazards of Virtualization

The biggest threat to virtualization is hardware failure. When a hypervisor has problems, the damage is not limited to a single server: every guest operating system running on that host is affected. Companies must have a plan to recover from the potential disappearance of several servers at once should a hypervisor fail.

Disaster recovery systems, whether physical or virtual, allow companies to remain online when systems fail. VMware, Microsoft and other virtualization technology vendors provide solutions to deal with these events. For example, VMware’s vMotion technology can shift guests from one hypervisor to another as needed.
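The failover idea can be simulated in miniature. This is a toy re-placement loop under invented names and a fixed guests-per-host limit, not VMware's vMotion or any vendor's API: when a host fails, its guests are moved onto the least-loaded surviving hosts that still have room.

```python
CAPACITY = 4  # assumed guests-per-host limit for this sketch

def fail_over(hosts, failed_host, capacity=CAPACITY):
    """Move every guest off failed_host onto surviving hosts with room.

    hosts maps a hypervisor name to the list of guests it runs.
    Returns the new placement; raises if the survivors cannot absorb them.
    """
    stranded = hosts.pop(failed_host)
    for guest in stranded:
        # Pick the surviving host currently running the fewest guests.
        target = min(hosts, key=lambda h: len(hosts[h]))
        if len(hosts[target]) >= capacity:
            raise RuntimeError(f"no capacity left for {guest}")
        hosts[target].append(guest)
    return hosts

cluster = {"hv-a": ["web01", "db01"], "hv-b": ["app01"], "hv-c": []}
print(fail_over(cluster, "hv-a"))  # hv-a's guests land on hv-b and hv-c
```

The `RuntimeError` branch is the sketch's version of the planning problem in the paragraph above: failover only works if the surviving hypervisors have spare capacity set aside in advance.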

Virtual machine sprawl is another hazard. Because servers are so easy to deploy, companies must control who can build or start virtual machines. Keeping track of virtual machines has become an industry in and of itself. A hypervisor can be overwhelmed with guests very quickly if care is not taken. Operating system costs can also rise, since each guest must run a licensed copy of Windows or another OS.