OS Virtualization: An Introduction

Posted by dotmil on Nov 28, 2005 10:44 AM EDT
DebCentral; By Josh King

One of the hottest topics in all of IT today is the subject of virtualization. While it has been around for some time, it has only recently started to garner the attention of the biggest names in tech. Everyone from Intel and AMD to Microsoft, Sun, and virtually every commercial Linux vendor has either current or planned support for virtualization. So what is it, and why is everyone so head over heels about it?

Virtualization comes chiefly in two forms: hardware emulation and software (or OS-level) virtualization. The best known is likely hardware emulation. In this type of virtualization, the host OS provides a layer that translates the usual system functions of the guest OS.

Take, for example, VMware running on Linux with a Windows guest running inside the application. In this situation, VMware intercepts the calls Windows makes to the actual physical hardware and translates those calls into a form the Linux kernel can understand.

So if Windows says it needs access to the BIOS or the video card, VMware steps in, takes the message, and acts as a translator, asking the Linux kernel to please provide the needed information or run the process. Once that information or process is complete, VMware must carry the translation back in the other direction, from Linux to Windows.
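That trap-and-translate loop can be sketched in a few lines of Python. This is purely illustrative; the function and handler names are hypothetical, not VMware's actual internals.

```python
# Illustrative sketch of the trap-and-translate loop an emulator performs.
# All names here are hypothetical; this is not any real emulator's API.

# Table mapping a guest hardware request to the host-side routine that
# can actually satisfy it on the real hardware.
HOST_HANDLERS = {
    "read_bios": lambda: {"vendor": "HostBIOS", "version": "1.0"},
    "query_video": lambda: {"vram_mb": 64},
}

def handle_guest_call(request):
    """Intercept a guest hardware call, run the host-side equivalent,
    and translate the result back into a reply for the guest."""
    handler = HOST_HANDLERS.get(request)
    if handler is None:
        raise ValueError(f"unsupported guest request: {request}")
    host_result = handler()  # the host kernel does the real work
    # Translate back in the other direction, host -> guest:
    return {"source": "emulator", **host_result}

print(handle_guest_call("read_bios"))
```

Every round trip through a table like this is work a native OS would never do, which is exactly where emulation's overhead comes from.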

The guest OS is provided with a complete virtual environment in which to carry out its duties. This environment can even be made to emulate hardware that the host OS has no access to (such as a PDP-10 emulator allowing ancient Unix code to run on top of Linux). As you can see, this type of virtualization has one inherent flaw: guest OSes will never be as fast or responsive as the host, due to the translation that must occur.

This limitation of emulation has been worked around and reduced to a small enough issue that many still rely on such solutions. The issue of speed, or the lack thereof, was much more significant in the past due to physically slower hardware. Even with today's hardware, though, the difference can be quite noticeable for some applications. Therefore, this type of virtualization has always been considered by many as more of a workaround than a perfect solution. Luckily, virtualization technology did not stop with emulation. But we'll get back to that shortly.

The second main type is OS-level, or software, virtualization. This is a newer approach, and it is much more efficient than the older method of hardware emulation we talked about above. Examples of this type of virtualization can be seen in applications such as Xen, Virtuozzo, User-Mode Linux, Linux-VServer, and Solaris Zones.

This type of virtualization also uses a base, or host, OS, but with one major difference. Instead of sharing processes by translating them from the guest, running them on the host, and returning the result, this approach provides each guest its own environment with direct access to the hardware. So if a guest OS needed to access the video card, it would do so with its own native code, without waiting for the host kernel to run the call for it.

Processes are kept as separate from each other as possible, and do not interdepend on each other as they do with emulation. However, another design goal is to share the software that is common to all guest and host OSes. So if you're running Linux on Linux, some portions of the OS will be common to both, and it is more efficient to share those portions than to duplicate them over and over again. This software approach has proven to be extremely well received and much more performance-oriented than the emulation of old.
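The savings from sharing common OS layers are easy to see with some back-of-the-envelope arithmetic. The sizes below are invented for illustration; real numbers vary by distribution and workload.

```python
# Why sharing common OS layers saves space: hypothetical sizes, in MB.
# Identical base files are stored once; each guest keeps only its own data.

BASE_OS_MB = 300           # files common to the host and every Linux guest
PRIVATE_MB_PER_GUEST = 50  # each guest's own configuration and data
GUESTS = 10

# Full duplication: every guest carries its own complete copy of the base.
duplicated = GUESTS * (BASE_OS_MB + PRIVATE_MB_PER_GUEST)

# Shared base: one copy of the base, plus each guest's private portion.
shared = BASE_OS_MB + GUESTS * PRIVATE_MB_PER_GUEST

print(duplicated, shared)  # 3500 800
```

With ten guests, the shared approach needs less than a quarter of the space, and the gap widens as guests are added.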

Today, it is the standard method for serious virtualization. Almost every modern OS sold, created, or used today can be used with or is equipped for this type of virtualization.

The next step in this evolution is to go lower still and do "real" hardware virtualization. Intel and AMD have both been developing processors that allow multiple OSes to run inside partitions on the system, requiring no additional software and no emulation of any kind. Earlier this week, Intel was the first to announce the availability of this type of virtualization for desktops (it has been available in Xeons for some time now) with Intel Virtualization Technology (VT).

These processors are currently single-core Pentiums with Hyper-Threading, but they will be replaced with dual-core, Hyper-Threaded, VT-enabled processors next year. AMD is planning to roll out its line of virtualization-equipped processors in multiple-core designs very shortly.

These chips function by providing a layer, called a hypervisor, that allows multiple OSes or applications to utilize the host hardware to its full potential without tripping over each other or attempting to share a single memory address among OSes.
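The core idea of that memory partitioning can be sketched as a toy model: the hypervisor hands each guest a disjoint slice of physical memory, so no two guests ever touch the same address. Everything below is a simplified illustration, not how any real hypervisor is implemented.

```python
# Toy model of a hypervisor partitioning physical memory among guests.
# Hypothetical sketch: real hypervisors do far more (paging, protection,
# scheduling), but the "disjoint slices" idea is the same.

def partition_memory(total_pages, guests):
    """Split a flat range of physical page numbers into disjoint,
    equal-sized per-guest slices."""
    per_guest = total_pages // len(guests)
    layout = {}
    start = 0
    for name in guests:
        layout[name] = range(start, start + per_guest)
        start += per_guest
    return layout

layout = partition_memory(1024, ["linux", "windows"])
print(layout["linux"])    # range(0, 512)
print(layout["windows"])  # range(512, 1024)
```

Because the slices never overlap, a bug or crash in one guest cannot scribble over another guest's memory; that isolation is what lets the guests run side by side without emulation.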

Something you will often see mentioned along with this newest style of virtualization is the blade server. A blade server is basically a larger server chassis that houses many servers inside it. Think of it this way: imagine a standard rack-mount server (maybe 4U in size); that server houses one machine.

Now take that same chassis, but add special hardware that allows "blades", basically servers on a circuit board, to be plugged in or swapped around. Now instead of having one machine in that 4U space, you may have 10 or more. This in itself can lead to huge cost savings, but add virtualization of the OSes on those servers and you once again multiply the available options while drastically reducing space and power consumption. For more on blade servers, see the Wikipedia discussion of the topic.

So, that's the lowdown in a nutshell on what virtualization is. But now the question is, why would you want this? What can you do with it that is of any benefit other than raising your geek points? Actually, there are several huge reasons why many businesses and nearly all operating system and processor manufacturers are making virtualization a priority for next-generation platforms of all kinds.

One of the most obvious benefits of virtualization is that it lowers the need for massive datacenters. Instead of having a dedicated machine for serving applications, one for intranet/internet serving, another for mail, etc., those functions can be combined into one sufficiently powerful machine. This is done by running multiple guest OS instances on the same hardware.

So even though all those services may be located on the same physical machine, they are each running in their own dedicated and protected OS environment. The virtualization application is what makes this happen. It allows you to have your mail and web running on Red Hat, and your database running on Solaris, on the same physical machine but totally independent of each other. The virtualization provides each of them with access to the processors, dedicated and protected memory, and all other physical hardware located within the machine.
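The consolidation arithmetic is simple enough to sketch. The mapping below is a made-up example in the spirit of the Red Hat/Solaris scenario above; the guest names and counts are hypothetical.

```python
# Hypothetical consolidation sketch: services that once needed dedicated
# physical servers, mapped onto guest OSes sharing one machine.

one_physical_machine = {
    "redhat-guest": ["mail", "web"],    # Red Hat guest runs mail and web
    "solaris-guest": ["database"],      # Solaris guest runs the database
}

def machines_needed(guests):
    """Compare server counts: one dedicated box per service versus
    a single box hosting every guest."""
    dedicated = sum(len(services) for services in guests.values())
    consolidated = 1
    return dedicated, consolidated

print(machines_needed(one_physical_machine))  # (3, 1)
```

Three services, one machine; scale that across a datacenter and the hardware, power, and cooling savings add up quickly.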

Add virtualization to any of the numerous blade server offerings available, and an enterprise datacenter that used to need an entire floor of a building may now need just a small corner of one room. This all adds up to real savings not only in the cost of the physical hardware, but also in power, HVAC, and other infrastructure costs. Also, fewer machines to manage means fewer employees dedicated solely to administering systems. Those human assets can then be redeployed in areas more critical or core to the business.

From a development standpoint, virtualized OSes again offer many tempting benefits. Development teams can have easy access to "sandboxes" for pre-deployment testing. Also, because virtualization separates the management of physical and virtual assets, development teams are free to continue working as always, no matter what physical changes are made on the back end of the business.

Disaster recovery is greatly enhanced by virtualization. Disaster recovery means getting your data and services back exactly as they were, even when the underlying physical hardware has changed. Since your virtualized systems aren't tied to any one hardware configuration, getting back up and running on a totally new hardware layer will be much simpler than with a non-virtualized platform. Once you separate the physical from the virtual, many such beautiful benefits arise.

Lastly, virtualization gives you the power to choose. Your choices for applications, as well as operating systems, are much less limited. If you know you've always wanted to run "App X" on Solaris, but never had a machine to dedicate to Solaris and didn't want the cost of sometimes pricey Sun hardware, now you can run that same app on a commodity server or blade without investing in an entirely new set of management headaches. With virtualization, the hardware and the software are decoupled, allowing you to manage them independently of one another.

There are some headaches with virtualization, just as with any growing technology. Currently, management tools are somewhat lacking for most OSes. This could place a burden on your IT staff to learn new skills and delve into areas they have not had to previously. However, any competent systems administrator should be up and running fairly well in a relatively short period of time.

The other main issue is that moving to a virtualized environment will require systems operators and network professionals to work in harmony with each other to handle the issues involved in readdressing IPs and virtualizing network services. But again, this issue generally proves relatively minor compared to the benefits.

The largest potential issue is introducing a single point of failure by loading so much onto one machine. This is one of many reasons why farming services out to multiple machines has been the standard for so long. However, this threat too can be lessened by building appropriate backups and redundancy into your overall design.

Even if you're not sure, or not quite ready, to start moving to a virtual environment, this is one technology you will want to keep an eye on for the near future. Imagine entire datacenters running in a single rack in a closet; a business environment that is totally portable across platforms. Virtualization gives you two things most companies are looking for in their technology today: freedom and control. The freedom to manage hardware and software independently, and the control to pay for only what you need. That's the promise of server virtualization, and the time for it is now.

For a more in-depth look at the topics covered here, please see these resources: An Introduction to Virtualization, the Xen virtual machine, Virtuozzo, Microsoft Virtual Server, User-Mode Linux, and VMware.



