Another random RAID related question :P
techiem2 May 03, 2009 3:07 PM EDT
So I'm thinking of turning the fileserver into a virtual machine.
I've been wanting to build a new VM server anyway, so if I have the $$ I'll probably do it.
My current vm server is only serving one vm, and it makes sense to get the fileserver into one as well.
Save some power, etc.
(Current VM server is an Athlon XP 2200+ with 1GB RAM; the current fileserver is a Celeron 766 with 256MB RAM.)

So of course the VM server would have a nice little RAID1 array for hosting the OS, VM images, etc. I'd most likely be using KVM, since it uses qemu for the userland, which is what I've been using already, so I'd be familiar with it (and I have tested it a tad on my box, since it's new enough to have a supported CPU).

What I'm wondering is: at what level should I do the RAIDing for the fileserver's sharing setup if I do this? Would I set up the RAID10 array at the host level and then pass the array device on to the VM for partitioning up with LVM and whatnot (this would seem to make sense to me)? Or would it make more sense to pass all the raw disks to the VM and have it do all the RAID management itself (to me this doesn't make as much sense, since it would use up VM CPU time)?

It would seem it's technically the same either way, but having the host do the RAID management seems more logical to me, since the host would already know about the arrays if something happened to the fileserver VM.
gus3 May 03, 2009 4:53 PM EDT

What kind of VM are you using? Xen? KVM? ... *edit* duh, open eyes, it's KVM.
gus3 May 03, 2009 5:21 PM EDT
Here's my guess, based on my scant understanding of how architecture-based VMs work in Linux.

Inside the guest VM, certain instruction types are trapped, to be handled by the host. I think the instruction in question here is port I/O to the disk controller. When such a trapped instruction is encountered, the VM is stopped, its state is saved into memory, and the VM manager (qemu in your case) receives a signal (not a Unix signal, just a programmatic flag) that a VM has a hardware request. I know Bochs does this trapping in software; my understanding is that qemu and its ilk do it in hardware.

For example, a disk I/O request inside the VM will be translated into a file operation via the VFS. In this way, a Unix regular file can be made to look like a disk partition to the VM, even if the actual file is on NFS or Samba. With a well-designed VM manager, the guest system never knows the difference.

To answer your actual question: it would probably make more sense to manage the RAID in the host, not the VM, if you are using standard Linux volume management tools. This way, the VM generates one trap to the host for each read and write operation, rather than the four you would get by putting RAID management in the guest VM. A VM trap is an extremely expensive operation, even more so than a user-to-kernel call, and this setup would minimize that cost.
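To make the file-as-disk idea concrete, here's a rough sketch of how qemu/KVM can present an ordinary file to the guest as a hard disk (the image name and sizes are just examples):

```shell
# Create a 20GB regular file in raw format; to the guest
# it will look like a real hard disk.
qemu-img create -f raw fileserver.img 20G

# Boot a guest using that file as its first disk, with 512MB RAM.
kvm -hda fileserver.img -m 512
```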
techiem2 May 03, 2009 5:31 PM EDT
Yeah, your explanation makes it even more logical to do things at the host level. You could even do the LVM management there too when you add disks. I figure things would be faster to do at the host level (managing the RAID, tweaking volumes, etc.).

Then all the VM has to know is that sdb1 gets mounted to blah and sdb2 gets mounted to bleh when it boots up. When you boot it back up after adding a new disk and expanding the volume on the host, the VM doesn't need to know or do anything new, it just "magically" has more space to work with. :P

Let the host worry about the RAIDing and let the fileserver VM worry about, well, fileserving.
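A sketch of what that host-side grow might look like with mdadm and the LVM tools (all device and volume names are made up, and this assumes the filesystem sits directly on the virtual disk rather than inside a partition):

```shell
# On the host: add the new disk to the RAID array and grow it.
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=5

# Tell LVM the physical volume got bigger, then extend the
# logical volume that backs the guest's data disk.
pvresize /dev/md0
lvextend -L +100G /dev/vg0/fs-data

# Inside the guest, after rebooting it so it sees the new disk size,
# grow the filesystem to fill the larger disk.
resize2fs /dev/sdb
```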
gus3 May 03, 2009 10:05 PM EDT
Wow, you mean my explanation actually made sense? I'm impressed. I had just woken up from a nap five minutes before I saw your original message, and my mind was in a bit of a blur, as my first comment shows.
Sander_Marechal May 06, 2009 11:26 AM EDT
I agree with gus. Do it in the host. I don't know much about KVM or Qemu, but I know Xen and VirtualBox support LVM volumes as disks for guest VMs; I assume KVM can do the same.

Just lay out your disks in RAID1 or RAID10 like you asked in your other thread. Then use the RAID array as an LVM physical volume. Create LVM volumes for the host root and swap. You could create a volume for /boot as well, but then you'd need to use GRUB 2 or LILO as the bootloader. Then, for each VM guest, create two more LVM volumes: one for the root filesystem and one for swap.

Create more LVM volumes for data storage. For example, you could create another LVM volume on the host, attach it to your fileserver guest and use it to store the shared files.

So, let the host deal with RAID and LVM. The LVM volumes then simply become virtual disks/partitions for your virtual guests.
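A rough sketch of that layout with mdadm and the LVM tools (all device, group and volume names are illustrative, and the kvm invocation is abbreviated):

```shell
# 1. Build a RAID10 array from four disks.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# 2. Put LVM on top of the array.
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# 3. Carve out volumes: host root/swap, then per-guest root/swap/data.
lvcreate -L 10G  -n host-root vg0
lvcreate -L 2G   -n host-swap vg0
lvcreate -L 8G   -n fs-root   vg0
lvcreate -L 1G   -n fs-swap   vg0
lvcreate -L 200G -n fs-data   vg0

# 4. Hand the volumes to the KVM guest as virtual disks.
kvm -drive file=/dev/vg0/fs-root \
    -drive file=/dev/vg0/fs-swap \
    -drive file=/dev/vg0/fs-data ...
```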