Virtualization Management?

Forum: Linux
Total Replies: 2
Author Content
techiem2

May 26, 2010
10:28 AM EDT
As I've been waiting for my disks to arrive (the final disk arrives today, yay!) so I can fix up the storage LVM on my VM server, I've been wondering whether there is a good VM management system I could use on the server.

I would like something web-based (no X on the server) that would handle managing my VMs. It would need to support KVM and the relevant options, such as assigning both a disk image and a physical disk device/LVM volume to a VM - the way I'm doing things manually now. I would also need to be able to add my already-existing Linux/Windows VMs into it. Ideally it would be something I could install on my Gentoo setup as-is, but I'm not opposed to trying a specialized distro that meets the criteria (as long as I could install it in a VM first to test it).
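For reference, the manual approach I'm talking about is roughly a launch script like the one below: the VM gets a qcow2 image for its OS and a raw LVM logical volume as a second disk. All the paths, names, and sizes here are made up for illustration - adjust for your own setup:

```shell
#!/bin/sh
# Hypothetical manual KVM launch script for a fileserver guest.
# DISK_IMG: qcow2 image holding the guest OS (example path).
# DATA_LV:  raw LVM logical volume passed through as a second disk.
DISK_IMG=/vm/disks/fileserver.qcow2
DATA_LV=/dev/vg_storage/lv_data

qemu-system-x86_64 -enable-kvm \
    -m 2048 -smp 2 \
    -drive file="$DISK_IMG",format=qcow2,if=virtio \
    -drive file="$DATA_LV",format=raw,if=virtio \
    -net nic,model=virtio -net user \
    -vnc :1 \
    -daemonize
```

Any management tool would need to cope with both kinds of -drive arguments (file-backed image and raw block device) to take over VMs started this way.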

Anyone have any experience/recommendations? I'd like to get my VMs managed more properly.

Thanks.
matteverywhere

May 29, 2010
9:28 AM EDT
Hi techiem2,

Have you looked at openQRM (http://www.openqrm.com)? It can do transparent P2V, V2P, and V2V migrations, supports multiple virtualization technologies (Xen, KVM, VMware, Citrix, etc.), and is fully web-based. openQRM also has a Cloud add-on that lets you construct your VMs via drag-and-drop.

Please find a teaser on YouTube -> http://www.youtube.com/watch?v=HPtcEcjlni0

enjoy + have a great day,

Matt
techiem2

May 29, 2010
11:12 AM EDT
I've actually got that running on my server now.

I just haven't been able to figure out exactly how to:

1. Get a new working KVM VM up and connect to it

2. Pull in my existing KVM machines and their disks

The documentation and howtos I've found generally seem to be a bit lacking in completeness.

The "normal" way it apparently wants to do things is to store the kernels on itself and push them to VMs when they PXE boot. However, I know I don't want that, because my VMs all vary greatly in their setups, and I don't want to PXE boot them - just boot them as normal KVM VMs. I'm sure it's a great method for some virtualization setups, just not for mine.

Besides the fact that I already have an LTSP server set up for PXE boot clients. hehe.

*Edit* As an add-on (which might help anyone else thinking about this), my current setup is basically:

Firewall/router/DHCP/DNS/etc. server box (it currently has PXE boot clients pointing to the LTSP KVM machine's IP)

VM server box, which currently has:

- A RAID/LVM setup to store the main system. One of the partitions holds the launch scripts and virtual disks for the KVM machines (most of the machines have just one virtual disk; I think one might have two).

- An LVM setup (currently 4 disks in 2 RAID1 arrays as the 2 PVs) whose LV I pass to the fileserver KVM machine as the secondary disk (the storage area disk). The fileserver VM has a virtual disk for the OS and then access to the LV as the secondary disk, which it uses for fileserver data storage shared with the LAN.

- KVM machines that are a mix of (primarily) Gentoo, Debian, and Windows. The *nix machines tend to vary quite a bit in their kernel and system setups, and at this point most of them are running 32-bit distro installs.
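In case it helps, the storage side of that could be recreated with something like the commands below. The device names are hypothetical - /dev/md2 and /dev/md3 stand in for my two RAID1 arrays, and the VG/LV names are just examples:

```shell
# Hypothetical sketch of the LVM layout: two RAID1 md arrays as
# physical volumes, one volume group across both, and a single
# logical volume that gets handed to the fileserver VM as a raw disk.
pvcreate /dev/md2 /dev/md3          # mark both arrays as LVM PVs
vgcreate vg_storage /dev/md2 /dev/md3   # pool them into one VG
lvcreate -n lv_data -l 100%FREE vg_storage  # one big LV for file storage

# The resulting /dev/vg_storage/lv_data is what the fileserver VM
# sees as its secondary disk.
```

Whatever management tool I end up with would need to understand that the LV belongs to a specific guest, not to the host.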

I also have a spare RAID1 array of two 320GB disks (that array was the fileserver volume until I just replaced it with the new 4-disk setup - 1TB RAID1 and 750GB RAID1, a nice upgrade) that I can use for testing/migration.
