Xen help needed

Forum: Linux
Total Replies: 2
hkwint

Dec 08, 2008
5:30 PM EDT
Hi Fellow LXerers,

LXer user Kanoie sent me the following question, which I cannot fully answer because I have no experience with Xen. Maybe someone does and can help this user out?

Quoting: Hi, this is regarding the article at http://lxer.com/module/forums/t/24435/ I got a new HP DL120 G5 with two 1000 GB disks. I wish to run Xen to have at least 4 virtual servers, and would like to implement this: RAID + LVM + LUKS + Xen. I wish to use RAID1. But I am a bit confused about the procedure. How do I start? Do I first boot using a bootable CD and then partition the disks using fdisk, creating a 256 MB partition for /boot (ext3 or ReiserFS?)? Should the remainder go to LVM, or can I create two more partitions for Xen dom0: 1.5 GB as / and 300 MB as swap, with the remainder as LVM? And how do I create the LVM physical volume (PV)? Is it done using this: pvcreate /dev/hdaX /dev/hdbX and then pvscan? Also, there is something called an LVM volume group (VG), as per http://www.walkernews.net/2007/07/02/how-to-create-linux-lvm...

Also, the post said "partition with fdisk so that you have two 'Raid Autodetect' type partitions for each of the drives" - I couldn't follow the RAID autodetect part - how do I enable this?

After the above step, do I now use mdadm to create the RAID1 array? mdadm --create /dev/md6 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

Once this is done, I then install Debian, Xen and xen-tools on the 1.5 GB partition I created. Am I correct?

If not could you help me with the whole procedure?

Thanks, Regards, Amit
Sander_Marechal

Dec 08, 2008
5:32 PM EDT
I got the same message and asked him to repost on the forum. I also pointed him to this recently published story: http://lxer.com/module/newswire/view/113425/index.html
hkwint

Dec 08, 2008
6:11 PM EDT
Well Amit, apart from Xen I can probably help you.

RAID, LVM and LUKS are like building blocks which you can stack in different orders. You can put RAID on top of LVM and LVM on top of LUKS, or you can first encrypt a hard drive using LUKS, then use LVM to add that LUKS device to a 'container': a volume group, and after that implement RAID on top of that. This provides great flexibility, but it also makes your task a bit harder, because you have to find out which order you prefer.

LVM works like this: you have physical volumes; these are real partitions on your real hard disk. Normally, you assign those partitions to LVM using fdisk or cfdisk by setting the partition type to "8E". This distinguishes the volume from normal ext3 / ReiserFS etc. volumes in your partition table.

Partitions like sda1, hdb2 etc. can be assigned type "8E", and without formatting them, those partitions become available to LVM. However, without telling LVM that it may use those physical partitions, it cannot use them, because it doesn't "know" about them. Therefore, you have to create a physical volume on such a partition. It is a kind of 'flag' to let LVM know that it may use that particular partition. The manual of 'pvcreate' can be found at http://linux.about.com/library/cmd/blcmdl8_pvcreate.htm

'pvscan' is a command that lets LVM look for the flags you placed on your physical partitions using "pvcreate". So 'pvscan' finds every physical partition LVM is allowed to use.
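
Put together, a minimal sketch (the device names /dev/sda3 and /dev/sdb3 are just placeholders for your own partitions):

    # Flag two real partitions as LVM physical volumes
    pvcreate /dev/sda3 /dev/sdb3
    # Let LVM list every physical volume it is allowed to use
    pvscan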

You can create as many physical volumes as you like. They are not required to have a name, because each is only an existing partition with a kind of flag which ensures LVM knows it may use that particular volume.

These physical volumes can be put in 'containers'. Let's imagine you have four large shipping containers: your partitions. You can tell LVM it may use three of them by making physical volumes on those three. Because the fourth is not an LVM physical volume (it doesn't have type 8E; it might just be an ext2 partition or so), LVM knows it is not allowed to use that one. Now imagine LVM puts those three shipping containers next to each other and removes the two 'inner' walls between them, creating one big container out of the three. That's what 'vgcreate' does: it creates a 'volume group' (VG) of physical volumes, or physical partitions. It is virtual because in reality LVM can't put those three containers next to each other. However, LVM (or better: Linux) has a clever way to make your operating system think those physical containers are next to each other and have no walls, while in reality they are not next to each other; they are disconnected. That's why it's called a "group".
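
For example, a sketch under the same assumptions as above (the name 'vg1' is just an example):

    # Join three physical volumes into one volume group called 'vg1'
    vgcreate vg1 /dev/sda3 /dev/sdb3 /dev/sdc3
    # Show the volume groups LVM now knows about
    vgdisplay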

You can make multiple groups. This means you could use three physical partitions to create group VG1 and two other physical partitions to create VG2. It's up to you. Those "volume groups" are required to have a name, to distinguish them from one another.

Now let's say you have a group VG1 which consists of three physical containers. Those three physical containers are 'joined as one' by the volume group. Out of this volume group, you can carve 'virtual partitions'. Let's imagine the three shipping containers with the walls between them removed. It's now one large room, and by placing some walls you can make five rooms in your 'connected shipping containers', or almost any number of rooms you like, provided there's enough space.

These virtual partitions are called 'logical volumes'. You make them by using lvcreate.
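
For instance (the size and the name 'dom0root' are only an illustration):

    # Carve a 1.5 GB logical volume named 'dom0root' out of volume group 'vg1'
    lvcreate -L 1536M -n dom0root vg1
    # It shows up as /dev/vg1/dom0root (or /dev/mapper/vg1-dom0root)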

On top of these logical volumes, which your OS sees just as if they were real 'physical partitions', you can make filesystems, using mke2fs, mkreiserfs etc.
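
Continuing the hypothetical example from above:

    # Put an ext3 filesystem (ext2 plus journal) on the new logical volume
    mke2fs -j /dev/vg1/dom0root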

Now, I mentioned the thing in Linux that makes it think some non-connected hard disk space is connected. That part of the kernel is actually called the device-mapper, and it comes in handy for explaining what happens here: it can make a 'virtual' partition look like a 'physical' partition to Linux.

This is the same as what LUKS does: it takes a 'physical' partition, encrypts the data on top of it, and what you actually use is the 'virtual' partition LUKS creates, also via the device-mapper. However, because of the device-mapper, Linux doesn't know the 'virtual' partition made by LUKS is virtual and not physical. And that's where it becomes confusing: you can use 'pvcreate' to place an LVM-flag on the 'virtual' partition created by LUKS, because Linux doesn't know the partition created by LUKS isn't a physical one; the device-mapper takes care of this.
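
A minimal sketch, assuming /dev/sda3 is the partition you want to encrypt and 'crypted' is just a name I made up:

    # Encrypt the partition and set a passphrase
    cryptsetup luksFormat /dev/sda3
    # Open it; the decrypted 'virtual' partition appears as /dev/mapper/crypted
    cryptsetup luksOpen /dev/sda3 crypted
    # LVM happily accepts that virtual partition as a physical volume
    pvcreate /dev/mapper/crypted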

The same goes for RAID. You can put two disks together using RAID, and Linux thinks the resulting 'md0' or 'md1', which in fact is a 'virtual partition', is a physical partition. Therefore, you can use 'pvcreate' to add an LVM-flag to your md0 array, or you can use 'logical volumes' (virtual partitions) created by LVM as parts of your RAID array. You can add 'virtual partitions' created by LUKS to RAID or LVM, or you can use 'virtual' partitions created by RAID or LVM as a 'physical' partition to put LUKS on top of, and LUKS will again place another virtual partition on top of it. You can go on and on this way.
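
For example (device names are placeholders again; compare the mdadm command from your question):

    # Mirror two partitions into one RAID1 array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    # Linux treats /dev/md0 as an ordinary partition, so LVM may use it
    pvcreate /dev/md0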

The only important remark here is: Your boot-scripts have to support it.

Back when I was using this, I had 'LUKS' (actually LUKS didn't exist back then, but the idea is the same) on top of LVM, while Gentoo at the time assumed you would only use LVM on top of encrypted partitions. Nowadays, however, this issue is solved.

Regarding your boot partition: most boot loaders - including GRUB - don't understand LVM, RAID or LUKS 'virtual' partitions; they only understand real partitions. Therefore you should place /boot (and the kernel) on a real physical partition, apart from LVM etc.
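
So one possible layout for your two disks - just a sketch with the sizes you suggested, not the only correct scheme - might be:

    /dev/sda1  256 MB     type 83  /boot, ext3, outside LVM
    /dev/sda2  remainder  type fd  first half of the RAID1 array
    /dev/sdb1  256 MB     type 83  optional spare copy of /boot
    /dev/sdb2  remainder  type fd  second half of the RAID1 array

with /, swap and the Xen guest volumes carved out of LVM on top of the RAID array.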

"RAID autodetect" is also a flag placed on a partition which tells "mdadm" it may use that partition for software RAID. You do this by setting the type to 'fd', the HEX-code understood by partitioning-programs meaning 'Linux software RAID'.

Hope this helps. If not, let us know.
