"Best" disk setup for the fileserver vm?

Forum: Linux
Total Replies: 13
techiem2

May 16, 2010
6:21 PM EDT
I am rethinking the disk setup for my fileserver virtual machine. Currently the base machine is raiding the 2 fileserver disks, has an lvm volume group on the array, then passes the lvm logical volume to the virtual machine as a disk. The volume is currently formatted ext3.

I am not sure if this is the best or not, as I am still not very familiar with the workings of everything (especially the lvm stuff).

My goal (which the current raid/lvm setup may accomplish, I just don't know how): Be able to add more disks to the array easily. Be able to expand the available disk space as more disks are added. The array is defined as a raid10 array and currently contains 2 300GB disks.

This week I have a 4 disk external case coming that I am planning to move those 2 disks into, along with a new 1TB disk and the 750GB disk I have around.

Obviously I would like to add the 2 new disks into the array, and expand the space appropriately. I am just not sure the best way to go about adding them to the array and to the disk space setup.

Should I be tweaking/expanding the current setup and still pass the lvm to the vm? Should I be doing something totally different with the disk setup to pass to the vm?

I eventually plan to replace the 2 300GB disks with larger ones, then replace the 750GB with something bigger, then eventually add another external case with more disks as I need more space (obviously the ideal would be to be using all disks the same size).

Redoing the complete disk setup is an option, as I'm pretty sure I have enough space elsewhere to temporarily move stuff around if I need to.

Thanks!

*Edit* From looking some more I should probably be passing the volume group to the vm rather than the logical volume - If I should be doing it this way. *End Edit*

*AnotherEdit* Should I be using something like iSCSI to export the disk(s) to the vm instead of direct passing? Would it make a difference? *End AnotherEdit*
devnet

May 16, 2010
11:01 PM EDT


I think you're complicating things...I have 3.3 TB set up for my home network with file duplication using Greyhole, sharing everything with Samba since I have a mixed environment.

I use Amahi to do that and I can add and remove drives at the drop of a hat. I'm not sure why you would purchase all the hardware you'll need for such a complicated setup at home.

There's a video about what Amahi does here: http://blog.amahi.org/2010/04/27/amahi-5-3-storage-pooling-r...

Check it out and see if it fits what you're looking for...if this is for a work environment, just ignore me. If not, this may be worth your time.
techiem2

May 16, 2010
11:24 PM EDT
I want the raid for the duplication/performance/etc. My home network is probably more complex than many small businesses. :P

I'm mainly trying to figure out what to do on top of whatever arrays I make of the disks.

I'll take a look at amahi and see if that is something that would make things a bit easier.

techiem2

May 16, 2010
11:27 PM EDT
It looks like Amahi is its own standalone product, not something that can be integrated into my existing fileserver vm. It appears to be based on samba, which means it wouldn't work well for my setup.
chalbersma

May 17, 2010
2:51 AM EDT
Perhaps you should look into ZFS.
Sander_Marechal

May 17, 2010
6:49 AM EDT
@Techiem2: Your current raid10 + LVM setup allows you to add disks easily. The only thing you need to look out for is that you always add disks in pairs and of the same size. You now have an extra 750G and an extra 1TB drive. I would not add those together. You will lose 250G that way (the two disks become a raid1 pair of 750G, which will be added to the 300G raid10 you have now, making for a total of 1050G).

Since the raid10 is already a physical volume for LVM it will automatically grow.

But, if I were you, I would do things slightly differently. Instead of making one big raid10 array as a single LVM PV, I would make several raid1 arrays and add them all as individual PVs to the LVM. This way you will be able to remove pairs of disks as well. LVM has a migrate command (pvmove). With that command you can move data away from certain PVs (if there is enough free space). An example:

You have a 2x300G raid1 as PV1. You have a 750G raid1 as PV2. Four disks total, so your case is full. Now you get two 1TB disks. With LVM migrate, you can tell LVM to move all data away from PV1. Then you remove PV1, destroy the raid1 and take out the disks. Then you insert your new 1TB disks, raid1 them together and add those as PV3 to your LVM.
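
For illustration, setting up two such pairs as separate PVs might look roughly like this (device names and the volume group name are only examples):

# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde /dev/sdf
# pvcreate /dev/md2 /dev/md3
# vgcreate vg_files /dev/md2 /dev/md3

Draining one pair later is then just a pvmove on that PV, followed by taking it out of the volume group.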

As for filesystems on top of LVM, I suggest you benchmark yourself. Standard disk benchmarks are all fine and dandy, but they tell you nothing about the kind of workload you have. Take a look at this article of mine, in which I tested various filesystems on RAID/LVM using tests that mimic my own workload:

http://www.jejik.com/articles/2008/04/benchmarking_linux_fil...
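
To give you a rough idea of what I mean by a workload-specific test (paths are only examples, and you want to drop the page cache between runs):

# dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=2048 conv=fdatasync
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/mnt/test/bigfile of=/dev/null bs=1M
# time cp -a /path/to/some/real/music /mnt/test/music
# time rm -rf /mnt/test/music /mnt/test/bigfile

Sequential write, read back, then copy and delete a chunk of your real data. Run the same set on each filesystem you are considering and compare.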
techiem2

May 17, 2010
6:28 PM EDT
After looking around, I've found that I have a 750GB disk in this machine.

So I could easily swap that out for the coming 1TB.

So maybe I'll add the 2 750GBs as a RAID1 and add them as another PV to the VG.

No use doing anything with the array of 300GBs since they are already basically acting as a RAID1.

I'll probably move the data off and redo it all anyway, since the passing to the vm and the partitioning are currently done at the wrong level.
techiem2

May 17, 2010
10:01 PM EDT
From looking at your benchmarks I'm thinking XFS might be best for my setup too. The fileserver primarily holds my audio collection and pictures (mostly 500K+), along with some downloads, ISOs, backups, etc. It may eventually hold my MythTV data as well (maybe swap the 1.5TB disk in the myth box for one of the 300GB ones once I can get a second 1.5TB to pair with it, replace the 300GB array, and move the mythtv data to the fileserver).
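
If I go that route, I'm assuming making the XFS filesystem with 4k blocks on a logical volume would be something like this (vg/lv names and mount point are just placeholders):

# lvcreate -L 500G -n data vg_files
# mkfs.xfs -b size=4096 /dev/vg_files/data
# mount /dev/vg_files/data /srv/files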
gus3

May 17, 2010
10:11 PM EDT
I've read good reports (okay, one good report) about MythTV using JFS for its media storage. Recording and playback require only so many Mb/s access, which any modern controller/drive can supply. However, deleting large files is something JFS is very good at.

EDIT: After some quick testing, ext3 is noticeably slower than both JFS and XFS at deleting gigabyte files. Then again, on my desktop system, what's the difference between 50ms and 10ms?
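
If anyone wants to repeat that kind of quick test, something along these lines should do (file name and size are arbitrary):

# dd if=/dev/zero of=bigfile bs=1M count=1024 conv=fdatasync
# time rm bigfile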
techiem2

May 17, 2010
11:45 PM EDT
So if I'm understanding it all right (and it's possible I have some things in the wrong order/done on the wrong machine [vm server or fileserver vm] - I assume you'll correct me):

01. Move all data off existing 300GB lvm and shut down fileserver vm
02. Delete lvm setup and raid array
03. Remove 300GB disks
04. Get system disks as sda/sdb (they currently aren't due to oddness in system detection order - want to fix that to make life easier)
05. Connect 300GB disks in carrier
06. Arrange so system disks are still sda/sdb
07. Create raid1 array from 300GB disks
08. Create lvm vg with 300GB array as a pv
09. Pass lvm vg to fileserver vm as a disk
10. Create logical volume(s) on the disk on fileserver as XFS 4k blocks (I assume this could be done at vm server level as well)
11. Connect 750GB disks in carrier
12. Create raid1 array from 750GB disks

(Do some/all of these need to be done with the fileserver vm offline? i.e. would it see that its disk has grown without a reboot?)

13. Add 750GB raid array to lvm as another pv
14. Extend lvm
15. Extend xfs
16. Put desired data back on fileserver
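
I'm guessing steps 12-15 would boil down to roughly this (vg/lv names and mount point are placeholders, and I assume the xfs has to be mounted to grow it - correct me if I'm wrong):

# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde /dev/sdf
# pvcreate /dev/md3
# vgextend vg_files /dev/md3
# lvextend -l +100%FREE /dev/vg_files/data
# xfs_growfs /srv/files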
Sander_Marechal

May 18, 2010
2:15 AM EDT
I would do it slightly differently. I would first add the two 750G disks as RAID1 and add it as a PV, then use LVM migrate to move the data from the 300G array to the 750G array, then move the 300G disks from the machine to the drive case. This way you do not necessarily need to back up your current 300G array (though it's smart to do so anyway).

Also, I would suggest sticking to ext3 for the OS and just using XFS for your data partitions. The OS is still mostly many small files.
techiem2

May 18, 2010
8:09 AM EDT
Ah. I might try that then.

I'm not planning to do anything to the OS. It's on a separate set of raid/lvm volumes on the other 2 300GB disks.
techiem2

May 19, 2010
7:59 PM EDT
Well, my enclosure and new disk are here. I'm currently dumping stuff from one of the 750GB disks to the new 1TB disk. So sometime in the next couple days I'll be messing with the raid/lvm stuff (i.e. once I finish moving data off everything so I can start messing with the disks and have time to play).

Another Question for the future: Say I have my lvm setup with md-fs0 (750GB raid1) and md-fs1 (300GB raid1) as PVs in the VG, which is then partitioned appropriately with xfs filesystems (1 or more, depending on how I partition the VG). Now say I want to swap out the 300GB disks for a couple new 1.5TB disks or some such.

How does one go about that with the xfs-lvm-raid setup?

Sander_Marechal

May 20, 2010
5:13 AM EDT
First, use vgdisplay to see how much free space there is. If you have 300G free (unused extents) then it's easy. Simply tell LVM to move all used extents on md-fs1 to other disks:

# pvmove /dev/md-fs1


Then you can pvremove md-fs1, stop the array and remove the disks. Insert the new 1.5T disks, create a raid1 array, use pvcreate to make it a PV and use vgextend to add it to your volume group. All done.

If you don't have 300G free then it's a little harder. You could try to free up some space so you can follow the above instructions. Alternatively, if you have some room inside the case, you can first add the 1.5T disks in the case, extend the volume group, pvmove the data off the 300G disks (which will end up on the 1.5T disks) and then remove the 300G disks. After this, you can move the 1.5T disks from the case to the enclosure (this should work thanks to the UUIDs that mdadm uses for raid arrays; it doesn't matter if you move or rearrange the disks).
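
So, roughly, the whole swap comes down to something like this (the volume group name is just an example; note the vgreduce to take the PV out of the volume group before pvremove):

# pvmove /dev/md-fs1
# vgreduce vg_files /dev/md-fs1
# pvremove /dev/md-fs1
# mdadm --stop /dev/md-fs1

Then swap in the 1.5T disks, build a new raid1 pair on them (say md-fs2) and:

# pvcreate /dev/md-fs2
# vgextend vg_files /dev/md-fs2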

Hope this helps!
