RAID + LVM + LUKS + Xen = perfection

Story: How To Resize ext3 Partitions Without Losing Data
Total Replies: 27
swbrown

Jan 09, 2007
6:09 PM EST
Since this article mentions the "Hard Way", the way you were forced to do things in the dark ages of GNU/Linux, I thought I'd mention the "Modern Way".

The ideal setup for a GNU/Linux system, I believe, is to combine RAID + LVM + LUKS + Xen when you first set up the system. That gives you these kinds of features in the future when/if you need them, without an "Oh no, I need to redo my entire system" moment:

1) Data protected from drive failure via RAID1 (RAID).

2) Can expand your partitions by adding more disks without the need to change your partitioning strategy or juggle data around (LVM).

3) Can do offline operations like the Knoppix step mentioned in this article without having to go fully offline (Xen).

4) Can run fully encrypted (root + swap) systems without having to risk accidents causing the system to be unbootable or otherwise deal with the complexity of on-boot decryption (LUKS + Xen).

5) Can run multiple GNU/Linux systems simultaneously and securely, and have the non-encrypted ones boot/resume at startup without a password (Xen).

6) Can reboot the physical hardware without actually rebooting the system running on it (Xen).

So here're the basics - I'll just go for a quick overview rather than a step by step list of commands:

- Say you have two identical drives, hda and hdb.

- Step one is to partition with fdisk so that you have two 'Linux raid autodetect' (type fd) partitions, one 256MiB and one spanning the rest of the disk. Do this to both drives.

- Use mdadm to create a RAID1 device for each partition pair across the two disks. Note that if you are doing this from an existing GNU/Linux OS on one of the disks, you can create the RAID arrays with the special "missing" token to indicate a degraded RAID1 array, then hot-add the second disk (the one with your old existing GNU/Linux OS) later. The RAID subsystem will take care of replicating over the top of that old drive when you do (this is what I did, as I wanted to avoid making these changes via boot disk and to have a backup system to use while troubleshooting).
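As a rough sketch, the degraded-then-complete mdadm dance might look like this (the device and array names are assumptions for illustration; double-check against your own layout before running anything):

```shell
# Create both RAID1 arrays with the second member "missing", so the
# old install on hdb keeps working until you're ready to wipe it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 missing

# Once you've migrated off the old install, hot-add hdb's partitions;
# the kernel resyncs the mirror onto them in the background.
mdadm --add /dev/md0 /dev/hdb1
mdadm --add /dev/md1 /dev/hdb2
cat /proc/mdstat    # watch the resync progress
```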

- Format the 256MiB one as ext3 for use as /boot (required for grub - it can deal with RAID1 but not a lot of other stuff), and set up the rest as LVM.

- Create a 'root' (about 1GiB) and a 'root_swap' (optional) logical volume. This will be for Xen dom0. If you're using the system as a desktop, you might want to make this the larger partition and run X on it. Otherwise, aim to keep this a tiny volume with very little memory, running only OpenSSH, used solely for managing user domains.
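Assuming /dev/md0 is the small array and /dev/md1 the large one, the LVM side might be set up roughly like this (the volume group name "vg0" and the sizes are just examples):

```shell
mkfs.ext3 /dev/md0                  # the 256MiB array becomes /boot
pvcreate /dev/md1                   # the big array becomes an LVM physical volume
vgcreate vg0 /dev/md1
lvcreate -L 1G   -n root      vg0   # dom0 root
lvcreate -L 512M -n root_swap vg0   # optional dom0 swap
mkfs.ext3 /dev/vg0/root
mkswap    /dev/vg0/root_swap
```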

- Use debootstrap to install Etch on the root.

- Mount the root, mount boot under it, chroot into it, mount proc, and install Xen (e.g., the "xen-linux-system-2.6.18-3-xen-686" package) and grub.
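Sketched out, assuming a volume group named vg0 with a 'root' LV and a stock Debian mirror (names and mirror URL are examples), those steps look something like:

```shell
mount /dev/vg0/root /mnt
debootstrap etch /mnt http://ftp.debian.org/debian
mount /dev/md0 /mnt/boot
chroot /mnt /bin/bash
mount -t proc proc /proc
apt-get install xen-linux-system-2.6.18-3-xen-686 grub
```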

- Install mdadm (RAID) on it, run 'mdadm --examine --scan' to get the IDs of your two active arrays, and put them in /etc/mdadm/mdadm.conf.

- Run update-initramfs with options to update all of the installed initramfs images ('-u -k all'). This will add in the config from mdadm.conf so the initial ramfs will assemble the RAID devices before attempting to mount root.
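In command form (run inside the chroot):

```shell
apt-get install mdadm
mdadm --examine --scan >> /etc/mdadm/mdadm.conf   # append ARRAY lines with the array UUIDs
update-initramfs -u -k all                        # rebuild every initramfs with the RAID config
```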

- Set up fstab to use the right devices.

- Boot into Xen domain 0.

- Set up Xen networking however you want. Personally, I set up bridging myself and use "xenbr0" (name it that, as Xen looks for that by default when creating user domains) as the primary outgoing connection. Ignore Xen's bridge scripts, as they likely don't do what you want (e.g., dom0 will be off the network) and they make everything really confusing to follow by renaming all the interfaces. This way, it's just standard bridging and you can add NAT or whatever as usual.
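A hand-rolled bridge in /etc/network/interfaces might look like this (the static address and the eth0 member are assumptions; bridge-utils must be installed):

```
auto xenbr0
iface xenbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
```

With this, dom0 itself lives on xenbr0, and user domains simply get attached to the same bridge.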

- Install xen-tools and set the defaults in the config file in /etc/xen-tools.

- Use the "xen-create-image" script to create your user domains. E.g., if you are using the 'tiny dom0' server approach, this is the domain you want to make huge. I usually have dom0 be Etch (to support all this by default) and use Sarge as my general-purpose user domain. Even though Sarge doesn't support a lot of this stuff, it's all running transparently underneath it at the dom0 level, so it's technically using it anyway. Neat trick.

- Start it up with 'xm create', change whatever DNS stuff you have to point to that domain rather than dom0 (unless you're doing the "dom0 as desktop" approach), and enjoy. :)
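The start-up step, with a hypothetical domain config name:

```shell
xm create /etc/xen/mydomu.cfg   # boot the user domain
xm list                         # confirm it's running
xm console mydomu               # attach to its console if something goes wrong
```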

- If you want a fully encrypted system, create two LVM volumes, one for root and one for swap, and use 'cryptsetup luksFormat' to initialize the encryption (NOTE: be sure your cryptsetup is using essiv mode or there are some security issues - Etch's uses essiv by default). Use 'cryptsetup luksOpen' and give the resulting devices a '_decrypted' suffix. Use these devices when you create your Xen user domain. Don't add these to /etc/crypttab or you'll block dom0 rebooting, as it will need console input. Instead, after a dom0 reboot, log in and manually open the LUKS devices. The benefit of this is that the Xen user domain doesn't know anything about the encryption, so it's shielded from the complexity (you can use any old ghetto GNU/Linux distro).
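A sketch of the encrypted-guest setup, with hypothetical volume and mapper names (the cipher spec shown is the essiv mode mentioned above; Etch defaults to it, but stating it explicitly doesn't hurt):

```shell
lvcreate -L 10G  -n guest_root vg0
lvcreate -L 512M -n guest_swap vg0
cryptsetup --cipher aes-cbc-essiv:sha256 luksFormat /dev/vg0/guest_root
cryptsetup --cipher aes-cbc-essiv:sha256 luksFormat /dev/vg0/guest_swap

# After every dom0 reboot, open these by hand (NOT via /etc/crypttab):
cryptsetup luksOpen /dev/vg0/guest_root guest_root_decrypted
cryptsetup luksOpen /dev/vg0/guest_swap guest_swap_decrypted
# Then point the domU config at /dev/mapper/guest_root_decrypted etc.
```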

- If you're using encrypted Xen user domains, be sure to disable Xen saving the instances to disk in the dom0 partition! By default in Etch, it saves their state on dom0 shutdown in /var/lib/xen/save, if I remember right. There's an /etc/default/xendomains file with the option to disable this.
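The relevant knob, from Etch's packaging (the path and defaults may differ slightly on other releases):

```
# /etc/default/xendomains
XENDOMAINS_SAVE=""        # empty disables saving domU state to disk on shutdown
XENDOMAINS_RESTORE=false
```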

Huzzah, now you have a modern setup capable of dealing with whatever challenge you throw at it. :) It'd be nice if installers started doing this base setup for you, as it's a pain to set up manually, which kinda defeats the point that this isn't the "Hard Way". Debian's installer is getting there - they already have an LVM + LUKS option, and a somewhat working RAID part, but no Xen, which is critical (and a pain to install).

Now when you want to do something like shrink an ext3 filesystem (which has to be done offline, as ext3 doesn't currently support online shrinking), you can simply log into the Xen dom0, stop the user domain with the ext3 partition you want to shrink, shrink it and its LVM container, and restart the domain. No physical console access or Knoppix CD required, and any other systems you're running are unaffected.
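With hypothetical names, the shrink itself might go like this; the crucial part is the order - shrink the filesystem first, then the logical volume, and never let the LV end up smaller than the filesystem:

```shell
xm shutdown mydomu                     # stop the guest that owns the filesystem
e2fsck -f /dev/vg0/mydomu_root         # resize2fs insists on a clean check first
resize2fs /dev/vg0/mydomu_root 8G      # shrink the ext3 filesystem...
lvreduce -L 8G /dev/vg0/mydomu_root    # ...then its LVM container to match
xm create /etc/xen/mydomu.cfg          # restart the domain
```

If you're nervous about the two tools rounding sizes differently, shrink the filesystem a little below the target first, reduce the LV, then run resize2fs again with no size argument to grow the filesystem back to fill the LV exactly.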
tuxchick

Jan 09, 2007
6:33 PM EST
This would make a great LXer feature. Go on, don't be shy!
jimf

Jan 09, 2007
6:40 PM EST
Heh, I'll need you TC to explain all of it, so this simple mind can figure it all out ;-)
azerthoth

Jan 09, 2007
6:43 PM EST
Handy, thanks. It's bookmarked for future reference.

Good Job.
Sander_Marechal

Jan 09, 2007
9:24 PM EST
Great stuff! Definitely make this into an article. Preferably one that's easy to follow for non-experts.

One question: I am in the process of acquiring an old server from work, a dual Xeon 3.2 GHz with hardware RAID (6 x 40 GB). I'm thinking RAID5 would be nice for that (giving me 200 GB of storage). What would change in your setup instructions if I wanted to use hardware RAID5 instead of software RAID1? Or would you suggest something different altogether with 6 x 40 GB hardware RAID?
swbrown

Jan 09, 2007
9:48 PM EST
> Great stuff! Definitely make this into an article. Preferably one that's easy to follow for non-experts.

I'll take a look at doing it this weekend - I'd need to find some spare hardware to test the steps on so I'm not driving anyone over a cliff by accident. :)

> What would change in your setup instructions if I wanted to use hardware RAID5 instead of software RAID1?

If hardware RAID (/real/ hardware RAID), you don't have to deal with the mdadm steps, as Linux will see the RAID controller as SCSI and stick it on /dev/sda. The positive is that you won't be duplicating writes across the PCI bus and can skip all the config steps. The negative is that you'll likely be unable to get any status on the health of the array or manage it without rebooting into the array's BIOS, the data will be tied to that hardware, and it's less flexible.

People debate it all the time, but I'd go with hardware RAID (if it's quality) when available due to the bus bandwidth issue if using a level where writes are heavily duplicated.
Sander_Marechal

Jan 09, 2007
10:58 PM EST
It's a HP ProLiant G3 server, so I assume that's good quality.

Quoting:The negative is that you'll likely be unable to get any status as to the health of the array or manage it without rebooting into the array's BIOS, the data will be tied to that hardware, and it's less flexible.


That doesn't sound too good. The reason I want that server is so I can use Xen and stop worrying about bringing down a server just because I need to upgrade something. Currently I'm running everything off an old HP Netserver dual PII running Etch.

With Xen I hope to split my services up (virtual backup server, virtual music server, virtual webserver, virtual subversion server), and when I need a major upgrade, I can just create a new virtual server, copy the data from the old to the new virtual server, switch, and delete the old virtual server. Zero downtime. So, the notion of rebooting just to check the RAID doesn't seem too pleasant to me.

If I wanted to do software RAID on that new machine (it's fast enough for it), then I would configure the hardware RAID as JBOD and then follow your steps from the first post, right?
swbrown

Jan 10, 2007
1:16 AM EST
> If I wanted to do software RAID on that new machine (it's fast enough for it), then I would configure the hardware RAID as JBOD and then follow your steps from the first post, right?

Aye, you'd just create the second region as RAID5 rather than RAID1 via mdadm. Note that you still need that first 256MiB partition (or whatever size you normally use) for /boot as RAID1 regardless, so as to not wind up with issues with grub (since it doesn't really understand RAID - hence the trick of feeding it a RAID1 partition). One thing I forgot to mention is that in a software RAID config you also want to load grub into the MBR of all involved drives, in case you're rebooting into a situation with the primary failed (run grub, do a 'root (hd0,0)', 'setup (hd0)', and repeat for all drives (hd0, hd1, hd2, etc.)). If you're concerned about unattended boots in a primary-failed scenario, you'll also want to configure the "fallback" option in grub's menu.lst so it'll roll over to a designated fallback drive in the array. I've been too lazy to set up fallback myself, as I couldn't think of a good way to make sure menu.lst stays up to date (since you essentially need to duplicate the 'normal' entry as your backup and modify it to use the alternate drive, then remember to update it whenever the 'normal' entry is updated (which is likely being done automatically by Debian)).
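The grub-on-every-MBR trick, in grub shell form (the disk numbering is an example; repeat the pair of commands for each member of the array):

```
grub> root (hd0,0)    # /boot partition on the first disk
grub> setup (hd0)     # write grub to the first disk's MBR
grub> root (hd1,0)    # repeat for each further member drive
grub> setup (hd1)
grub> quit
```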
Sander_Marechal

Mar 08, 2007
1:51 PM EST
Today I've finally received the machine I wanted! It's an HP ProLiant G3 with 2 x 3.20 GHz Xeon processors, 2 x 512 MB RAM and 6 x 36.4 GB hardware RAID. There's also a CD drive and an Ultrium tape drive in there (which I'll probably never use; I have no tapes and plan to do network backups). Also, I plan to use the hardware RAID instead of software RAID for this. I can still get my RAID monitoring once HP has ported their system management software from Sarge to Etch after Etch's release.

A few questions:

* Would it be smart to take one drive out and set up a 5-disk RAID5 so that I can swap out a faulty drive? Or can I use all 6 drives and temporarily run on 5 drives when one dies, until I find a replacement? I read that with Debian's software RAID you can designate one drive as a spare which will be used when another disk fails. Is that possible with hardware RAID too? (Probably depends on the hardware...)

* Grub's limitation to RAID1 only applies to software RAID (mdadm), right? All grub will see of my RAID5 is plain old /dev/sda? Or should I set up a 256MB RAID1 across all disks for the dom0 /boot anyway with hardware RAID?

swbrown

Mar 08, 2007
11:16 PM EST
> Would it be smart to take one drive out and setup a 5-disk RAID5 so that I can swap out a faulty drive? Or can I use all 6 drives and temporarily run on 5 drives when one dies until I find a replacement?

If I remember right, Google's recent paper on drive reliability showed it was more common in practice than previously believed that two in the set can fail in the same window, so having a hot spare to minimize that window might not be a bad idea.

> Is that possible with hardware RAID too? (probably depends on the hardware...)

Depends on the hardware - check the product info for something talking about hot spares.

> * Grub's limitation to RAID1 only applies to software RAID (mdadm) right?

Yeah, software RAID, fakeraid/ataraid, etc. will be problems, but with real RAID, where the controller appears as a drive at the hardware level, there'll be no problems.
Sander_Marechal

Mar 09, 2007
4:02 AM EST
Thanks :-)

A few other questions if you don't mind, this time regarding Xen.

* Debian/etch has -xen and -vserver-xen packages, but their descriptions are the same. What's the difference (if any)? Is one for dom0 and the other for domU? Or is the first without and the second with hardware VT support?

* I saw that you need to specify the amount of RAM for domU guests. Have you got any rules of thumb about how much this needs to be? Can the sum of all domU's be bigger than the physical RAM (e.g. 3 domU's of 512 MiB RAM running on 1 GiB physical RAM)?

* Related: lots of tutorials about Xen show very low amounts of RAM for domU's (32 or 64 megs for a Sarge domU in some cases). Do domU's simply use less RAM than a regular system, or are those tutorials too conservative/old?
swbrown

Mar 09, 2007
5:01 AM EST
> Debian/etch has -xen and -vserver-xen packages but their descriptions are the same. What's the difference (if any)?

They apparently allow that domU kernel to act as a Virtuozzo/OpenVZ (vserver) kernel as well (a different virtualization system). I'm not really sure who would want to do that, other than hosting companies that sell vservers but want to virtualize their own infrastructure. I'd just ignore the -vserver-xen ones.

> I saw that you need to specify the amount of RAM for domU guests. Have you got any rules of thumb about how much this needs to be?

However much you'd normally give such a system, considering the workload you intend for it. A rule of thumb would be, if you're running out of memory, you didn't give it enough. :) Remember to leave some space for filesystem cache/buffer memory as well. It's really easy to change, and can be done live as well, so it's not critical to have that planned out in detail in advance.

> Can the sum of all DomU's be bigger than the physical RAM (e.g. 3 domU's of 512 MiB RAM running on 1 GiB physical RAM)?

No, you can't overcommit RAM on Xen.

> Related: lots of tutorials about Xen show very low amounts of RAM for domU's (32 or 64 megs for a Sarge domU in some cases). Do domU's simply use less RAM than a regular system, or are those tutorials too conservative/old?

They use the same amount of RAM (or at least close enough to not be worth thinking about) as a regular system. They might have been talking about dom0, which you can shrink down if you're just going to use it for management, like on a headless server (disable pretty much everything else running to save RAM and reduce security risks, make sure no periodic processes are going to run that eat RAM, etc.). If you're partitioning services into security contexts, like having BIND run on its own, you might also want to plan out a stripped-down domU with a small memory allocation, but otherwise, 32MiB isn't going to leave you with a very useful domU.
Sander_Marechal

Mar 10, 2007
5:22 AM EST
Thanks for all the help! I'm starting the install now, so fingers crossed :-)
hkwint

Mar 11, 2007
12:13 PM EST
To swbrown:

When talking about the 'modern way', you should actually use the EVMS 'interface' to manage RAID and LVM(2). It's what I did. EVMS is made by IBM, is open source (free software as far as I know, though I'm not fully sure), and stands for "Enterprise Volume Management System". The manual is really, really long, but everything that can be done with EVMS, and almost every situation an administrator who manages volumes can ever encounter, is probably described in the manual. EVMS is rather modular and uses lots of plugins. For example, filesystems are plugins, so new filesystems can be implemented rather easily. Also, things like snapshot features are a plugin, and so on.

http://evms.sourceforge.net/user_guide/

You might ask me: Why? Well, mainly because it has a GUI, which in this case is a good thing. When working with rather difficult schemes, it may be a very daunting task to use just the CLI, and even an ncurses interface won't do. With the EVMS GUI, you can extend one partition with free space without even unmounting, with only a few mouse clicks! No need to use the rather nasty LVM commands vgcreate/pvcreate (I used them in the past; it's rather a hassle), just click through a few GTK menus! I once thought about writing an article about this (I also had hardware encryption layered over it at that time) and creating some Kivio schemes, because only reading text is just too hard. Here's my current scheme:

There's hda1 (300MB) for / (root), hda2 (2GB) for swap/Windows (note: 2GB seemed right for an OS to me at the time of creation, but Windows XP takes far, far more than that) and hda3 (74GB) for a RAID array. Then there's hdb1 (300MB) for a kind of mirror of /root, hdb2 (2GB) also for Windows, and hdb3 (74GB) for the RAID array again. Grub and the kernel are on hda1, which will finally also include small dirs like /etc, /lib, /bin, /root and /sbin; 300MB actually is enough for this. Nothing is done to hda2 and hdb2, though hda2 might be set up as (encrypted) swap.

After that, hda3 (74GB) and hdb3 (74GB) are added to RAID0 array md0 (EVMS handles this for you, so no need to use the mdadm command). md0 now consists of 148GB. On md0 I then created an LVM container c0 of 148GB. Everything not needed when booting is made into EVMS volumes on container c0. So c0 contains the volumes /dev/evms/usr, /dev/evms/var, /dev/evms/tmp, /dev/evms/opt and /dev/evms/home, and it might contain /dev/evms/swap. To make things easier, hda1, the / (root) partition, is also converted to an EVMS volume: /dev/evms/root (this is mounted at /, not to be confused with /root, which is part of it). The device-mapper makes these seem like normal partitions to Linux, but actually they are nested. You can create filesystems on the EVMS volumes with a few clicks.

If you want, you could create an 'LVM compatible volume' /dev/evms/dm/home_unencr (I hope that's the right path; I don't have it anymore), which is an unencrypted home partition. On top of this, you could create an encrypted filesystem dm/home_encrypted using cryptsetup-luks. At the time I used this, cryptsetup wasn't in EVMS (people were invited to write it themselves back then), but it might be today. If I have the time, and can find some batteries for my mouse, I might upload some EVMS GUI screenshots showing evmsgui and my setup.

Please note, I use a patch for my kernel, because EVMS volumes can't be mounted if non-EVMS volumes are mounted, and the other way around. This actually is a feature in the Linux kernel, but you can turn it off. The patch is hard to find on the internet (oh, I searched and searched for it!), but it is, very handily, just in /usr/share/doc/evms-((version))/kernel/2.{4/6}/. Just so you don't have to search for it for hours...

Anyway, let me know about your complex nested setups; I would be glad to read more real-life experience with them, since my setup is not the smartest way to set things up. After waiting more than half an hour for my RAID0 array to be created, I found out EVMS can also use the stripe feature of LVM2 after the LVM volumes are created. This would mean, instead of the way described above, I could also have made 2 LVM containers, hda3 to c0 and hdb3 to c1, and then just 'striped' my EVMS volumes over c0 and c1. If I had done so, I wouldn't have that md0 of 148GB, which is much too large to handle properly (I actually want to shrink it, to make room for other distros/OSes. Imagine the horror if a 'to be shrunk' RAID0 array contains an LVM volume, which contains EVMS volumes, which contain different filesystems).

After that, managing the bootloader becomes quite chaotic:

(Grub lines)
title Gentoo Linux
kernel (hd0,0)/boot/2.6.16-gentoo-r7 root=/dev/hda1

(fstab line) /dev/evms/root / ext3 noatime 0 1

The patch is needed to make these differences work, but it can probably be done in different ways too.

If somebody wants an article about it with a plethora of EVMS screenshots and a nice Kivio graphic showing the 'graph' of how my setup works, let me know. I thought people wouldn't be interested, since some EVMS howtos describe this complex stuff very well, though they are a bit difficult to grasp at first. Therefore, if I were to write an article about nested filesystems including RAID, encrypted hard drives etc., it would be one aimed at beginners.

BTW, I can't tell you anything about Xen, since I've never worked with it and hardly know what it is or why I would need it. If somebody can explain in simple terms why I would need Xen, or what it can do for me on my one-year-old AMD proc without Pacifica virtualization hardware in it, please be my guest and do so. I'm interested in trying different distros, and maybe even writing a nice review about it.
jdixon

Mar 11, 2007
2:42 PM EST
> ...If somebody can explain in simple terms why I would need Xen or what it can do for me...

Xen, like VMware, lets you run virtual machines on your real machine. This allows you to experiment with new OS'es and the like without touching your physical hardware, and to run multiple OS'es at the same time.
Sander_Marechal

Mar 11, 2007
3:39 PM EST
After a little experimentation I managed to get everything set up correctly. I'll probably write an article in the near future about how I did it and what problems I encountered. Some remarks/tips:

* Turns out I had two 18.2 GB and four 36.4 GB drives, so I couldn't make everything into one RAID5. I made a RAID5 and a RAID1 and joined them using LVM.
* Setting up LVM from the Debian Installer is poorly documented and confusing if you have never done it before.
* Installing Xen 3.0.3 on Debian doesn't set up bridging for you. I had to manually create a xenbr0 bridge in /etc/network/interfaces.
* You need the -pae version of Xen if your system supports more than 4GB RAM, even if you have less than 4GB actually installed.
* xen-create-image only creates a base system for you. You want to run "dpkg-reconfigure locales" and "tasksel --new-install" on a new image. Then create a new user and set "PermitRootLogin" to "no" in /etc/ssh/sshd_config.
* When Debian dom0 shuts down, I get an error message that the LVM volume group cannot be shut down because there are open volumes. This is most likely caused by the fact that both the root and the swap partition of dom0 are in the LVM volume group as well; only its /boot partition isn't.

All-in-all I'm pretty pleased with the setup :-) Thanks for all the tips!
Sander_Marechal

Mar 11, 2007
4:01 PM EST
> what it can do for me on my one year old AMD proc without Pacifica virtualization hardware in it

The thing that makes Xen stand out from other virtualization techniques is that it can run at near-native speed even without hardware virtualization. The only downside is that you have to run a modified kernel to do it (which means it cannot run Windows, for example; only free OSes). If you do have hardware virtualization, then Xen can run unmodified kernels (including Windows).

And I imagine that live migration is very nice, especially for system administrators. Imagine that you buy a new computer. You can then move your OS from one machine to the other without even rebooting it and only 30-100ms of downtime. Sweet :-)
hkwint

Mar 12, 2007
9:39 AM EST
OK, thanks for the Xen info. I am going to try it, when there's some spare time (not soon).

Sander: I also had that problem with LVM sometimes, but it disappeared when I switched to EVMS. As far as I know, it is caused by filesystems that weren't unmounted, which could be nested.
techiem2

Mar 12, 2007
9:50 AM EST
Ok, I'm still trying to understand Xen and just how it functions. I understand the basics of virtualization in general (I've used VMware, VMware Server, QEMU). But as I understand it, Xen is even more advanced? Assuming use of a CPU with virtualization support (is there a list of which ones these are?), is it possible to, say, run a base *nix host (BSD/Gentoo/Debian/whatever) and then run multiple virtualized OSs on it simultaneously (i.e. a Winders session and a *nix session), and switch between them at will without stopping one and starting another? How is the hardware support? I.e., would things like 3D support work for games or whatnot in the installs?

Thanks. Once I get my head around all this maybe I'll get a chance to try it out on something.

swbrown

Mar 12, 2007
10:06 AM EST
> When talking about the 'modern way', you should actually use the EVMS > 'interface' to manage RAID and LVM(2).

EVMS stalled out many years ago when it was rejected by the kernel team. I don't see it as alive enough to be something safe to use at the moment.
Sander_Marechal

Mar 12, 2007
11:57 AM EST
Quoting:Assuming use of a CPU with virtualization support (is there a list of which ones these are?), is it possible to say, run a base *nix host (bsd/gentoo/debian/whatever), and then run multiple virtualized OSs on it simultaneously (i.e. a Winders session and a *nix session) and switch between them at will without stopping one and starting another?


Exactly. And if you don't plan on running Windows as a guest OS then you don't even need a chip with hardware virtualization support. Any CPU will do.

Quoting:How is the hardware support? I.E. would things like 3d support work for games or whatnot in the installs?


Look at http://wiki.xensource.com/xenwiki/XenFaq#head-d5a7a247a51685... Technically it's possible to give one (and only one) guest system access to the hardware, but the (closed source) drivers cannot handle it (yet).
swbrown

Mar 12, 2007
2:09 PM EST
> Technically it's possible to give one (and only one) guest system access to the hardware,

You can partition access based on PCI identifiers as well and hand them out to various guests, but it's not something really useful for the average user.
jimf

Mar 12, 2007
2:32 PM EST
>not something really useful for the average user.

I'm pretty sure Xen isn't 'useful for the average user'. Eventually, maybe? Right now it's useful for professional servers, and of course, geeks at play.
theboomboomcars

Mar 12, 2007
2:52 PM EST
Quoting:You can partition access based on PCI identifiers as well and hand them out to various guests, but it's not something really useful for the average user.


Does that mean that if you have a board with 2 graphics cards and two processors, either dual core or two physical processors, you can have two operating systems running, each with their own video card and processor?

If so that would be cool.
jdixon

Mar 12, 2007
3:24 PM EST
> I'm pretty sure Xen isn't 'useful for the average user'. Eventually, maybe? Right now it's useful for professional servers, and of course, geeks at play.

I haven't played with Xen, so I can't speak for it. As far as I can tell, VMware player IS pretty much ready for the average user, assuming a virtual image exists which does what they want.
Sander_Marechal

Mar 12, 2007
10:32 PM EST
We'll see when the first Distro's with Xen as an option in the installer start shipping.
swbrown

Mar 13, 2007
10:05 AM EST
> I'm pretty sure Xen isn't 'useful for the average user'. Eventually, maybe? Right now it's useful for professional servers, and of course, geeks at play.

It depends on your definition of 'average user'. I can see it having much broader appeal once you could do something like alt-tab between Windows and Linux (I don't mean windowed, I mean like a KVM), but that would require a lot of driver replacement on the Windows side. VMware has been trying to do that, but it's still a long way off. Probably the only driver layer you'd absolutely need to replace would be the video layer (X wouldn't be difficult to have reinitialize; Windows, I dunno how it works).
jimf

Mar 13, 2007
10:27 AM EST
> It depends on your definition of 'average user'.

Very true. I think your assessment is a lot higher than mine :)
