USB Device RAID 4, 5, and 6

Jarret B

Redundant Array of Inexpensive Disks (RAID) is a storage technique used to improve the performance of a set of disks, to provide data redundancy, or both. RAID can help with both read and write performance. RAID is made up of various levels. This article covers RAID Levels 4, 5 and 6 and how to implement them on a Linux system.

RAID 4, 5 and 6 Overview

RAID 4, 5 and 6 are sometimes referred to as Disk Striping with Parity. Data is written to each disk one block at a time, just like in RAID 0. The difference is that in RAID 4, 5 and 6 there is also Parity.

PARITY

Parity is used for data redundancy. The redundancy allows a disk in the RAID Array to fail while the data remains accessible.

Parity works at the Bit level and is distributed according to the RAID Level.
  • RAID 4 – Dedicated Disk

  • RAID 5 – Distributed by block

  • RAID 6 – Distributed two blocks per stripe
The default chunk size used by 'mdadm' is 512 KB, so each stripe segment on each disk is 512 KB. In RAID 4, a single disk stores all of the Parity information. In RAID 5, the Parity for each stripe is placed on one disk, with the Parity for the following stripe placed on the next disk, and so on. RAID 6 has two blocks of Parity per stripe, distributed in a similar rotating fashion to RAID 5.

Parity looks at the bits in the corresponding blocks and sets its own bits so that the total number of 'on' bits is even. For example, if three disks are used (two for data and one for Parity in a given stripe) and we take five bits from each data disk, the Parity would be:

  • Disk 1: 01001

  • Disk 2: 10001
The Parity in the corresponding block on Disk 3 would be: 11000. Bit 1 on Disk 1 is a '0' and on Disk 2 is a '1', so the Parity bit is set to '1' to make the total number of '1's even. The same calculation is applied to each of the remaining four bits, so the corresponding block on Disk 3 would contain 11000. If a disk fails, such as Disk 1, any blocks of data on Disks 2 and 3 can be read as normal, and any data needed from Disk 1 can be regenerated from the remaining data and the Parity on Disks 2 and 3.
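Since the Parity here is simple even parity, the Parity block is just the bitwise XOR of the data blocks. As a minimal sketch, using Bash arithmetic and the bit patterns above:

echo "obase=2; $(( 2#01001 ^ 2#10001 ))" | bc

This prints 11000, the Parity block calculated above. The same XOR also rebuilds a lost block: XORing the Parity with the surviving data block gives back the missing one.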

HARDWARE

For RAID 4 and 5, three or more disks are required, but RAID 6 requires a minimum of four disks. The second block of Parity in RAID 6 takes up more space, but it allows the Array to survive the failure of a second disk, for example while a first failed drive is being rebuilt. With RAID 4 or 5, if a second disk fails during a rebuild, all data is lost.

To create the RAID Array, I will use three USB drives called BLUE, ORANGE and GREEN. I named them from the color of the thumb drive. The drives are all SanDisk Cruzer Switch drives, which are USB 2.0 compliant and have a capacity of 4 GB (3.7 GB).

NOTE: When dealing with RAID arrays, all disks should be the same size. If they are not, they must be partitioned to be the same size. The smallest drive in the array sets the usable size of all of the disks.
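To confirm the drives really are the same size, you can list their sizes in bytes before building the Array. The device names below are the ones used later in this article; yours may differ:

lsblk -b -o NAME,SIZE,MODEL /dev/sdb /dev/sdd /dev/sde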

I placed all three USB sticks in the same hub and tested the write speed. A 100 MB file was written to each drive and timed. The average write time was 11.5 seconds, making the average write speed 8.70 MB/sec. I performed a read test and had an average read time of 3.5 seconds, making the average read speed 28.6 MB/sec.
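The exact test procedure is not shown here, but one common way to run this kind of test is with 'dd', assuming one of the drives is mounted at a hypothetical mount point such as /media/BLUE:

sudo dd if=/dev/zero of=/media/BLUE/testfile bs=1M count=100 conv=fdatasync
sudo dd if=/media/BLUE/testfile of=/dev/null bs=1M iflag=direct

'dd' prints the elapsed time and transfer rate when it finishes. 'conv=fdatasync' flushes the data to the drive before the time is reported and 'iflag=direct' bypasses the cache so the read test measures the device rather than memory.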

To set up the RAID Array, you use the command 'mdadm'. If the utility is not installed on your system, you will receive an error in a terminal when you enter the command 'mdadm'.

To install it, use Synaptic or the package manager for your Linux distro.
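On a Debian-based system, the package can also be installed from a terminal; other distros will have an equivalent package in their own package manager:

sudo apt-get install mdadm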

Once installed, you are ready to make a RAID 4, 5 or 6 Array.

Creating the RAID Array

Open a terminal and type 'lsblk' to get a list of your available drives. Make a note of the drives you are using so you do not type in the wrong drive and add it to the Array.

NOTE: Entering the wrong drive can cause a loss of data.
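Adding a few columns to 'lsblk' can make the correct drives easier to pick out, since the label and size are shown next to each device name:

lsblk -o NAME,SIZE,LABEL,MOUNTPOINT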

From the output of the command above, I am using sdb1, sdd1 and sde1. The command is as follows:

sudo mdadm --create /dev/md0 --level=4 --raid-devices=3 /dev/sdb1 /dev/sdd1 /dev/sde1 --verbose

The command creates (--create) a RAID Array called md0. The RAID Level is 4 (--level=4) and three devices (--raid-devices=3) are being used to create the RAID Array – sdb1, sdd1 and sde1.

NOTE: Simply change the 'level=' to either 4, 5 or 6 for the RAID Level you want to create.
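For instance, since RAID 6 needs at least four devices, a RAID 6 Array could be created with a command like the one below, where sdf1 is a hypothetical fourth device:

sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1 --verbose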

For the RAID 4 command above, the following should occur:

jarret@Symple-PC ~ $ sudo mdadm --create /dev/md0 --level=4 --raid-devices=3 /dev/sdb1 /dev/sdd1 /dev/sde1 --verbose
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb1 appears to be part of a raid array: level=raid0 devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: partition table exists on /dev/sdb1 but will be lost or meaningless after creating array
mdadm: /dev/sdd1 appears to be part of a raid array: level=raid0 devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: partition table exists on /dev/sdd1 but will be lost or meaningless after creating array
mdadm: /dev/sde1 appears to contain an ext2fs file system size=3909632K mtime=Wed Nov 2 17:27:52 2016
mdadm: size set to 3907072K

Continue creating array?

NOTE: If you get an error that the device is busy, remove 'dmraid'. On a Debian-based system use the command 'sudo apt-get remove dmraid' and, when it completes, reboot the system. After the system restarts, try the 'mdadm' command again. You may also need to use 'umount' to unmount the drives first.
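For example, if the desktop auto-mounted the three USB drives, they can all be unmounted with a single command, using the device names from this article:

sudo umount /dev/sdb1 /dev/sdd1 /dev/sde1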

Answer 'y' to the question to 'Continue creating array?' and the following should appear:

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

The RAID Array is created and running, but not yet ready for use.
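You can check that the Array is assembled, and watch the initial build progress, with either of these commands:

cat /proc/mdstat
sudo mdadm --detail /dev/md0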

Prepare md0 for use

You may look around, but the drive md0 is not to be found. Open the GParted application and you will see it there ready to be prepared for use.

By selecting /dev/md0 you will get an error that no Partition Table exists on the RAID Array. Select Device from the top menu and then 'Create Partition Table…'. Specify your Partition Table type and click APPLY.

Now, create the Partition and select the file system to be used. It is suggested to use either EXT3 or EXT4 to format the Array. You may also want to set the RAID Flag. Add the Partition, give it a Label (I used “RAID 4”) and then click APPLY to make all the selected changes. The drives should be formatted as selected and the RAID Array is ready to be mounted for use.
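If you prefer the command line over GParted, a rough equivalent is sketched below, assuming a GPT Partition Table and an EXT4 file system:

sudo parted -s /dev/md0 mklabel gpt
sudo parted -s -a optimal /dev/md0 mkpart primary ext4 0% 100%
sudo mkfs.ext4 -L "RAID 4" /dev/md0p1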

Mount RAID Array

Before closing GParted, look at the Partition name as shown in Figure 1. My Partition name is '/dev/md0p1'. The partition name is important for mounting.

FIGURE 01

You may be able to simply mount 'RAID 4' as I was able to do.

If the mount does not work, then try the following. Go to your '/media' folder and, as ROOT, create a folder such as RAID to be used as a mount point. In a terminal, use the command 'sudo mount /dev/md0p1 /media/RAID' to mount the RAID Array at the mount point named RAID.
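As terminal commands, the two steps look like this, using the /media/RAID mount point from this article:

sudo mkdir -p /media/RAID
sudo mount /dev/md0p1 /media/RAID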

Now you must take ownership of the RAID Array with the command:

sudo chown -R jarret:jarret /media/RAID

The command uses my username (jarret) and group name (jarret) to take ownership of the mounted RAID Array. Use your own username and mount point.

Now, when I write to the RAID Array, my time to write a 100 MB file is an average of 11.33 seconds. The write speed is now 8.83 MB/sec. Reading a 100 MB file from the RAID Array takes an average of 4 seconds, which makes a read speed of 25 MB/sec.

As you can see, the speeds have changed very little (write: 8.70 MB/s to 8.83 MB/s and read: 28.6 MB/s to 25 MB/s). Do remember, if one drive of the Array is removed or fails, the redundancy of the data is lost, but the data is still available.

NOTE: The speed may be increased by placing each drive on a separate USB ROOT HUB. To see the number of ROOT HUBs you have and where each device is located, use the command 'lsusb'.
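The '-t' option of 'lsusb' prints a tree view, which makes it easy to see which ROOT HUB each drive is attached to:

lsusb -t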

Auto Mount the RAID Array

Having the RAID Array auto mount after each reboot is a simple task. Run the command 'blkid' to get the needed information about the RAID Array. For example, running it after mounting my RAID mount point, I get the following:

/dev/sda2: UUID="73d91c92-9a38-4bc6-a913-048971d2cedd" TYPE="ext4"
/dev/sda3: UUID="9a621be5-750b-4ccd-a5c7-c0f38e60fed6" TYPE="ext4"
/dev/sda4: UUID="78f175aa-e777-4d22-b7b0-430272423c4c" TYPE="ext4"
/dev/sda5: UUID="d5991d2f-225a-4790-bbb9-b9a48e691061" TYPE="swap"
/dev/sdb1: LABEL="GREEN" UUID="5914-5431" TYPE="vfat"
/dev/sdd1: LABEL="ORANGE" UUID="4C76-7987" TYPE="vfat"
/dev/sdc1: LABEL="My Book" UUID="54D8D96AD8D94ABE" TYPE="ntfs"
/dev/sde1: UUID="fb783956-17f6-6eda-a45b-150a56e5af70" UUID_SUB="34f799ec-979e-93ec-b8cd-d3f3b7fb5d28" LABEL="Symple-PC:0" TYPE="linux_raid_member"
/dev/md0p1: LABEL="RAID 4" UUID="a07e8b6a-670a-4465-b3a4-39387f19d21e" TYPE="ext4"

The needed information is the line with the partition '/dev/md0p1'. The Label is RAID 4 and the UUID is 'a07e8b6a-670a-4465-b3a4-39387f19d21e' and the type is EXT4.

Edit the file '/etc/fstab' as ROOT using an editor you prefer and add a line similar to 'UUID=a07e8b6a-670a-4465-b3a4-39387f19d21e /media/RAID ext4 defaults 0 0'. The UUID comes from the blkid command, '/media/RAID' is the mount point and ext4 is the file system type. Use the word 'defaults' and then '0 0'. Be sure to use a TAB between each field.
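The finished line in /etc/fstab would look like the one below (with a TAB between each field), and 'sudo mount -a' mounts everything listed in the file so you can test the entry without rebooting:

UUID=a07e8b6a-670a-4465-b3a4-39387f19d21e	/media/RAID	ext4	defaults	0	0

sudo mount -a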

Your RAID 4 drive Array should now be completely operational for use.

NOTE: Looking at the line for /dev/sde1, just above /dev/md0p1, you can see a UUID_SUB value and a TYPE of "linux_raid_member". This is how blkid marks a device that belongs to a RAID Array, so you can identify the original devices being used in the RAID 4 Array.

Removing the RAID Array

To stop the RAID Array, you need to unmount the RAID mount point and then stop the Array device 'md0' as follows:

sudo umount -l /media/RAID
sudo mdadm --stop /dev/md0

Once done, you need to reformat the drives and also remove the line from /etc/fstab which enabled it to be automounted.
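Part of making the drives reusable is clearing the RAID metadata (the superblock) from each one; otherwise they will still be reported as RAID members. A sketch, using the device names from this article:

sudo mdadm --zero-superblock /dev/sdb1 /dev/sdd1 /dev/sde1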

Fixing a broken RAID Array

If one of the drives should fail, you can easily replace the drive with a new one and restore the data to it.

Now, let's say from the above, drive sde1 fails. If I enter the 'lsblk' command, the drives sdb1 and sdd1 are shown and still listed under 'md0p1'. The mounted RAID device is still accessible and usable, but the Fault Tolerance is unavailable since only the two drives remain.

To determine the faulty drive, use the command: 'cat /proc/mdstat'.

The line which shows '[U_]', with the underscore, indicates a break in the RAID Array. The drive marked with (F) is the one that failed, so you know which drive to remove and replace.
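You can also get a per-device report that names the failed device explicitly:

sudo mdadm --detail /dev/md0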

To fix a broken RAID Array, replace the failed drive with a new drive that has at least as much space as the drive it replaces. After adding the new drive, run 'lsblk' to find its address; in this example it is 'sdf1'. If the new drive was auto-mounted, first unmount it by using its label with the command 'umount /media/jarret/label'.

To join the new drive to the existing broken RAID Array, the command is:

sudo mdadm --manage /dev/md0 --add /dev/sdf1

The RAID Array device is 'md0', the Array created earlier. The device to add is 'sdf1'.

To see the progress of the rebuild, use the command 'cat /proc/mdstat'.

At any time, the command 'cat /proc/mdstat' can be used to see the state of any existing RAID Array.

If you must remove a drive, you can tell the system that the device has failed. For instance, if I wanted to remove drive sde1 because it was making strange noises and I was afraid it would fail soon, the command would be:

sudo mdadm --manage /dev/md0 --fail /dev/sde1

The command 'cat /proc/mdstat' should now show the drive marked as failed. Before you just unplug the device, you need to tell the system to remove it from the Array. The command would be:

sudo mdadm --manage /dev/md0 --remove /dev/sde1

You can now remove the drive, add a new one and rebuild the Array as described above.

Hope this helps you understand the RAID 4, 5 and 6 Arrays. Enjoy your RAID Array!
 