Installing Debian testing On GPT HDDs (> 2TB) From A Grml Live Linux

Version 1.0
Author: Falko Timme

This tutorial explains how to install Debian testing with the help of debootstrap from a Grml Live Linux system. This should work - with minor changes - for other Debian and Ubuntu versions as well. By following this guide, it is possible to configure the system to your needs (OS version, partitioning, RAID, LVM, etc.) instead of depending on the few pre-configured images that your server provider offers.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

The server I'm using in this tutorial has two hard drives with more than 2TB of disk space each. Disks that big cannot be partitioned with a traditional MBR partition table, so a GUID Partition Table (GPT) is used; tools like fdisk and sfdisk don't work with GPT, therefore we use gdisk and sgdisk instead. I want to use software RAID1 and LVM for this server.

Before you boot the system into rescue mode, you should take note of its network settings so that you can use the same network settings for the new system. For example, if you use Debian or Ubuntu on the system, take a look at /etc/network/interfaces:

cat /etc/network/interfaces

It could look as follows:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
     address <ip_address>
     netmask <netmask>
     broadcast <broadcast>
     gateway <gateway>
iface eth0 inet6 static
  address <ipv6_address>
  netmask 64
  up ip -6 route add <ipv6_gateway> dev eth0
  down ip -6 route del <ipv6_gateway> dev eth0
  up ip -6 route add default via <ipv6_gateway> dev eth0
  down ip -6 route del default via <ipv6_gateway> dev eth0
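
If the system doesn't use /etc/network/interfaces, or if you simply want to double-check, you can also note down the currently active configuration before rebooting (I assume the interface is called eth0 here - use the name of your interface):

ip addr show eth0
ip route show
ip -6 route show
cat /etc/resolv.conf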

 

2 Partitioning

Now boot into the Grml rescue system.

In the rescue system, let's check if software RAID and LVM are in use on the hard drives (if so, we need to remove them before we partition the hard drives):

The commands

lvdisplay
vgdisplay
pvdisplay

tell you if there are logical volumes, volume groups, and physical volumes in use for LVM. If so, you can remove them as follows (make sure you use the correct names, as displayed by the three commands above):

lvremove /dev/vg0/root
vgremove vg0
pvremove /dev/md1
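
If one of these commands complains that a volume is still active or in use, deactivating the volume group first usually helps (vg0 is just the name from the example above - use the name that vgdisplay shows on your system), then repeat the remove commands:

vgchange -an vg0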

Let's check if software RAID is in use:

cat /proc/mdstat

If the output is as below, you have two RAID devices, /dev/md0 and /dev/md1, which have to be removed:

root@grml ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [multipath]
md0 : active raid1 sda1[0] sdb1[1]
      530048 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      104856192 blocks [2/2] [UU]

unused devices: <none>
root@grml ~ #

Remove them as follows:

mdadm --stop /dev/md0
mdadm --stop /dev/md1

Double-check that no RAID devices are left:

cat /proc/mdstat

root@grml ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [multipath]
unused devices: <none>
root@grml ~ #

Because I have two identical hard drives, I want to use software RAID1 plus LVM (because of its flexibility). As LVM cannot be used on the boot partition, I have to create a separate /boot partition that uses RAID1 without LVM. In addition, we need a small BIOS boot partition so that GRUB works with our GUID partition table (GPT); 1MB would be enough, but I use 5MB here. It doesn't matter whether the BIOS boot partition is the first or last partition on the disk or somewhere in between.

To do this, I delete the existing partitions from /dev/sda and create two new ones with Linux RAID (fd00) as the partition type: a small one of about 512MB for /boot and a large one for LVM. You could of course use all remaining space for the large partition, but I tend to leave some space unused because bad sectors tend to appear in the outer areas of a spinning disk (if you use an SSD, it's fine to use the whole disk). Finally I create a third partition with BIOS boot partition (ef02) as the partition type - if you place the BIOS boot partition at the end, as I do here, make sure you leave some space for it:

gdisk /dev/sda

root@grml ~ # gdisk /dev/sda
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.

Command (? for help): <-- n
Partition number (1-128, default 1): <-- ENTER
First sector (34-7814037134, default = 2048) or {+-}size{KMGTP}: <-- ENTER
Last sector (2048-7814037134, default = 7814037134) or {+-}size{KMGTP}: <-- +512M
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): <-- L
0700 Microsoft basic data  0c01 Microsoft reserved    2700 Windows RE
4200 Windows LDM data      4201 Windows LDM metadata  7501 IBM GPFS
7f00 ChromeOS kernel       7f01 ChromeOS root         7f02 ChromeOS reserved
8200 Linux swap            8300 Linux filesystem      8301 Linux reserved
8e00 Linux LVM             a500 FreeBSD disklabel     a501 FreeBSD boot
a502 FreeBSD swap          a503 FreeBSD UFS           a504 FreeBSD ZFS
a505 FreeBSD Vinum/RAID    a580 Midnight BSD data     a581 Midnight BSD boot
a582 Midnight BSD swap     a583 Midnight BSD UFS      a584 Midnight BSD ZFS
a585 Midnight BSD Vinum    a800 Apple UFS             a901 NetBSD swap
a902 NetBSD FFS            a903 NetBSD LFS            a904 NetBSD concatenated
a905 NetBSD encrypted      a906 NetBSD RAID           ab00 Apple boot
af00 Apple HFS/HFS+        af01 Apple RAID            af02 Apple RAID offline
af03 Apple label           af04 AppleTV recovery      af05 Apple Core Storage
be00 Solaris boot          bf00 Solaris root          bf01 Solaris /usr & Mac Z
bf02 Solaris swap          bf03 Solaris backup        bf04 Solaris /var
bf05 Solaris /home         bf06 Solaris alternate se  bf07 Solaris Reserved 1
bf08 Solaris Reserved 2    bf09 Solaris Reserved 3    bf0a Solaris Reserved 4
bf0b Solaris Reserved 5    c001 HP-UX data            c002 HP-UX service
ef00 EFI System            ef01 MBR partition scheme  ef02 BIOS boot partition
fd00 Linux RAID
Hex code or GUID (L to show codes, Enter = 8300): <-- fd00
Changed type of partition to 'Linux RAID'

Command (? for help): <-- n
Partition number (2-128, default 2): <-- ENTER
First sector (34-7814037134, default = 1050624) or {+-}size{KMGTP}: <-- ENTER
Last sector (1050624-7814037134, default = 7814037134) or {+-}size{KMGTP}: <-- 7813000000
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): <-- fd00
Changed type of partition to 'Linux RAID'

Command (? for help): <-- n
Partition number (3-128, default 3): <-- ENTER
First sector (34-7814037134, default = 7813001216) or {+-}size{KMGTP}: <-- ENTER
Last sector (7813001216-7814037134, default = 7814037134) or {+-}size{KMGTP}: <-- +5M
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): <-- ef02
Changed type of partition to 'BIOS boot partition'

Command (? for help): <-- w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): <-- Y
OK; writing new GUID partition table (GPT) to /dev/sda.
The operation has completed successfully.
root@grml ~ #
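
Just for reference: the same three partitions could also be created non-interactively with sgdisk. This is only a sketch of the equivalent commands - the end sector 7813000000 of the second partition matches my disks, so adjust it to yours (or use a relative size and leave some room for the BIOS boot partition):

sgdisk -n 1:0:+512M -t 1:fd00 /dev/sda
sgdisk -n 2:0:7813000000 -t 2:fd00 /dev/sda
sgdisk -n 3:0:+5M -t 3:ef02 /dev/sda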

Next I run...

sgdisk -R=/dev/sdb /dev/sda
sgdisk -G /dev/sdb

... to copy the partition table from /dev/sda to /dev/sdb so that both disks are partitioned identically (the second command then gives /dev/sdb new random GUIDs so that the two disks don't share the same identifiers).
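
To verify that the copy worked, you can print both partition tables and compare them - apart from the disk GUIDs (which sgdisk -G randomized on purpose) the output should be identical:

sgdisk -p /dev/sda
sgdisk -p /dev/sdb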

Run...

mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sda2
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

... to remove any remnants of previous RAID arrays from the partitions.
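
If the drives held filesystems or LVM volumes before, there may also be old filesystem signatures left at the same offsets on the new partitions. Clearing them is optional, but if you want to be thorough, wipefs can do it (careful - this wipes the signatures on the listed partitions):

wipefs -a /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2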

Now we create the RAID array /dev/md0 from /dev/sda1 and /dev/sdb1...

mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sda1 /dev/sdb1

... and /dev/md1 from /dev/sda2 and /dev/sdb2:

mdadm --create /dev/md1 --level=1 --raid-disks=2 /dev/sda2 /dev/sdb2

Let's check with:

cat /proc/mdstat   

As you see, we have two new RAID1 arrays:

root@grml ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [multipath]
md1 : active raid1 sdb2[1] sda2[0]
      1927269760 blocks [2/2] [UU]
      [>....................]  resync =  0.0% (347136/1927269760) finish=185.0min speed=173568K/sec

md0 : active raid1 sdb1[1] sda1[0]
      530048 blocks [2/2] [UU]

unused devices: <none>
root@grml ~ #
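
The initial sync of the large array will take a few hours, but you don't have to wait for it - the array is already usable, so you can continue with the next steps while it runs in the background. To keep an eye on the progress, you can use (leave with CTRL+C):

watch cat /proc/mdstat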

Let's put an ext4 filesystem on /dev/md0:

mkfs.ext4 /dev/md0

Prepare /dev/md1 for LVM:

pvcreate /dev/md1

Create the volume group vg0:

vgcreate vg0 /dev/md1

Create a logical volume for / with a size of 100GB:

lvcreate -n root -L 100G vg0

Create a logical volume for swap with a size of 10GB:

lvcreate -n swap -L 10G vg0
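
vg0 still has plenty of unallocated space left after these two volumes; that's intentional, because with LVM you can always grow the root volume or add further volumes later. If you want to see how much free space is left in the volume group, run:

vgs vg0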

Run:

lvscan

If all logical volumes are shown as ACTIVE, everything is fine:

root@grml ~ # lvscan
  ACTIVE            '/dev/vg0/root' [100,00 GiB] inherit
  ACTIVE            '/dev/vg0/swap' [10,00 GiB] inherit
root@grml ~ #

If not, run...

vgchange -ay

... and check with lvscan again.

Next create filesystems on /dev/vg0/root and /dev/vg0/swap:

mkfs.ext4 /dev/vg0/root
mkswap /dev/vg0/swap

Mount the root volume to /mnt, create a few directories, and mount /dev/md0 to /mnt/boot:

mount /dev/vg0/root /mnt
cd /mnt
mkdir boot
mkdir proc
mkdir dev
mkdir sys
mkdir home
mount /dev/md0 boot/

Create an fstab for the new system:

mkdir etc
cd etc
vi fstab

proc           /proc   proc   defaults         0 0
/dev/md0       /boot   ext4   defaults         0 2
/dev/vg0/root  /       ext4   defaults         0 1
/dev/vg0/swap  none    swap   defaults,pri=1   0 0
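
Instead of the device names, the filesystems can also be referenced by UUID in fstab. If you prefer that, blkid shows the UUIDs of the new filesystems (the UUID in the example line below is just a placeholder - use your own values):

blkid /dev/md0 /dev/vg0/root /dev/vg0/swap

The /dev/md0 line would then look like this:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /boot ext4 defaults 0 2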