
How To Resize RAID Partitions (Grow) (Software RAID)

Jul 31, 2012   //   by NPV Webmaster   //  Blog  // 

This article describes how you can grow existing software RAID partitions. I have tested this with non-LVM RAID1 partitions that use ext3 as the file system. I will describe this procedure for an intact RAID array.

1. Preliminary Note

The goal of this exercise was to upgrade the drives on the RAID1 array on the file server, without having to move files or re-install a new clean operating system. Essentially, I wanted to swap the drives, and grow the file system.

The current server has two 500 GB SATA drives, making up two RAID arrays: /dev/md0 (the O/S) and /dev/md1 (/home).

[root@waltham ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb3[1] sda3[0]
      1931004864 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      20474688 blocks [2/2] [UU]

In summary, I took out the current primary 500 GB drive and cloned it onto two 2 TB drives. I cloned the primary drive because the boot sector is written only to the primary drive; that way, both clones carry a copy of the boot sector in case that part of the disk is ever corrupted.

In a software RAID, only the primary drive holds a copy of the boot sector. I learned this the hard way.
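One way to avoid being bitten by this is to write the boot sector to the second drive as well. A sketch, assuming GRUB legacy (which systems of this vintage typically ran) and this example's drive layout, where /dev/sdb1 holds the /boot files as part of /dev/md0:

```shell
# Sketch only, assuming GRUB legacy: install GRUB's boot sector onto
# the second drive (/dev/sdb) so that either disk can boot on its own
# if the other fails. Adjust device names to your layout.
grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```

Run this after the array is healthy; with it in place, losing /dev/sda no longer means an unbootable system.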

Once both drives were cloned with Clonezilla, I took out the old drives, put in the two new cloned drives, and booted the system. The detailed steps follow.

Once I rebooted the system with the two new 2 TB drives, it recognized that the drives were members of an array, but it would not re-establish the array, as you can see:

[root@waltham ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda3[0]
1931004864 blocks [1/2] [_U]

md0 : active raid1 sda1[0]
20474688 blocks [1/2] [_U]

The primary disk came online, but the other one did not.

Knowing that the data was intact, since one of the drives booted up fine, I ran fdisk on the drive that did not come up.

fdisk let me delete the current 490 GB partition sdb3 and re-create it using the maximum allowed space, so the re-created partition was now almost 2 TB. I then added the partitions to their respective arrays:

mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb3

/dev/sdb2 and /dev/sda2 are swap partitions.
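The fdisk session described above can be sketched as follows (keystrokes shown as comments; /dev/sdb and partition number 3 are from this example, and the defaults offered may differ on your system):

```shell
# Interactive fdisk sketch for re-creating sdb3 at full size:
fdisk /dev/sdb
#   d                  delete a partition
#   3                  pick partition 3 (the old 490G one)
#   n                  create a new partition
#   p                  primary
#   3                  partition number 3
#   <Enter> <Enter>    accept the defaults = use the maximum space
#   t                  change the partition type...
#   3
#   fd                 ...to "Linux raid autodetect"
#   w                  write the table and exit
```

Because the new partition starts at the same place as the old one and only grows, the existing data on it is untouched.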

Once this was done, the array started to rebuild itself. You can see the progress by typing the following command:

cat /proc/mdstat
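Re-running that by hand gets tedious; watch (part of procps) will refresh it for you, and mdadm can report the same rebuild state directly:

```shell
# Refresh the rebuild status every 5 seconds (Ctrl-C to stop):
watch -n 5 cat /proc/mdstat

# Or ask mdadm for the array's state and rebuild progress:
mdadm --detail /dev/md1
```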

Once the mirroring completed, I took /dev/sda out of the array and ran fdisk on /dev/sda3 in order to resize it to the full size of the disk.

After that was done, the new partition has to be re-added to the array so the mirroring starts again, onto the new (bigger) partition. Since /dev/md1 is still defined at 500 GB, we need to take the following steps before proceeding.
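The swap-out described above can be sketched with mdadm's --fail/--remove/--add options (device names are from this example, and the array must have finished resyncing onto the sdb side first):

```shell
# Drop the still-small /dev/sda3 out of the mirror:
mdadm /dev/md1 --fail /dev/sda3
mdadm /dev/md1 --remove /dev/sda3

# ...re-create /dev/sda3 at full size with fdisk as before, then
# re-add it; mirroring onto the bigger partition starts automatically:
mdadm /dev/md1 --add /dev/sda3
```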

2. Intact Array

I will describe how to resize the array /dev/md1, made up of /dev/sda3 and /dev/sdb3.

2.1 Growing An Intact Array

Boot into single-user mode: when the GRUB loader comes up, hit 'e' for 'edit', select the first boot command, hit 'e' again, append the word 'single' to the command string, then hit 'b' to continue the boot process. At the root prompt, you will need to unmount the array that you wish to grow:

umount /home

Then activate your RAID arrays:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

Now we can grow /dev/md1 as follows:

mdadm --grow /dev/md1 --size=max

--size=max means the largest possible value; you can also specify a size in KiB.
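Once the grow completes, it is worth confirming that the array reports the new size before touching the file system:

```shell
# Confirm the array now reports the larger size:
mdadm --detail /dev/md1 | grep 'Array Size'
```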

Then we run a file system check...

e2fsck -f /dev/md1

..., resize the file system...

resize2fs /dev/md1

... and check the file system again:

e2fsck -f /dev/md1

Afterwards you can boot back into your normal system, and df should show the file system at its full, grown size:

[root@waltham ~]# df -H
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/md0     21G  5.6G    14G   29%  /
tmpfs       4.1G     0   4.1G    0%  /dev/shm
/dev/md1    2.0T  259G   1.6T   15%  /home