
Adding software RAID disks to grow a volume group

Category: Linux   — Published by tengo on October 13, 2010 at 6:39 am

This is a follow-up post to Creating a software RAID 1 as basis for an LVM drive.

The basic steps of adding a disk to a volume group are, for example, explained here. What makes the process described in this post different is that the "disk" we add to our volume group is actually a set of two physical disks bundled into a RAID 1 array.

[LVM scheme diagram]

So the first step after installing our new drives is to marry them together into a Linux md device, a RAID array. Use any tool you like to find out under which device names your system registered the physical drives, then execute the mdadm command with these values to create the RAID "disk", for example:

sudo mdadm --create /dev/md1 -l 1 -n 2 /dev/sdb /dev/sdc

--create creates a new array and is followed by the desired device name for the RAID "disk"; -l specifies the RAID level to use, RAID 1 here; and -n tells mdadm how many drives to add, namely the two devices listed at the end. If you have, for example, the Red Hat logical volume manager GUI running, you should see the new RAID disk in its listing immediately after issuing the command.
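
Independent of any GUI, a quick check from the command line might look like this (device names as above). While the two drives are initially syncing, /proc/mdstat also shows the resync progress:

sudo mdadm --detail /dev/md1
cat /proc/mdstat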

Next, we need to add the new "disk" md1 to our set of physical volumes, even though we are dealing with a "not so physical" volume here. In LVM terms this is called 'converting a disk to a physical volume' and is done with pvcreate:

sudo pvcreate /dev/md1

You see, with pvcreate we register our virtual/RAID 1 "disk" as a physical volume. With the physical volume created, we now need to add this new pv to the volume group (vg) using the vgextend command. As a quick glance at the LVM scheme diagram above shows, the volume group is the pool of storage LVM builds from the underlying physical volumes, and the logical volumes it exposes to the system are carved out of that pool.
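
Before extending the volume group it does not hurt to confirm that LVM now sees the new physical volume, for example with:

sudo pvdisplay /dev/md1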

The trick to adding drives/'physical volumes' to a 'volume group' is vgextend. (In the previous post we used vgcreate to initialize the volume group.) Find out the name of the volume group you'd like to extend with vgdisplay and then use this to build your vgextend command:

sudo vgextend storagevg /dev/md1
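
Running vgdisplay on the volume group again should now show the additional space as free physical extents (storagevg is of course the name of my volume group):

sudo vgdisplay storagevg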

So, after adding our RAID 1 disk md1 to our vg, its storage space is ready to be allocated to a logical volume. In LVM-speak, a 'logical volume' is the disk LVM exposes to the system, independent of how you intend to format it. So far we have grown the available space on the physical layer (by installing disks) and on the LVM-physical layer (by adding a 'physical volume' to our 'volume group'); now we need to tell LVM to use the additional storage space to grow the size of the exposed drive.

To do this we run the lvextend tool, providing the size by which we wish to extend the volume. Use lvdisplay to get the path of the 'logical volume' we want to grow inside the 'volume group'. Here we want to extend the logical volume by 100% of the newly added storage space, and the man page of lvextend tells us that "lvextend /dev/vg01/lvol01 /dev/sdk3" tries to extend the size of that logical volume by the amount of free space on physical volume /dev/sdk3, which is equivalent to specifying "-l +100%PVS" on the command line.
Thus our command:

sudo lvextend /dev/storagevg/onelv /dev/md1

Just for the record, the command "sudo lvextend /dev/storagevg/onelv -l +100%PVS" gave me a "segmentation fault" error, hence the equivalent form above.
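
lvdisplay should now report the grown size of the logical volume:

sudo lvdisplay /dev/storagevg/onelv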

The last step in the process is to resize the file system residing on the logical volume /dev/storagevg/onelv so that it uses the additional space. In my case it is an Ext3 file system, so I use the resize2fs command. An e2fsck beforehand might be needed to make sure everything in this file system is okay:

sudo e2fsck -p /dev/storagevg/onelv

sudo resize2fs /dev/storagevg/onelv
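
Once the file system is mounted (at /home/mount in my case, see the fstab entry further below), df should report the extra space:

df -h /home/mount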

Done.

Aftermath
What looked like an easy and problem-free process turned out to have a few surprises in store. After a reboot my LVM was broken. Even more disappointing, the drive /dev/md1 I had just created now showed up as a strange /dev/md_d1 with a slew of other devices named /dev/md_d1p1 and so on. Horror.

After a bit of forum post reading, I found out that my process had not added the new RAID 1 array to my /etc/mdadm/mdadm.conf file, which I fixed by adding:

ARRAY /dev/md1 level=raid1 num-devices=2 UUID=3eaf73fc:0559f59a:e7cc9877:xxxxx

which is effectively the output of "sudo mdadm --detail --scan". Just issue this command and add the output to your mdadm.conf file. After that, another reboot added /dev/md0 and /dev/md1 properly to my system. I think there's a wizard on every system which creates this file; it should come up when you do "sudo dpkg-reconfigure mdadm".
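
On a Debian/Ubuntu system, appending the scan output and then refreshing the initramfs (which carries a copy of mdadm.conf) might look something like this; adjust the path if your mdadm.conf lives elsewhere:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u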

Use the command "cat /proc/mdstat" to see your RAIDs.

The other thing was my fstab entry for the file system I layered on top of my logical volume /dev/storagevg/onelv. I just couldn't find its UUID, so I reverted to the old format of giving the /dev path in fstab, and it seems to work. (The commented-out line is the newer UUID-based format which newer Debians/Ubuntus should use.)

# UUID=FTDD0F-lfGb-Ihw8-kCOK-3VLa-XjXK-xxxxx /home/mount ext3 rw,noatime,relatime 0 0
/dev/storagevg/onelv /home/mount ext3 rw,noatime,relatime 0 0
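
Incidentally, blkid should normally be able to report the file system UUID of a logical volume, in case you prefer the UUID-based format after all:

sudo blkid /dev/storagevg/onelv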

So far so good.