This instalment of my blog is, as usual, part rant and part thinly veiled "documentation to myself for when I next forget how to do this".
So I’ll set the scene: I have been gradually migrating things back to my home-based hosting in Sydney (what with stable power and a not-abysmal VDSL2 NBN connection) rather than hosting in AWS, Vultr and about fifty other random SaaS services.
Not to mention that I got very excited to get on board with Red Hat’s Developer program. So what is one to do? Install it, obviously.
Well - things have changed a bit since I did my RHCE on RHEL 5, it would seem. First, let’s cover what we are actually trying to do here.
There are four 2TB SATA disks in a five-bay hot swap caddy. As you can imagine, I want semi-respectable performance, so I chose RAID10. Since I can’t boot from RAID10 I also need to create a RAID1 for /boot. We also want to use LVM because we are not scrubs (or on AWS).
- md0 - RAID1 /boot (sda1 sdb1 sdc1 sdd1)
- md1 - RAID10 LVM physical volume (sda2 sdb2 sdc2 sdd2)
  - LVM PV rhel_vg
    - lv_root - /
    - lv_home - /home
    - lv_var - /var
    - lv_swap - swap
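For the record, the same layout can be expressed as a kickstart fragment, which sidesteps the anaconda GUI entirely. This is only a sketch - the raid.NN/pv.NN labels are arbitrary, and the lv_home/lv_var sizes are my assumptions, so adjust to taste:

```
# Mirror members for /boot (RAID1) and PV members (RAID10), one per disk
part raid.01 --size=500 --ondisk=sda --asprimary
part raid.02 --size=500 --ondisk=sdb --asprimary
part raid.03 --size=500 --ondisk=sdc --asprimary
part raid.04 --size=500 --ondisk=sdd --asprimary
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
part raid.13 --size=1 --grow --ondisk=sdc
part raid.14 --size=1 --grow --ondisk=sdd

raid /boot --level=1 --device=md0 raid.01 raid.02 raid.03 raid.04
raid pv.01 --level=10 --device=md1 raid.11 raid.12 raid.13 raid.14

volgroup rhel_vg pv.01
logvol /     --vgname=rhel_vg --name=lv_root --size=10240
logvol /home --vgname=rhel_vg --name=lv_home --size=10240
logvol /var  --vgname=rhel_vg --name=lv_var  --size=10240
logvol swap  --vgname=rhel_vg --name=lv_swap --size=1024
```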
Ok that plan looks great - install time!
Note: the total caveat here is EFI. Since my old X9SCL+ hasn’t yet shed MBR-based booting, I don’t need to stuff around with EFI partitions. So I have skipped them, since I was beyond patience by that point. If you need EFI, I am keen to hear what it takes to mirror the boot volumes.
So I apologize for what is normally a blog light on screenshots, but since this is all anaconda GUI, here we go.
Select your physical disks.
Don’t click the tempting “Click here to create them automatically” button. Nooo - we click the + button at the bottom left to add our first boot partition.
- Mount point = /boot
- Desired Capacity = 500M
Now we need to change the /boot partition on /dev/sda1 to a RAID1 array.
- Click the Device Type drop-down menu, then select RAID.
- The RAID level defaults to RAID1. Keep this for our boot volumes.
- Click Update Settings to save this partition’s configuration. If you clicked away already, the joke is on you - start again from the first step. The number of times I fell for this is embarrassing.
Ok now we assign our root mount point by clicking + again.
- Mount point = /
- Desired Capacity = 10G
If you’re thinking “Oh yeah, we just did this, I select RAID in the Device Type here”, you’re going to hit one of my pet peeves. No, this time we Modify the Volume Group (think about it afterwards and it makes sense).
- Click Modify
- Click the RAID Level drop-down menu, then select your desired RAID level (we use RAID10 in this example).
- Click Save
Finally we just need to add a swap volume. Clicketh the + button once more.
- Mount point = swap
- Desired Capacity = 1G
Believe it or not the swap volume is the least painful since it is just an additional logical volume on the existing RAID10 physical volume. If you wanted it in another volume group on a different physical volume I feel for you son.
Ok so we are installing at last! Shortly your system should boot and your layout will be something like this.
So that all seems fairly easy? Well, hopefully yes! But in case you come unstuck like I did, here are the traps I hit.
Zero any disks first
If you’re reinstalling over an old deployment, you’re almost sure to hit blivet bugs in the partitioner, which annoyingly only appear after you spend a silly amount of time clicking around the partitioner GUI and the installer actually starts trying to get going.
So first boot into rescue mode and create new MSDOS labels on each disk you plan to use for the install, but do not create any partitions.
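From rescue mode the relabelling can be scripted. A sketch assuming the same four disks as above; the commands are only printed for review (strip the leading `echo` to run them), because they are destructive:

```shell
# Dry run: prints the commands so you can sanity-check the device list.
# Remove the leading "echo" to actually wipe signatures and write
# fresh MSDOS labels.
for d in sda sdb sdc sdd; do
    echo wipefs --all "/dev/$d"
    echo parted --script "/dev/$d" mklabel msdos
done
```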
The bugs you will hit look something like this:
```
The following was filed automatically by anaconda:
anaconda 126.96.36.199-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/formats/__init__.py", line 405, in destroy
    raise FormatDestroyError(msg)
  File "/usr/lib/python2.7/site-packages/blivet/deviceaction.py", line 651, in execute
    self.format.destroy()
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 377, in processActions
    action.execute(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 374, in doIt
    self.devicetree.processActions(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 224, in turnOnFilesystems
    storage.doIt(callbacks)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/install.py", line 186, in doInstall
    turnOnFilesystems(storage, mountOnly=flags.flags.dirInstall, callbacks=callbacks_reg)
  File "/usr/lib64/python2.7/threading.py", line 764, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
FormatDestroyError: error wiping old signatures from /dev/mapper/vg01-rootfs: 1
```
Expanding an existing RAID’ed PV
So you noticed your disks are a little light in the way of free space, eh? It turns out that if you didn’t fully allocate your disks during partitioning, the installer no longer fills your disks for you. That physical volume is exactly large enough to fit the logical volumes you asked for and not an errant gigabyte more.
So let’s expand the disks, because actually we really did want to add more logical volumes - just not at install time.
fdisk each physical member of the RAID array
- Delete the partition containing the RAID member - d
- Recreate the partition using the full disk - n
- Change the partition type - t, you will want fd (Linux raid autodetect) for mdadm arrays
- Reboot to clear the kernel’s cached partition tables
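The interactive fdisk dance above can also be scripted with sfdisk, one member disk at a time. This is a sketch under my assumptions (the RAID member is partition 2 on each disk), and again the commands are echo-guarded - strip the leading `echo` to run them for real:

```shell
# Non-interactive equivalent of the fdisk steps, per member disk.
# Assumes the RAID member is partition 2. Dry run: remove the leading
# "echo" on each line to execute.
for d in sda sdb sdc sdd; do
    echo sfdisk --delete "/dev/$d" 2                # d: drop the old partition
    echo "echo ',,fd' | sfdisk --append /dev/$d"    # n + t: recreate full-size, type fd
done
```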
Expand the array via mdadm
$ mdadm --grow /dev/md1 --size=max
Finally resize the PV
$ pvresize /dev/md1
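With the PV grown, the reclaimed extents can finally go to those extra logical volumes. A sketch only - lv_var and the 100%FREE allocation are my choices, not gospel - and echo-guarded like the earlier snippets:

```shell
# Dry run: remove the leading "echo" on each line to execute.
echo pvs /dev/md1                                        # check PFree picked up the new extents
echo lvcreate --extents 100%FREE --name lv_var rhel_vg   # assumed LV name
echo mkfs.xfs /dev/rhel_vg/lv_var                        # xfs shown; use your preferred filesystem
```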