Extending the root LVM on Linux, live

When dealing with VMs, if you didn't plan your filesystem layout and partitions well and put everything on root, you may one day need to extend it. For instance, you may only provision 10GB of EBS on an Amazon EC2 instance at first, then as your files grow (logs, say, or content for your website) you end up provisioning more space by extending the existing disk. Reinstalling means downtime, so adding a new EBS disk and copying or backing everything up over is not a great option, and just copying database files between two installs of the same database software may not work either, forcing an export/import instead. If you move an existing setup to a larger physical drive with a raw disk image copy such as dd and have an LVM setup, this applies too, though the free space will be at the end of the disk, so make sure your LVM is always on the last partition.

Here's how. Let's make this harder and say we have a standard primary partitioning setup with LVM sitting in one of the partitions: vda, with vda1 as boot, vda2 as swap, and vda3 holding vg…/root. All of the following commands should be run as root. They should be distro-agnostic on Linux and work across the major distributions (you may need to install the utilities first). On platforms like AWS, GCP and OpenStack there is no need to worry about the physical representation at all, since that layer is handled by the platform itself (which is far more complicated underneath); you only have to worry about the LV and VG.

First thing is a backup. Either do a software backup or run dd if=/dev/[source] of=[destination]. If restoring, just swap the if and of (if is in, of is out). This can also be used to copy partitions, and used this way it creates a raw image of a partition that can be restored directly. If you are on a virtualisation platform, snapshots are a quick way to back up and restore as well. Both SUSE (btrfs) and Solaris (ZFS) support quick filesystem snapshots too, though these may not be enough to recover from a fault on the physical media; SUSE does ship its own comprehensive filesystem/drive tool suite with a GUI usable from the terminal (it's why I like to use SUSE for machines meant for storage).
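As a rough illustration (the device and destination path here are just examples, adjust them for your system), a raw image backup of the LVM partition and its restore might look like this:

:# dd if=/dev/vda3 of=/mnt/backup/vda3.img bs=4M status=progress
:# dd if=/mnt/backup/vda3.img of=/dev/vda3 bs=4M status=progress (restore, if and of swapped)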

If you are using a virtualisation platform like AWS or OpenStack, skip to extending the LV, since you do not have to deal with the physical layer.

Now, once you've extended the volume outside of the VM, check with lsblk:

:# lsblk
—————–output————————-
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55.4M 1 loop /snap/core18/2128
loop1 7:1 0 70.3M 1 loop /snap/lxd/21029
loop2 7:2 0 32.3M 1 loop /snap/snapd/12704
sr0 11:0 1 1024M 0 rom
vda 252:0 0 24G 0 disk
├─vda1 252:1 0 512M 0 part /boot/efi
├─vda2 252:2 0 1G 0 part /boot
└─vda3 252:3 0 6.5G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 6.5G 0 lvm /

Normally I'd use Debian, but work specced Ubuntu. Here I have Ubuntu Server installed on an 8GB virtual disk that I have just expanded to 24GB (if using Proxmox or VMware, it's quicker to keep a stock installed OS backed up and just restore from that backup). The default Ubuntu install creates a primary partition setup with LVM on the last partition and everything on root. However, Ubuntu also likes to use only half the available space if you have a very large disk, and the installer isn't very clear about this (among the things I dislike about Ubuntu, aside from it taking 3x more space than Debian, which means 3x the storage cost on AWS).

Make sure the LVM utilities are installed. Try running pvscan, lvscan and vgscan; these should work and return a list of volumes related to LVM. LVM, even though it simplifies hardware partition management, significantly complicates things on the software/virtual side. You have physical volumes (think physical disks), on top of those you have volume groups (pools of space built from the physical volumes), and on top of those you have logical volumes (think partitions). With the extra complexity comes LVM thin provisioning for VMs, where you can assign more space in total than actually exists, while letting the virtualisation layer deal with the disks and VM disks directly in raw format. It also lets you easily swap them around, or even mount them on the host OS if disk encryption is not used on the VM.
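If the scan commands are missing, the userspace tools usually ship in a package called lvm2 on the major distros (check your distro's package name if in doubt), and the short-form commands give a compact summary of each layer:

:# apt install lvm2 (Debian/Ubuntu; use dnf or zypper on other families)
:# pvs (summary of physical volumes)
:# vgs (summary of volume groups, including free space)
:# lvs (summary of logical volumes)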

Once the new space is verified to exist in lsblk, use fdisk /dev/[disk]. In many cases it will be something like /dev/vda, which you can obtain from lsblk or even df -h.

:# fdisk /dev/vda
: p (print the table and take note of the start sector; the new partition must start in the same place)
: d (delete the partition)
: 3 (select the partition to delete)
: n (create a new partition; accept the defaults for the partition number and first/last sector to use the whole remaining disk, otherwise do the maths and give an exact start/end for the size you want)
: N (when asked about the existing LVM signature, keep it, do not remove it)
: w (write the changes and exit)
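If fdisk complains that it could not re-read the partition table because the device is busy (common when the disk holds the running root), something like partprobe, from the parted package, usually gets the kernel to pick up the new size without a reboot:

:# partprobe /dev/vda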

If all went well, you should now see that the partition we want is larger:

:# lsblk
—————output————–
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55.4M 1 loop /snap/core18/2128
loop1 7:1 0 70.3M 1 loop /snap/lxd/21029
loop2 7:2 0 32.3M 1 loop /snap/snapd/12704
sr0 11:0 1 1024M 0 rom
vda 252:0 0 24G 0 disk
├─vda1 252:1 0 512M 0 part /boot/efi
├─vda2 252:2 0 1G 0 part /boot
└─vda3 252:3 0 22.5G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 6.5G 0 lvm /

Next, run pvresize on the partition so the physical volume grows into all the new space:

:# pvresize /dev/vda3
—————–output——————
Physical volume "/dev/vda3" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized

Now to extend the LV, use lvscan to get the volume path/name:

:# lvscan
—————output—————-
ACTIVE '/dev/ubuntu-vg/ubuntu-lv' [<6.50 GiB] inherit

Then extend (in this case we use 100% of the free space; the -r flag also resizes the filesystem on the LV):

:# lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
————————output—————
Size of logical volume ubuntu-vg/ubuntu-lv changed from <6.50 GiB (1663 extents) to <22.50 GiB (5759 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 5897216 (4k) blocks long.

If you prefer to set the size to extend by manually, use the size option:

-L, --size [+]LogicalVolumeSize[bBsSkKmMgGtTpPeE]
Where the size suffixes are:

    M for megabytes
    G for gigabytes
    T for terabytes
    P for petabytes
    E for exabytes

Without the + sign the value is taken as an absolute one.

# Add 20 gigabytes to the current logical volume size
$ sudo lvextend -r -L +20G /dev/name-of-volume-group/root
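For completeness, the same option without the plus sign sets an absolute size (the 25G value is just an example; lvextend can only grow, so the absolute size must be larger than the current one):

# Grow the logical volume to exactly 25 gigabytes and resize the filesystem with it
$ sudo lvextend -r -L 25G /dev/name-of-volume-group/root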

Use df to check that the new space has been applied:

# df -h
————————–output—————-
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 394M 1.2M 392M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 23G 3.3G 18G 16% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/loop1 71M 71M 0 100% /snap/lxd/21029
/dev/loop0 56M 56M 0 100% /snap/core18/2128
/dev/loop2 33M 33M 0 100% /snap/snapd/12704
/dev/vda2 976M 107M 802M 12% /boot
/dev/vda1 511M 5.3M 506M 2% /boot/efi
tmpfs 394M 0 394M 0% /run/user/1000

If the filesystem still shows the old size (for example, if lvextend was run without -r), you need to resize it manually.
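If you are not sure which filesystem the LV carries, something like lsblk -f (or df -T) will tell you; the device path below matches this example setup:

:# lsblk -f /dev/mapper/ubuntu--vg-ubuntu--lv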

For an ext4 filesystem:

:# resize2fs /dev/name-of-volume-group/root

For an XFS filesystem (the command is short but the output is long):

$ xfs_growfs /
meta-data=/dev/mapper/rhel-root isize=512 agcount=4, agsize=1764608 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=7058432, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=3446, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 7058432 to 9679872

I will say though that one benefit of LVM is that the filesystems don't have to sit in one specific contiguous region (if there is 5GB of free space at the front of the disk, a 10GB partition in the middle and 5GB free at the end, LVM lets you create a 10GB volume that uses the free space at the front and the end together). In a traditional setup you could still move partitions around to merge the free space into one partition, but that is generally slow.
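As a hypothetical sketch of that point (the volume group and LV names here are made up), a single new LV can be created from all remaining free extents no matter where they sit on the underlying disks:

:# lvcreate -l 100%FREE -n data data-vg (use every free extent in the VG for one LV)
:# mkfs.xfs /dev/data-vg/data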

Definitions: Physical volume (PV) – what LVM treats as a physical disk/volume. In a VM, a virtual disk or volume is exposed to the guest as a physical drive.
Logical volume (LV) – a virtual volume. Because it is another layer of abstraction, it is not bound by physical limitations, and with LVM thin provisioning you can go further and allocate more space in total than physically exists. Actually using more space than exists, however, can cause data loss and corruption, so that is the thing to avoid. Software stuck writing to disk can also overfill an LV and cause corruption and other problems, which can be resolved by extending the volume, letting the operations finish, and then shrinking it again.
Volume group (VG) – the group/pool that logical volumes are carved from. In LVM you can have many logical volumes within a volume group, and the total space assigned to them can (with thin provisioning) exceed the size of the group. It is loosely similar to a partition in a traditional disk setup, but it is not limited to only 4 primary partitions, and it is the layer whose space allocation is physically limited.
In a traditional disk setup: Physical disk -> partition(s) -> filesystem -> data
In an LVM setup: Physical disk -> LVM disk (PV) -> Volume Group (VG) -> Logical Volume (LV) -> filesystem -> data

In an LVM setup, it is fatal to exceed the amount of physical space allocated to the VG: it can cause data loss or corruption, and it can leave an OS unable to boot if the root volume is overfilled. Due to the nature of LVM, it is possible to overfill a volume when actual usage is close to full and software opens multiple large writes at once. Overprovisioning is common, so take care to monitor usage at least at the VG level to avoid data loss, corruption and downtime. Also note that most partitioning tools do not know how to deal with LVM and will report the metadata or partition tables as corrupt; ignore this when dealing with LVM.
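A quick way to keep an eye on this is to check the free space per VG, and the data usage per thin pool if you use thin provisioning (these are standard vgs/lvs output fields; the thin-pool percentage columns only show meaningful values when thin LVs exist):

:# vgs -o vg_name,vg_size,vg_free
:# lvs -o lv_name,lv_size,data_percent,metadata_percent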

I would also highly recommend XFS over ext4 unless you have a small amount of IO and no large files. Both are robust and have their own tools to diagnose and fix issues, but on performance and inodes XFS beats ext when there is a lot of IO or a lot of files. Distros like openSUSE, however, use btrfs; they are heavily invested in it and ship the most recent stable version before other distros pick it up in a stable branch, so when using SUSE, use btrfs instead. The distro also comes with a full suite of GUI-over-terminal file/disk recovery tools should boot fail because of it. Just as with Solaris and ZFS, these distros use their own filesystems far better than they would use ext or XFS. The Red Hat family defaults to XFS, while the Debian family defaults to ext (though you should choose XFS there).