zfs filling up the /boot device with snapshots

So, you are trying to update your system and get this:

Requesting to save current system state
ERROR couldn’t save system state: Minimum free space to take a snapshot and preserve ZFS performance is 20%.
Free space on pool “bpool” is xx%

The zfs filesystem creates a snapshot every time you install a new kernel. These snapshots add up, and suddenly you are no longer able to update and are seeing a lot of errors caused by this. If you are unlucky and reboot, it might even break your system!

Running Ubuntu with zfs creates a pool called bpool for your /boot device. From here on, I will refer to it as bpool.


To get an overview of your zfs pools, use this command: zpool list

kasper@AsusPro:~$ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  1.88G  1.64G   238M        -         -    32%    87%  1.00x    ONLINE  -
rpool   114G  26.3G  87.7G        -         -     7%    23%  1.00x    ONLINE  -

As you can see, my bpool pool is 87% full; I have a problem.
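
If you only want to check the boot pool, zpool list also takes a pool name and a -o column list; this is just a quick convenience, not a required step:

zpool list -o name,size,allocated,free,capacity bpool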

To get an overview of snapshots in the bpool pool, use this command: zfs list -t snapshot | grep bpool

kasper@AsusPro:~$ zfs list -t snapshot | grep bpool
bpool/BOOT/ubuntu_nijvpt@autozsys_646r7v     8K  -  92.8M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_5z8j40     8K  -  92.8M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_rknbhc     0B  -  92.8M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_cd3k48     0B  -  92.8M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_e3s23t     0B  -   185M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_teumkd     0B  -   185M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_iaxh6k    88K  -   279M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_8uh3k1    88K  -   372M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_kpxpbq    88K  -   187M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_3lafml     0B  -   280M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_0gvki1     0B  -   280M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_8j52ch  82.3M  -   187M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_2sibtv    72K  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_3t44c8    56K  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_u58usc     0B  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_3okd08     0B  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_v18vua     0B  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_f5185b     0B  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_12i3wc     0B  -   288M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_o6sgfl     0B  -   288M  -
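
Instead of filtering with grep, you can also ask zfs directly for all snapshots under the boot pool with -r; it is simply another way to get the same list:

zfs list -t snapshot -r bpool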

It seems zfs is not doing any housecleaning by itself, so we need to help!
To remove a snapshot, we need to destroy it. This is done with the destroy subcommand, like this:

sudo zfs destroy pool/dataset@snapshot
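
If you want to see what a destroy would do before committing, zfs destroy supports a dry run with -n (combine it with -v for verbose output). The snapshot name below is just the first one from my listing above:

sudo zfs destroy -nv bpool/BOOT/ubuntu_nijvpt@autozsys_646r7v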

To remove all snapshots in the bpool, you can use the following:

for i in $(zfs list -t snapshot | grep bpool | cut -d " " -f 1); do sudo zfs destroy $i; done

This will destroy all snapshots and free up your bpool pool.
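
The same cleanup can be written without cut by letting zfs print only the snapshot names in script-friendly form (-H drops the header, -o name prints just the name column). This is an equivalent sketch, not the only way to do it:

zfs list -H -t snapshot -o name | grep bpool | xargs -n1 sudo zfs destroy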

If you don’t want to remove all snapshots, pipe your snapshot listing into head or tail. You can add a creation column and sort by it like this:

zfs list -t snapshot -o name,creation -s creation

This will list your snapshots in order of creation date, oldest first. Use wc -l to count them, and use head to pick out only the oldest ones you want to delete.

zfs list -t snapshot -o name,creation -s creation | wc -l
20

for i in $(zfs list -t snapshot -s creation | grep bpool | cut -d " " -f 1 | head -15); do sudo zfs destroy $i; done
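
If you would rather think in terms of how many snapshots to keep instead of how many to delete, GNU head can drop the last N lines with a negative count. A small sketch that keeps the 5 newest bpool snapshots and destroys the rest (the number 5 and the GNU head/xargs behaviour are assumptions on my part):

zfs list -H -t snapshot -o name -s creation | grep bpool | head -n -5 | xargs -r -n1 sudo zfs destroy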

I did try to format the commands, but as usual WordPress makes things so complicated that I had to revert all the special formatting, instead of spending the entire day figuring out why html tags are not respected in "enlighter inline code".
