So, you are trying to update your system and get this:

Requesting to save current system state
ERROR couldn’t save system state: Minimum free space to take a snapshot and preserve ZFS performance is 20%.
Free space on pool “bpool” is xx%

Ubuntu’s ZFS integration takes a snapshot every time you install a new kernel. These snapshots add up, and suddenly you are not able to update anymore and are seeing a lot of errors caused by this. If you are unlucky and reboot, it might even break your system!

Running Ubuntu with ZFS creates a pool called bpool that backs your /boot filesystem. From here on I will refer to it as bpool.
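If you want to verify this on your own system, listing the pool’s datasets recursively will show the boot environment living on it (a quick sanity check; the dataset names will differ per install):

zfs list -r bpool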


To get an overview of your ZFS pools, use this command: zpool list

$ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  1.88G  1.64G   238M        -         -    32%    87%  1.00x    ONLINE  -
rpool   114G  26.3G  87.7G        -         -     7%    23%  1.00x    ONLINE  -

As you can see, my bpool is 87% full, so I have a problem.
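To see how much of that space is tied up in snapshots as opposed to live data, ZFS can break the usage down per dataset. The usedbysnapshots column in the output of this standard command is the one to watch:

zfs list -ro space bpool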

To get an overview of snapshots in the bpool pool, use this command: zfs list -t snapshot | grep bpool

(In the listing below, ubuntu_xxxxxx stands in for the dataset name of your particular install.)

$ zfs list -t snapshot | grep bpool
bpool/BOOT/ubuntu_xxxxxx@autozsys_646r7v     8K      -  92.8M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_5z8j40     8K      -  92.8M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_rknbhc     0B      -  92.8M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_cd3k48     0B      -  92.8M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_e3s23t     0B      -   185M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_teumkd     0B      -   185M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_iaxh6k    88K      -   279M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_8uh3k1    88K      -   372M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_kpxpbq    88K      -   187M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_3lafml     0B      -   280M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_0gvki1     0B      -   280M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_8j52ch  82.3M      -   187M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_2sibtv    72K      -   284M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_3t44c8    56K      -   284M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_u58usc     0B      -   284M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_3okd08     0B      -   284M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_v18vua     0B      -   284M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_f5185b     0B      -   284M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_12i3wc     0B      -   288M  -
bpool/BOOT/ubuntu_xxxxxx@autozsys_o6sgfl     0B      -   288M  -

It seems ZFS is not doing any housecleaning by itself, so we need to help!
To remove a snapshot, we need to destroy it. This is done with the destroy subcommand, which takes the full snapshot name (pool/dataset@snapshot) rather than a file path:

sudo zfs destroy bpool/BOOT/ubuntu_xxxxxx@autozsys_646r7v
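If you are nervous about deleting the wrong thing, zfs destroy accepts -n (dry run) and -v (verbose), so you can preview what would be removed without actually removing it:

sudo zfs destroy -nv bpool/BOOT/ubuntu_xxxxxx@autozsys_646r7v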

To remove all snapshots in the bpool, you can use the following:

for i in $(zfs list -t snapshot | grep bpool | cut -d " " -f 1); do sudo zfs destroy $i; done

This will destroy all snapshots and free up your bpool pool.
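A slightly more robust take on the same loop (my variant, not strictly needed): ask zfs itself for script-friendly output instead of parsing the human-readable table. -H drops the header, -o name prints only the snapshot name, and -r bpool limits the listing to the bpool pool:

zfs list -H -t snapshot -o name -r bpool | xargs -n1 sudo zfs destroy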

If you don’t want to remove all snapshots, pipe your snapshot listing into tail or head. You can add a creation column to the listing like this:

zfs list -t snapshot -o name,creation -s creation

This will list your snapshots in order of creation date. Use wc -l to count them (note that the count includes the header line) and head to pick out the oldest ones for deletion.

zfs list -t snapshot -o name,creation -s creation | wc -l
20

for i in $(zfs list -t snapshot -o name -s creation | grep bpool | head -15); do sudo zfs destroy $i; done
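If you would rather decide how many snapshots to keep than how many to delete, GNU head accepts a negative line count meaning “everything except the last N lines”. Assuming you want to keep the 5 newest bpool snapshots, this sketch destroys all the older ones:

zfs list -H -t snapshot -o name -s creation -r bpool | head -n -5 | xargs -n1 sudo zfs destroy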

