systemd.mount replacing autofs

I have been a very satisfied user of autofs for several years. Using a laptop in multiple locations without rebooting would often cause annoying timeouts with hard-mounted NFS shares.

So I found autofs, which did a great job. Especially the ghost option is nice: it lets you browse the filesystem even when it is not mounted, with only a small delay while autofs mounts it for you on demand.

I have now discovered that systemd can do this itself with mount units: https://www.freedesktop.org/software/systemd/man/systemd.mount.html
They can be written as systemd units, but the options can also be used directly from /etc/fstab, which is where I prefer to keep my mount options.

To replicate the autofs functionality, add something like this to your mount options:

noauto,x-systemd.automount,x-systemd.idle-timeout=60,x-systemd.mount-timeout=30,_netdev

The above options will mount the share when you try to access it and unmount it again after 1 minute of idling, while _netdev tells the system not to mount it before the network is ready.
More or less the same functionality as autofs, but with no need to install additional software.
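
For reference, a complete /etc/fstab line could look like this (the server name and paths are placeholders, substitute your own NFS share):

nas.example.com:/export/media /mnt/media nfs noauto,x-systemd.automount,x-systemd.idle-timeout=60,x-systemd.mount-timeout=30,_netdev 0 0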


zfs filling up the /boot device with snapshots

So, you are trying to update your system and get this:

Requesting to save current system state
ERROR couldn't save system state: Minimum free space to take a snapshot and preserve ZFS performance is 20%.
Free space on pool "bpool" is xx%

ZFS creates a snapshot every time you install a new kernel. These snapshots add up, and suddenly you are no longer able to update and are seeing a lot of errors caused by this. If you are unlucky and reboot, it might even break your system!

Running Ubuntu with ZFS will create a pool called bpool as your /boot device. From here on I will refer to it as bpool.


To get an overview of your ZFS pools, use this command: zpool list

kasper@AsusPro:~$ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
bpool  1.88G  1.64G   238M        -         -    32%    87%  1.00x  ONLINE  -
rpool   114G  26.3G  87.7G        -         -     7%    23%  1.00x  ONLINE  -

As you can see, my bpool pool is 87% full; I have a problem.

To get an overview of snapshots in the bpool pool, use this command: zfs list -t snapshot | grep bpool

kasper@AsusPro:~$ zfs list -t snapshot | grep bpool
bpool/BOOT/ubuntu_nijvpt@autozsys_646r7v     8K  -  92.8M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_5z8j40     8K  -  92.8M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_rknbhc     0B  -  92.8M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_cd3k48     0B  -  92.8M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_e3s23t     0B  -   185M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_teumkd     0B  -   185M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_iaxh6k    88K  -   279M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_8uh3k1    88K  -   372M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_kpxpbq    88K  -   187M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_3lafml     0B  -   280M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_0gvki1     0B  -   280M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_8j52ch  82.3M  -   187M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_2sibtv    72K  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_3t44c8    56K  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_u58usc     0B  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_3okd08     0B  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_v18vua     0B  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_f5185b     0B  -   284M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_12i3wc     0B  -   288M  -
bpool/BOOT/ubuntu_nijvpt@autozsys_o6sgfl     0B  -   288M  -

It seems ZFS is not doing any housecleaning by itself, so we need to help!
To remove a snapshot, we need to destroy it. This is done with the destroy subcommand. Note that a snapshot is addressed by its dataset name plus an @-suffix, not by a filesystem path:

sudo zfs destroy <pool>/<dataset>@<snapshot>
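
For example, to destroy the first snapshot from the listing above:

sudo zfs destroy bpool/BOOT/ubuntu_nijvpt@autozsys_646r7v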

To remove all snapshots in the bpool, you can use the following:

for i in $(zfs list -H -t snapshot -o name | grep bpool); do sudo zfs destroy "$i"; done

This will destroy all the snapshots and free up space in your bpool pool.

If you don't want to remove all snapshots, pipe your snapshot listing into tail or head. You can add a creation column and sort by it like this:

zfs list -t snapshot -o name,creation -s creation

This will list your snapshots in order of creation date. Use wc -l to count them, and use head to grab only the oldest ones for deletion.

zfs list -t snapshot -o name,creation -s creation | wc -l
20

for i in $(zfs list -H -t snapshot -o name -s creation | grep bpool | head -15); do sudo zfs destroy "$i"; done
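
A small sketch of the same idea, parameterized so it always keeps the newest snapshots (KEEP is my own variable name, adjust to taste):

KEEP=5                                                    # how many of the newest snapshots to keep
total=$(zfs list -H -t snapshot -o name | grep -c bpool)  # how many bpool snapshots exist
if [ "$total" -gt "$KEEP" ]; then
  # oldest first, so head selects the ones to delete
  zfs list -H -t snapshot -o name -s creation | grep bpool | head -n $((total - KEEP)) |
  while read -r snap; do
    sudo zfs destroy "$snap"
  done
fi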

I did try to format the commands, but as usual WordPress makes things so complicated that I had to revert all the special formatting instead of spending the entire day figuring out why HTML tags are not respected in "Enlighter inline code".

Scan for new drives on Linux – the easy way

Adding storage to servers is part of everyday work. This can be anything from a virtual disk on a virtual machine to LUNs presented from some external storage system.

When the new disk has been added, usually nothing happens. You need to scan for the new disk(s) in order to use them. There's a tool called rescan-scsi-bus.sh for this specific task, but sometimes you don't have it, and then you need to do it manually. It's actually not that difficult.

You then need to echo "- - -" to the SCSI subsystem. I have found a lot of different methods for doing the same thing, many of them telling you to do something like echo "- - -" > /sys/class/scsi_host/hostXX/scan, where XX can be any number, and sometimes there are a lot of hosts, so typing all the different host numbers is not optimal. Then someone suggests a for loop, like: for i in $(ls /sys/class/scsi_host/); do echo "- - -" > /sys/class/scsi_host/$i/scan; done

I have tried them all (I think), but in my opinion the best and easiest solution is the one below, as it is pretty easy to remember:

echo "- - -" | sudo tee /sys/class/scsi_host/host*/scan

List your newly installed disks with lsblk and partition them with fdisk.
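
To double-check that the kernel actually picked up the new device (the output will of course differ on your system):

lsblk -d -o NAME,SIZE,TYPE,MODEL   # the new disk should show up here
dmesg | tail                       # recent kernel messages about the attached device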

Re-Volt, Wine, Multiplayer

My kids are beginning to play on their computers, and when one of them was invited to a so-called LAN party, it woke up memories in my mind. I remembered how I used to play at LAN parties myself, and how simple it was to set up a game and play against each other. Today's gaming is all about joining public servers, creating accounts, and answering questions about this and that.

That made me think back on one of my favourite games from around 2000, Re-Volt. I searched the web and found that Re-Volt isn't dead at all: fan communities are still alive, and there's even a small patch for the latest version that adds support for modern widescreens, easier multiplayer, etc. Check it out at http://www.revoltrace.net/ where you will also be able to download the latest version and patch.

The kids are running Windows 7 and Windows 10, and we had no issues installing it; the installer will ask for the DirectPlay module and download it when needed. In order to play multiplayer we had to allow some firewall ports, even though the firewall is disabled on the local network. When hosting a game I noticed that it uses my public IP and not my private one, which is probably why I had to allow additional ports 🙁

Now, let's get back to the actual point of my post. I am running Ubuntu Linux 14.04 on my laptop, and I figured I would be able to play Re-Volt using Wine. No problem at all: it installed just fine and worked like a charm, until I wanted to play multiplayer over the LAN. I got different errors depending on whether I wanted to host or join a game, but I quickly found out that on Linux I also needed the so-called DirectPlay module, which is easily installed with winetricks, included with the Wine installation on my system. After some googling I found out that all I needed to do was:

winetricks directplay

This command installs the DirectPlay module, and now the game runs smoothly with or without multiplayer. The Wine documentation says to run "sh winetricks directplay", which will not work for the package-manager-installed version.
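
If in doubt whether the module actually ended up in your Wine prefix, winetricks can list what it has installed:

winetricks list-installed | grep -i directplay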

Slow startup of Citrix sessions on Linux

I have often wondered why my Citrix XenDesktop sessions, started via a web interface on a Citrix Access Gateway, start up more slowly on Linux machines than on Windows. Today I finally found a solution (and an explanation).

Let's start with the solution:

sudo ln -sf /dev/urandom /dev/random

You'll get the explanation another day 🙂

And here it comes...

Both /dev/random and /dev/urandom are used to produce random data on a Linux system. Both are fed by collecting random events on the system, for example when a program places data in memory. The difference is that /dev/random stops producing data when no new entropy is being generated on the machine; that is why you can make the Citrix session start faster by moving the mouse around the screen or by opening other programs while you wait. /dev/urandom, on the other hand, never stops producing random data: it reuses the entropy already collected and keeps building on the existing data.
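
A quick way to see the difference yourself (the byte counts are arbitrary; on an idle machine the first command may stall):

timeout 5 dd if=/dev/random of=/dev/null bs=64 count=16   # may block while waiting for entropy
dd if=/dev/urandom of=/dev/null bs=64 count=16            # returns immediately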

That is also why the recommendation is to use /dev/random when the random data is going to be used for encryption, since with /dev/urandom there is a theoretical possibility of predicting the random data.

So Citrix is doing exactly what it should, it just takes so long 🙁

How is this handled on Windows? There the session also starts immediately, just as when using /dev/urandom on Linux.

Bug in Citrix Receiver for Linux amd64 (x86_64)

The currently newest Citrix Receiver for Linux, released on April 23rd, 2012, can be downloaded as a 64-bit deb package from Citrix's download area. But there is a problem with the package that causes an annoying error during installation.

I had downloaded the package and tried to install it on my Ubuntu 12.10 with the command:

sudo dpkg -i icaclient_12.1.0_amd64.deb

The above results in this error:

E: Sub-process /usr/bin/dpkg returned an error code (1)

The error happens because an architecture check performed during installation fails and therefore cannot determine the correct architecture of the system. This is fixed by locating this file:

/var/lib/dpkg/info/icaclient.postinst

In it, the following needs to be changed:

echo $Arch|grep "i[0-9]86" >/dev/null
if [ $? -ne 0 ]; then
NotIntel=1
fi

To:

echo $Arch|grep -E "i[0-9]86|x86_64" >/dev/null
if [ $? -ne 0 ]; then
NotIntel=1
fi

Now the script detects the architecture and runs through without errors. The code that needs to be changed starts at line 2648.
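
If you would rather patch it non-interactively, a sed one-liner along these lines should do the same (it keeps a backup, and the pattern must of course match your version of the script):

sudo sed -i.bak 's/grep "i\[0-9\]86"/grep -E "i[0-9]86|x86_64"/' /var/lib/dpkg/info/icaclient.postinst
sudo dpkg --configure icaclient   # re-run the failed configuration step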

Citrix Receiver for Linux with multiple monitors

Unfortunately, not much is happening in the development of the Citrix client for Linux, called "Citrix Receiver for Linux". The Windows and Mac versions are well ahead when it comes to client-side usability. By usability I mean that in the other clients you can very easily configure the use of USB, HDX, and multiple monitors; in the Linux client, of course, it has to be super difficult and nerdy. Below is a real-life example (mine) to underline the differences.

If you want your Citrix session to span multiple monitors on a Windows machine, you do the following:

  1. Start a session and make sure it runs in "window" mode (i.e. not maximized).
  2. Drag the window to the middle of the two screens, so that half of the Citrix session window is on each screen.
  3. Maximize the window.

You are now running a session that spans two screens.

On a Linux client you have to do the following:

  1. Spend several hours, maybe days, searching various forums and Citrix documentation.
  2. Figure out that the wfica program in the /usr/lib/ICAClient/ directory supports a number of options.
  3. Try running the program with the -span option and receive the following error message:
    kasper@laptop:/usr/lib/ICAClient$ ./wfica -span h
    Error: 12 (E_MISSING_INI_ENTRY)
    Please refer to the documentation.
    Error in configuration file.
    Section "ApplicationServers" must contain an entry "".
    kasper@laptop:/usr/lib/ICAClient$ ./wfica -span o
    Error: 12 (E_MISSING_INI_ENTRY)
    Please refer to the documentation.
    Error in configuration file.
    Section "ApplicationServers" must contain an entry "".
    kasper@laptop:/usr/lib/ICAClient$ ./wfica -span a
    Error: 12 (E_MISSING_INI_ENTRY)
    Please refer to the documentation.
    Error in configuration file.
    Section "ApplicationServers" must contain an entry "".
    kasper@laptop:/usr/lib/ICAClient$
  4. Inexplicably figure out that it should say "-span 1,2", and then try to find out how to make sure this option is applied when starting a virtual desktop from the Citrix web interface.
  5. Find out that the option has to be set as an environment variable.

The command "wfica -span h" is supposed to return a list of the monitors that can be marked as active and that you would like the session stretched across; I have no idea why I get the error above.

So: for the Citrix session to span 2 or more monitors, you have to define the environment variable WFICA_OPTS. It is done like this:

export WFICA_OPTS="-span 1,2"

This can be put in, for example, $HOME/.bash_profile or another file that is run when you log in.
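
As a one-off, appending it to your profile could look like this (adjust the monitor numbers to your own setup):

echo 'export WFICA_OPTS="-span 1,2"' >> "$HOME/.bash_profile"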

Now it should hopefully work 🙂

I'm not saying everyone is as slow on the uptake as me and needs half or whole days to figure out the above, but my claim is that it is far easier on the Windows platform. And it annoys me endlessly when you consider that the whole virtualization circus (in VMware's and Citrix's case) is built on Linux.