[How to] Use systemd to mount remote shares

You will get better performance out of remote shares if they are mounted in OSMC’s filesystem. You can do this by putting lines in fstab as described here. But there have been reports of time-outs when OSMC is shutting down. This is probably because systemd is stopping services in the wrong order. Using systemd to mount, rather than fstab, gives us greater control. Here’s how.

We are going to mount a folder shared by server 192.168.1.11. The share is named ShareFolder, needs username yourname and password yourpassword, and we are going to mount it at /mnt/ShareFolder. Your server IP address will be different, and you can use any names you like for the share name and mountpoint. They don’t have to be the same, but they must not contain spaces. The systemd unit files we are going to create must be named after the mountpoint, i.e. /mnt/ShareFolder → mnt-ShareFolder.mount.
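For simple paths like this, the translation from mountpoint to unit name is just “drop the leading slash and replace any remaining slashes with dashes”. A quick shell sketch (systemd-escape -p --suffix=mount does the same job and also handles special characters):

```shell
# Derive the systemd unit name from a mount point.
# For paths containing only letters, digits, dots and dashes, this simple
# substitution matches what systemd expects; for anything unusual,
# use systemd-escape instead.
mountpoint=/mnt/ShareFolder
unit="$(printf '%s' "${mountpoint#/}" | tr '/' '-').mount"
echo "$unit"
```

This prints mnt-ShareFolder.mount, the file name used below.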

First make sure you can access the share from Kodi or by using smbclient as described in the above Wiki article.

Open a commandline on your OSMC device as described here. Then go:

cd /lib/systemd/system
sudo nano mnt-ShareFolder.mount

Now enter the following lines:

[Unit]
Description=Mount smb shared folder
Wants=connman.service network-online.target wpa_supplicant.service
After=connman.service network-online.target wpa_supplicant.service

[Mount]
What=//192.168.1.11/ShareFolder
Where=/mnt/ShareFolder
Type=cifs
Options=noauto,rw,iocharset=utf8,user=yourname,password=yourpassword,uid=osmc,gid=osmc,file_mode=0770,dir_mode=0770

[Install]
#nothing needed here

Save the file with Ctrl-X, then:

sudo nano mnt-ShareFolder.automount

and enter these lines:

[Unit]
Description=Automount smb shared folder

[Automount]
Where=/mnt/ShareFolder

[Install]
WantedBy=network.target

Save that file then go:

sudo systemctl daemon-reload
sudo systemctl enable mnt-ShareFolder.automount

You can now test mount the share with:

sudo systemctl start mnt-ShareFolder.automount
ls /mnt/ShareFolder

You should see the contents of //192.168.1.11/ShareFolder listed. If all is good, add the share as a source in Kodi (browse to Root filesystem, then /mnt/ShareFolder) and remove the corresponding source that starts with smb://.

If you were using fstab, delete the relevant lines. The method also works with nfs - your [Mount] section will look something like this:

[Mount]
What=192.168.1.1:/media/NASdrive
Where=/mnt/NAS
Type=nfs
Options=noauto,rw

and the [Automount]:

[Automount]
Where=/mnt/NAS

The rest is the same as for smb (aka cifs). For this example the unit files will be called mnt-NAS.mount and mnt-NAS.automount.

I’ve seen this information elsewhere for mounting shares in Open/LibreElec. One question, perhaps for @sam_nazarko, will this method also provide the same read-ahead as fstab mounting?

Duke

The better approach may be to use a systemd drop-in. It allows parameters to be overridden. As such, we can force remote-fs.target (which I believe handles fstab) to wait for connman to initialise.

It should do: check /proc/mounts; if the share is mounted in the same way as other shares, then yes.

As I understand it, systemd is just parsing fstab, so it seemed to me the more direct approach is the ‘better’ way to do it. Much like you shouldn’t, these days, be messing with rc.local.

I thought @grahamh’s post was dealing with those rare cases where time-outs occurred when OSMC was shutting down.

Nevertheless, I agree it does make sense for remote-fs.target to wait until network-online.target has been reached.

One oddity I’ve noticed is that connman-wait-for-network.service is disabled by default on OSMC. This causes network-online.target to be reached a lot earlier (arguably too early) than when connman-wait-for-network.service is enabled.

If, as seems to be the case, the network isn’t properly online until connman is fully up and running, it seems sensible for connman-wait-for-network.service to be enabled by default - or made a requirement of another systemd unit, e.g. remote-fs.target.

As for using a drop-in, is there any benefit over copying /lib/systemd/system/remote-fs.target to /etc/systemd/system and then simply editing the copy?

Rare? It happened every time for me - very reproducible. After writing the above, I realised the same result can probably be achieved by putting the dependency on wpa_supplicant into an fstab line. It was wifi shutting down before the umount that was the problem. I haven’t tried that, though.
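For reference, such an fstab line might use the x-systemd.requires= and x-systemd.after= mount options, which systemd translates into unit dependencies. A hypothetical, untested sketch using the example share from earlier in the thread:

```
//192.168.1.11/ShareFolder /mnt/ShareFolder cifs rw,iocharset=utf8,username=yourname,password=yourpassword,x-systemd.requires=wpa_supplicant.service,x-systemd.after=wpa_supplicant.service 0 0
```

The ordering dependency should then also apply in reverse at shutdown, so the unmount happens before wpa_supplicant is stopped.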

What I couldn’t figure was how to get the mount to wait until the network was ready for it. Hence the use of automount.

I thought connman-wait-for-network.service was only invoked by the ‘wait for network’ setting.

Me too – and that can be fixed by adjusting the dependencies of remote-fs.target with a systemd dropin.

That’s intentional: not everyone wants to wait for a network.

It’s adjustable under My OSMC → Network.

All we need to do is add a drop-in that Requires connman-wait-for-network. Then it will always run. But we may want to do that only if remote shares are present in fstab, which makes it a bit trickier.
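A minimal sketch of such a drop-in, assuming the stock unit names (the directory and file name below are illustrative):

```
# /etc/systemd/system/remote-fs.target.d/connman.conf  (illustrative name)
[Unit]
Requires=connman-wait-for-network.service
After=connman-wait-for-network.service
```

After creating it, sudo systemctl daemon-reload makes systemd pick up the override without touching the original unit under /lib/systemd/system.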

I see the advantage of using a drop-in as possibly being less affected by any change to the unit’s config file, whereas simply copying it to /etc/systemd/system and making the changes there is cleaner and clearer.

This is my /etc/systemd/system/remote-fs.target:

osmc@osmc:~$ cat /etc/systemd/system/remote-fs.target 
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Remote File Systems
Documentation=man:systemd.special(7)
Requires=connman-wait-for-network.service
After=connman-wait-for-network.service
DefaultDependencies=no
Conflicts=shutdown.target

[Install]
WantedBy=multi-user.target

The Requires bit ensures that connman-wait-for-network.service will run even if/when disabled.

If you’re interested, you can see the before-and-after effect of the new unit using systemd-analyze plot > plot.svg. You’ll then need some kind of SVG viewer to see the output. The difference is significant. The command is very helpful for many systemd-related issues.

@sam_nazarko Ah yes, the GUI interface. Sometimes I forget to look beyond the CLI. :wink: Checking if there are remote shares in /etc/fstab is probably a step in the right direction, but I think it goes beyond that. There are potentially many use-cases where we want network-online.target to really mean what it says. Without connman-wait-for-network, network-online.target is reached too early, IMO.

The above image might shed some light on @grahamh’s problem.

Edit: clarified one point.


Online target is reached too early for what?

On the desktop only some services will wait for this target.

Well, as you said above:

suggesting that we should wait for connman to initialise before allowing remote-fs.target to be reached. As things stand, network-online.target is reached well before connman has initialised - by which I mean before connman-wait-for-network.service has finished. If the two are not the same, then what does connman-wait-for-network.service actually mean?

From my reading, no-one can agree on what network-online.target means: connected to a transport? Got an IP address? Got a route to the interweb? connman-wait-for-network runs a loop until one of those criteria is met (I can’t remember which one), so it’s a more reliable indicator of useful connectivity.

It seemed to me that systemd parses fstab when it’s setting up local disk mounts (very early), and I couldn’t fathom whether it comes back to parse the remote mounts that it couldn’t mount before the network was ready, or which switches you need to flick to make it do that.

On the first point, network.target is meaningless, so we’ve had to rely on network-online.target to tell us when there’s a working online connection. Of course, connman seems to be out on a bit of a limb, and it’s not clear if it feeds back to systemd for determining network-online.target. It’s quite possible that connman-wait-for-network.service is the connman equivalent of network-online.target. It’s difficult to say.

I came across this thread on stackoverflow:

On the second point, that systemd-analyze plot I ran showed local-fs.target was reached a long way before remote-fs.target, and the latter was reached well before connman-wait-for-network would have indicated a working connection.

Just confirms it’s up to the network manager to decide what network-online means and I suspect connman still hasn’t implemented it, anyway. Hence the (rather ugly) connman-wait-for-network.

Finally, a chance to reply to all of this.

Recent versions of ConnMan have a target that can now be used as a systemd trigger. A couple of years ago when @DBMandrake and I were working on networking, this didn’t exist. ConnMan (the way it’s built) needs a bit of a tidy up, and it’s on the list.

Perhaps we’d want to deprecate our version and use their target instead.

Our method:

[Unit]
Description=Wait for Network to be Configured
Requisite=connman.service
After=connman.service
Before=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/bash -c "if grep -q nfsroot /proc/cmdline; then exit 0; fi; count=60; while [ $count -gt 0 ]; do if connmanctl state | grep -iq 'ready\|online'; then break; fi; sleep 1; let count-=1; done; exit 0"

[Install]
WantedBy=network-online.target

Note that we run this with Before=network-online.target. The objective is to only wait for local network connectivity, not the Internet (this measure alone is sometimes unreliable).

Included, but inactive in OSMC, is a potential successor, connman-wait-online.service, which looks like this:

[Unit]
Description=Wait for network to be configured by ConnMan
Requisite=connman.service
After=connman.service
Before=network-online.target
DefaultDependencies=no
Conflicts=shutdown.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/connmand-wait-online
RemainAfterExit=yes

[Install]
WantedBy=network-online.target

DefaultDependencies=no is used to prevent long waits on shutdown. I remember @DBMandrake and I made some suggestions about how to resolve some ConnMan issues with delays. ConnMan picked up some of these changes, but as a result things weren’t entirely resolved (and were possibly worse) because of the way the changes were picked up. This was almost two years ago, however, so it’s hard to remember the specifics. The story can be found across the ConnMan JIRA and mailing list.

There is now a /usr/sbin/connmand-wait-online binary which has some options, and it may be better to use this binary to wait for the network. This is the same approach NetworkManager takes with NetworkManager-wait-online.service, though it should be noted that that service isn’t enabled by default either, or boot would be rather slow.

But I don’t think any changes here are needed for us to solve the slow shutdown when shares are mounted. Rather, the ordering of the services simply needs adjusting.

Remote shares should always mount properly; because ConnMan is required beforehand, i.e.

osmc/package/network-osmc/files/lib/systemd/system/connman.service at master · osmc/osmc · GitHub.

Adding DefaultDependencies=no to connman.service might be enough to fix this.

TL;DR: I think the systemd unit just needs tweaking, but there are better ways to assess connectivity these days.

Sam

Well, thanks for taking the time. I really don’t think there’s an issue. It was just that when I tried fzinken’s fstab I had a problem, which I solved as above. Actually, I don’t have any smb mounts!

Yes, we should use the latest stuff if it’s going to work better, but if it ain’t broke…

Correct - there is no formal standard for what network-online.target means in systemd.

So when we wrote connman-wait-for-network.service we chose to check the connman status, and we consider the network to be online (and thus network-online.target satisfied) when it is in “ready” or “online” state.

Ready state in connman means something different for static IP configuration versus DHCP. For static IP you will get ready state purely from having an active Ethernet link (even to a switch which doesn’t go anywhere else).

For DHCP (which most users will be using) you won’t get ready state unless you also get a successful DHCP lease or renewal after the link goes up. No DNS checks or pings to the default gateway etc. are done.

When in ready state, connman will try to connect to a server online to establish whether it should switch to “online” state. Our version of connman is customised to connect to an OSMC server rather than the default server used by connman, and it also checks for a specific HTTP response so it isn’t fooled by walled gardens or DNS redirection etc. If it says online, you can at least connect to an OSMC server.

We chose to make the ready state sufficient for network-online.target (hence the test for ready|online), since there’s no guarantee that an internet connection will be available for a given installation, and more often than not it isn’t required - just connectivity to a server on the LAN.

The reason the service is disabled by default is quite simple: if it were enabled by default, there would be up to a 60-second timeout every time you boot before Kodi launches, because Kodi depends on network-online.target. You only want the network to be up before Kodi runs IF your Kodi installation depends on the network (not everyone’s does). It can be enabled in the network GUI, and in fact when you configure a MySQL install we automatically force it on, as that’s the sensible thing to do.

We did at one point consider changing things so that connman-wait-for-network is always enabled, and the GUI setting “Wait for network” only changes the dependency in the mediacenter (Kodi) service (probably using a systemd drop-in). In hindsight this is probably a better way to do it, as other services that rely on network-online.target would always be able to do so independently of Kodi; however, we have just not got around to implementing it.
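If that approach were implemented, the drop-in might look something like this (a sketch only - the file name is hypothetical; mediacenter.service is OSMC’s Kodi unit):

```
# /etc/systemd/system/mediacenter.service.d/wait-for-network.conf  (hypothetical)
[Unit]
Wants=connman-wait-for-network.service
After=connman-wait-for-network.service
```

The GUI toggle would then only need to create or delete this one file, leaving connman-wait-for-network.service itself always enabled for other services to depend on.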

As for the original problem with network shares - the recommended way to do this is to use x-systemd.automount,noauto in fstab to generate a systemd mount unit which mounts on the fly on first access.

This is fully asynchronous during boot: processes not trying to access files within the mount path are not delayed, while any process that does try to access it will trigger a mount attempt and be suspended (at the filesystem system call) until the mount has finished, then resumed - a bit like the old autofs.

I think this is a much cleaner method than simply making all network mounts wait for the network to be available (as claimed by network-online.target), as that delays the entire boot process waiting for every last remote mount, even if they’re not being used. Mounting asynchronously on demand works very well in practice and is quite resilient against temporary server outages.
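Using the cifs example from earlier in the thread, such an fstab line might look like this (an untested sketch with illustrative values; the options mirror those used in the mount unit above):

```
//192.168.1.11/ShareFolder /mnt/ShareFolder cifs rw,iocharset=utf8,username=yourname,password=yourpassword,uid=osmc,gid=osmc,x-systemd.automount,noauto 0 0
```

With noauto, nothing is mounted at boot; x-systemd.automount makes systemd generate the automount unit so the first access to /mnt/ShareFolder triggers the mount.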

Yes, indeed. Thanks for filling in the gaps in the picture. None of this addresses the unmount issue, though, which was the original reason for this thread.

Was the issue only over Wifi ?

For an Ethernet mount, the dependencies in the connman service should ensure an orderly shutdown, so that mounts are unmounted before connman takes the network down. About a year ago a change to systemd in Debian broke our previous ordering, which I think Sam fixed (I haven’t been very active recently). However, I believe there is still an issue with service ordering over Wifi, where Wifi can go down before mounts are unmounted; from memory the problem was wpa_supplicant being stopped too soon.

It’s probably an incorrect ordering dependency in wpa_supplicant, although we may be able to work around it using a dropin to modify the Debian service.

I solved the problem by making a dependency on wpa_supplicant as you can see above. I didn’t test whether the problem was there with just ethernet. In passing, I attempted to avoid automount, but as you say, it works better with automount.

From the feedback, I get the sense people are not happy with my solution, but I don’t understand what the objection is. Bear in mind I’m only at systemd 101 (I don’t know what a drop-in is, for example), so I’m happy to accept advice from experts.