Using NFS with Synology

We’ve had a few issues recently on the Help and Support board with people having difficulties accessing their Synology DiskStations from OSMC via NFS. Not having a Synology box myself, I’ve been flying blind when helping out, with no real idea whether Synology’s version of NFS really is that quirky or whether it’s just the more usual configuration issues getting in the way.

So I enlisted the help of a friend who has a Synology box, albeit one not on the latest version of the DSM software, and ran a few basic tests to see how bad (or good) Synology’s NFS server implementation really is. AFAICT, the version of DSM I was testing (5.2) does support access control lists (ACLs), but none of the data on his server showed the tell-tale plus sign signifying ACL control.

So on to the tests. It’s a Synology DiskStation DS210j running DSM 5.2-5967 Update 2. It’s a very modest piece of kit by today’s standards but I was more interested in testing its implementation of NFS server. And the “Executive Summary” is that it worked exactly as it should have done in all the tests I performed, barring one small issue that was very likely related to the limited configuration in the VM I used to access the box. I didn’t test everything but what I did test worked just fine on NFS v3. I did not enable NFS v4 because it carries all sorts of unnecessary complications.

Here’s a summary of my notes:

Control Panel -> File Services

Enable NFS (v4 support not enabled)
Advanced:Settings: Apply default Unix permissions.

Shared Folders -> NFS Permissions

These are the options available in the menu and the corresponding line that is created in /etc/exports:

Squash: No Mapping = no_root_squash

Squash: Map root to admin = root_squash + anonuid=1024

Squash: Map root to guest = root_squash + anonuid=1025

Squash: Map all users to admin = all_squash + anonuid=1024

For reference, these are the relevant UIDs and GIDs on the box:

id 1024 = uid=1024(admin) gid=100(users) groups=100(users),101(administrators),25(smmsp)
id 1025 = uid=1025(guest) gid=100(users) groups=100(users)
id 1026 = uid=1026(dtd)  gid=100(users) groups=100(users)

Enable asynchronous = async

Allow connection from non-privileged ports = insecure

Allow users to access mounted subfolders = crossmnt

One curious omission from the squash menu was “Map all users to guest”, which would map all remote users (root included) to guest (UID=1025). As you’ll see, I enabled this option by editing /etc/exports. (Warning: doing so might adversely affect the GUI interface.)

Note: the options no_wdelay, insecure_locks and sec=sys could not be changed from the GUI.
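Putting those pieces together, a share exported via the GUI with “Map root to admin” selected should end up with an /etc/exports line roughly like this (a sketch assembled from the options above; whether rw or ro appears depends on the privilege chosen in the GUI):

```
/volume1/dtd/kodi.test *(rw,async,no_wdelay,insecure,insecure_locks,sec=sys,crossmnt,root_squash,anonuid=1024,anongid=100)
```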

In NFS there are two main security methods for granting access to data:

  • by using an anonymous UID and GID (anonuid and anongid) in which all users (except root) have the same set of privileges; or
  • by using the user’s actual UID and GID to access the remote shares.

### Access using anonuid and anongid

For the test, I manually edited /etc/exports and added the following line:

/volume1/dtd/kodi.test *(all_squash,anonuid=1025,anongid=100)

effectively creating the missing menu option of “Map all users to guest”, and created this file:

Syno_2TB> ls -l /volume1/dtd/kodi.test/
total 4
-rw-r--r-- 1 dtd users 12 May 12 18:59 readme

under user dtd (UID 1026, GID 100).

I then tried to mount the share remotely:

[user@sm30 ~]$ sudo mount -t nfs Syno_2TB:/volume1/dtd/kodi.test /home/user/kodi/
Job for rpc-statd.service failed because the control process exited with error code. See "systemctl status rpc-statd.service" and "journalctl -xe" for details.

mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.

This was on Fedora 24 and rpc.statd wasn’t running or even installed, it being a minimal installation. So I added -o nolock, as instructed:

[user@sm30 ~]$ sudo mount -t nfs -o nolock Syno_2TB:/volume1/dtd/kodi.test /home/user/kodi/
[user@sm30 ~]$ ls -l kodi
total 4
-rw-r--r-- 1 1026 users 12 May 12 10:59 readme
[user@sm30 ~]$ cat kodi/readme
Hello world


Remove global access:

[user@sm30 kodi]$ ls -l
total 4
-rw-r----- 1 1026 users 12 May 12 10:59 readme
[user@sm30 kodi]$ cat readme
Hello world

Success! This is because, although we no longer have global read access to the file, anongid=100 still gives us group read access.

Remove group access:

[user@sm30 kodi]$ ls -l
total 4
-rw------- 1 1026 users 12 May 12 18:59 readme
[user@sm30 kodi]$ cat readme
cat: readme: Permission denied

With global and group access removed, the anonuid of 1025 does not have permission to read a file owned by UID 1026. So it’s working as expected. Success!
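The three permission states in that sequence can be reproduced locally without any NFS server involved; here is a minimal sketch of the same mode-bit changes applied to a scratch file (the path is illustrative, not the one on the Syno):

```shell
#!/bin/sh
# Local sketch of the three permission states tested above (no NFS involved).
f=$(mktemp)
echo "Hello world" > "$f"

chmod 644 "$f"   # rw-r--r--: world-readable, so any UID can read it
stat -c '%a' "$f"

chmod 640 "$f"   # rw-r-----: group-readable only; anongid=100 still grants access
stat -c '%a' "$f"

chmod 600 "$f"   # rw-------: owner only; anonuid=1025 != 1026, so access is denied
stat -c '%a' "$f"

rm -f "$f"
```

The printed modes are 644, 640 and 600, matching the three `ls -l` listings in the transcript.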

### Access using UID and GID

This time the /etc/exports file looked like this (yes, really only one option):

/volume1/dtd/kodi.test *(root_squash)

so that we are now relying on the UIDs and GIDs to match across different machines. On the remote machine, my username is (confusingly) “user”, its UID is 1000 and GID is 1000. For the test, I also created a new group on my remote system called “users”, with a GID = 100, to match that on the Synology server.

Global access

[user@sm30 kodi]$ ls -l
total 4
-rw-r--r-- 1 1026 users 12 May 12 18:59 readme
[user@sm30 kodi]$ cat readme
Hello world

No problems with global access, even though UIDs and GIDs don’t match across systems.

Remove global access

[user@sm30 kodi]$ ls -l readme
-rw-r----- 1 1026 users 12 May 12 18:59 readme
[user@sm30 kodi]$ cat readme
cat: readme: Permission denied

Permission denied, because I am not in group 100 (users):

[user@sm30 kodi]$ id
uid=1000(user) gid=1000(user) groups=1000(user)
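If you want to script that check on the client before expecting group access on the share, something along these lines should work (GID 100 being the Synology’s “users” group):

```shell
#!/bin/sh
# Check whether the current user belongs to GID 100 ("users" on the Syno).
# "id -G" prints the numeric IDs of all groups the user belongs to.
if id -G | tr ' ' '\n' | grep -qx 100; then
    echo "in group 100 (users)"
else
    echo "not in group 100 (users)"
fi
```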

Add me to group “users” (100) on the remote system

sudo usermod -a -G users user

then logoff and logon again, and re-mount the share:

[user@sm30 ~]$ sudo mount -t nfs -o nolock Syno_2TB:/volume1/dtd/kodi.test /home/user/kodi
[user@sm30 ~]$ id
uid=1000(user) gid=1000(user) groups=1000(user),100(users)
[user@sm30 ~]$ cd kodi
[user@sm30 kodi]$ ls -l readme
-rw-r----- 1 1026 users 12 May 12 18:59 readme
[user@sm30 kodi]$ cat readme
Hello world

This time I’ve been added to group users (GID=100) and can access the data as a result. This is how it should work.

For completeness, it’s worth mentioning that I would never have had access on the basis of UID, since my remote UID=1000 and the file owner’s UID=1026. So in any system where UIDs are incompatible, you still have the option to use the GID or make the file world readable.

That’s all I tested. Because of time constraints, I didn’t check out any of the root squash options or whether I could write to the data. I was also unable to test from a Kodi system, since I had to VPN to the box, but as far as vanilla NFS was concerned, it worked just fine. And kudos to Synology for supporting a 2010 model for so long: the hardware is now too limited to run the latest DSM (6.1), but apparently it still receives some security updates.

Edit: Clarified a few points.


Hey @dillthedog! I wonder if you’re still around these forums, but I’m a bit confused about your setup. I also have a Synology and am trying to get an NFS share mounted on an Ubuntu client, but I can’t seem to get the permissions to line up.

On my client device I’m still seeing the ‘nobody’ user and a giant string of numbers for ‘group’ when I ls -halt on the client.


drwxrwxrwx  9 nobody 4294967294 4.0K Aug  1 14:05 ghost

In my /etc/exports I have the following flags set:


1000,1000 is also both the uid and gid of the client user who is mounting the export.

Any thoughts on what I might be missing?

Not dillthedog (according to his user profile he posted something just two days ago) but maybe I can help.

Did you set up NFS permissions on your DiskStation according to the Kodi Wiki?

Does mounting work on a Raspberry Pi or a Vero? For mounting the NFS shares of my DS, I basically stuck to chillbo’s tutorial, created mount points in /mnt and mounted them in fstab. It looks like this:

# rootfs is not mounted in fstab as we do it via initramfs. Uncomment for remount (slower boot)
#/dev/vero-nand/root / ext4 defaults,noatime 0 0
IP-of-your-DiskStation:/volume1/Name-of-Shared-Folder /mnt/Name-of-Mount-Point nfs noauto,x-systemd.automount 0 0


It looks like it might be an NFSv4 issue. Try either adding nfsvers=3 to the client’s fstab options or switching the Syno to NFSv3, if possible.
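For an fstab-based mount like the one above, that would mean adding nfsvers=3 to the options field, e.g. (same placeholders as before):

```
IP-of-your-DiskStation:/volume1/Name-of-Shared-Folder /mnt/Name-of-Mount-Point nfs noauto,x-systemd.automount,nfsvers=3 0 0
```

Once the share is mounted, `nfsstat -m` (or `mount | grep nfs`) will show the negotiated options; look for vers=3 to confirm the client isn’t falling back to v4.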