We’ve had a few issues recently on the Help and Support board with people having difficulties accessing their Synology DiskStations from OSMC via NFS. Not having a Synology box myself, I’ve been flying blind when helping out, with no real idea whether Synology’s implementation of NFS really is quirky or whether it’s just the usual configuration issues getting in the way.
So I enlisted the help of a friend who has a Synology box, albeit one not on the latest version of the DSM software, and ran a few basic tests to see how bad (or good) Synology’s NFS server implementation really is. AFAICT, the version of DSM I was testing (5.2) does support access control lists (ACLs), but none of the data on his server showed the tell-tale plus sign that indicates an ACL is in effect.
So on to the tests. The box is a Synology DiskStation DS210j running DSM 5.2-5967 Update 2. It’s a very modest piece of kit by today’s standards, but I was more interested in testing its NFS server implementation than its performance. The “executive summary” is that it worked exactly as it should have in every test I performed, barring one small issue that was almost certainly down to the minimal configuration of the VM I used to access the box. I didn’t test everything, but what I did test worked just fine over NFS v3. I did not enable NFS v4, since it brings all sorts of unnecessary complications.
Here’s a summary of my notes:
Control Panel -> File Services
Enable NFS (v4 support not enabled)
Advanced Settings: Apply default Unix permissions.
Shared Folders -> NFS Permissions
These are the options available in the menu and the corresponding line that is created in /etc/exports:
Squash: No Mapping = no_root_squash
(rw,sync,no_wdelay,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)
Squash: Map root to admin = root_squash + anonuid=1024
(rw,sync,no_wdelay,root_squash,insecure_locks,sec=sys,anonuid=1024,anongid=100)
Squash: Map root to guest = root_squash + anonuid=1025
(rw,sync,no_wdelay,root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)
Squash: Map all users to admin = all_squash + anonuid=1024
(rw,sync,no_wdelay,all_squash,insecure_locks,sec=sys,anonuid=1024,anongid=100)
For reference, these are the relevant UIDs and GIDs on the box:
id 1024 = uid=1024(admin) gid=100(users) groups=100(users),101(administrators),25(smmsp)
id 1025 = uid=1025(guest) gid=100(users) groups=100(users)
id 1026 = uid=1026(dtd) gid=100(users) groups=100(users)
Enable asynchronous = async
(rw,async,no_wdelay,all_squash,insecure_locks,sec=sys,anonuid=1024,anongid=100)
Allow connection from non-privileged ports = insecure
(rw,async,no_wdelay,insecure,all_squash,insecure_locks,sec=sys,anonuid=1024,anongid=100)
Allow users to access mounted subfolders = crossmnt
(rw,async,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=1024,anongid=100)
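If you want to confirm what the box is actually exporting after changing these options, you can either look at /etc/exports over SSH (exportfs -v there will print the active options too), or query the export list from any Linux client using showmount from nfs-utils. A quick sanity check, assuming the server address of 192.168.2.20 used later in this post:
showmount -e 192.168.2.20
Note that showmount only lists the shares and the clients allowed to mount them, not the option string.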
One curious omission from the squash menu was “Map all users to guest”, which would map every remote user, root included, to the guest account (UID=1025). As you’ll see, I enabled this option by editing /etc/exports directly. (Warning: doing so might adversely affect the GUI interface.)
Note: the options no_wdelay, insecure_locks and sec=sys could not be changed from the GUI.
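Once a share is mounted, you can also see the client’s view of the negotiated options (protocol version, sec= flavour and so on, though not server-only settings such as no_wdelay) with either of these, shown here as illustrations rather than output from my test session:
nfsstat -m
grep nfs /proc/mounts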
In NFS there are two main security methods for granting access to data:
- by using an anonymous UID and GID (anonuid and anongid) in which all users (except root) have the same set of privileges; or
- by using the user’s actual UID and GID to access the remote shares.
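In /etc/exports terms, these two approaches boil down to the two export lines used in the tests below: the first squashes every user onto a single anonymous identity, the second trusts whatever UID and GID the client presents.
/volume1/dtd/kodi.test *(all_squash,anonuid=1025,anongid=100)
/volume1/dtd/kodi.test *(root_squash)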
### Access using anonuid and anongid
For the test, I manually edited /etc/exports and added the following line:
/volume1/dtd/kodi.test *(all_squash,anonuid=1025,anongid=100)
effectively creating the missing “Map all users to guest” menu option, and then created this file:
Syno_2TB> ls -l /volume1/dtd/kodi.test/
total 4
-rw-r--r-- 1 dtd users 12 May 12 18:59 readme
under user dtd (UID 1026, GID 100).
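One thing to bear in mind when editing /etc/exports by hand is that the change only takes effect once the export table is re-read. On a stock Linux NFS server that would be:
sudo exportfs -ra
I can’t vouch for the exact equivalent on DSM, but re-saving the share’s NFS permissions from the GUI should also force a re-export (at the cost of overwriting your manual edit).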
I then tried to mount the share remotely:
[user@sm30 ~]$ sudo mount -t nfs 192.168.2.20:/volume1/dtd/kodi.test /home/user/kodi/
Job for rpc-statd.service failed because the control process exited with error code. See "systemctl status rpc-statd.service" and "journalctl -xe" for details.
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
This was on Fedora 24 and rpc.statd wasn’t running, or even installed, it being a minimal installation. So I added -o nolock, as instructed:
[user@sm30 ~]$ sudo mount -t nfs -o nolock 192.168.2.20:/volume1/dtd/kodi.test /home/user/kodi/
[user@sm30 ~]$ ls -l kodi
total 4
-rw-r--r-- 1 1026 users 12 May 12 10:59 readme
[user@sm30 ~]$ cat kodi/readme
Hello world
Success!
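As an aside, -o nolock is perfectly adequate for a media share, but if you do want proper NFS locking the tidier fix on the Fedora side is to start rpc.statd (the unit ships with nfs-utils) rather than disabling locks, something along the lines of:
sudo systemctl enable --now rpc-statd
I didn’t bother with that here, since locking isn’t needed for these read tests.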
Remove global access:
[user@sm30 kodi]$ ls -l
total 4
-rw-r----- 1 1026 users 12 May 12 10:59 readme
[user@sm30 kodi]$ cat readme
Hello world
Success! This is because, although we no longer have global read access to the file, anongid=100 still gives us group read access.
Remove group access:
[user@sm30 kodi]$ ls -l
total 4
-rw------- 1 1026 users 12 May 12 18:59 readme
[user@sm30 kodi]$ cat readme
cat: readme: Permission denied
With global and group access removed, the anonuid of 1025 does not have permission to read a file owned by UID 1026. So it’s working as expected. Success!
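For reference, the three permission states used above correspond to modes 644, 640 and 600, so the equivalent chmod commands on the DiskStation would be:
chmod 644 /volume1/dtd/kodi.test/readme   # -rw-r--r--  world readable
chmod 640 /volume1/dtd/kodi.test/readme   # -rw-r-----  group read only
chmod 600 /volume1/dtd/kodi.test/readme   # -rw-------  owner only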
### Access using UID and GID
This time the /etc/exports file looked like this (yes, really only one option):
/volume1/dtd/kodi.test *(root_squash)
so that we are now relying on the UIDs and GIDs matching across the different machines. On the remote machine my username is (confusingly) “user”, with UID 1000 and GID 1000. For the test, I also created a new group on my remote system called “users”, with a GID of 100, to match the one on the Synology server.
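Creating the matching group on the client is a one-liner (assuming GID 100 isn’t already taken; some distributions ship a “users” group with GID 100 out of the box, in which case this step isn’t needed):
sudo groupadd -g 100 users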
Global access
[user@sm30 kodi]$ ls -l
total 4
-rw-r--r-- 1 1026 users 12 May 12 18:59 readme
[user@sm30 kodi]$ cat readme
Hello world
No problems with global access, even though UIDs and GIDs don’t match across systems.
Remove global access
[user@sm30 kodi]$ ls -l readme
-rw-r----- 1 1026 users 12 May 12 18:59 readme
[user@sm30 kodi]$ cat readme
cat: readme: Permission denied
Permission denied, because I am not in group 100 (users):
[user@sm30 kodi]$ id
uid=1000(user) gid=1000(user) groups=1000(user)
I then added myself to group “users” (GID 100) on the remote system:
sudo usermod -a -G users user
After logging off and back on again, I re-mounted the share:
[user@sm30 ~]$ sudo mount -t nfs -o nolock 192.168.2.20:/volume1/dtd/kodi.test /home/user/kodi
[user@sm30 ~]$ id
uid=1000(user) gid=1000(user) groups=1000(user),100(users)
[user@sm30 ~]$ cd kodi
[user@sm30 kodi]$ ls -l readme
-rw-r----- 1 1026 users 12 May 12 18:59 readme
[user@sm30 kodi]$ cat readme
Hello world
This time I’ve been added to group users (GID=100) and can access the data as a result. This is how it should work.
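Incidentally, the logoff/logon is only there so that the session picks up the new group membership; running newgrp users in the existing shell, or starting a fresh login shell with su - user, achieves the same thing without logging out.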
For completeness, it’s worth mentioning that I would never have had access on the basis of UID, since my remote UID=1000 and the file owner’s UID=1026. So in any system where UIDs are incompatible, you still have the option to use the GID or make the file world readable.
That’s all I tested. Because of time constraints, I didn’t check out any of the root squash options, or whether I could write to the data. I was also unable to test from a Kodi system, since I had to VPN in to the box, but as far as vanilla NFS was concerned, it worked just fine. And kudos to Synology for supporting a 2010 model for so long: the latest version of DSM is 6.1, and although the hardware is now too limited to run it, the box apparently still receives some security updates.
Edit: Clarified a few points.