Problem sharing NTFS disc with NFS

I’m at my wits’ end. I simply can’t get access to the hard drive connected to the Vero 4K. The steps I’ve taken:
sudo apt-get update
sudo apt-get install nfs-kernel-server
sudo mkdir /mnt/mySharedDrive
sudo blkid
→ gives me this: /dev/sda1: LABEL="HakunaMatata" UUID="14D838B1D83892CA" TYPE="ntfs" PARTLABEL="My Book" PARTUUID="31493f65-3977-4f94-bac1-04d7a7e5c028"
sudo nano /etc/fstab
UUID="31493f65-3977-4f94-bac1-04d7a7e5c028" /mnt/mySharedDrive ntfs defaults,noatime,auto,nofail,x-systemd.mount-timeout=30 0 0
sudo nano /etc/exports
sudo /etc/init.d/nfs-kernel-server restart
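For reference, a minimal /etc/exports entry might look like this (a sketch only; the 192.168.1.0/24 subnet is an assumption, substitute your own LAN range):

```
# /etc/exports (sketch; replace 192.168.1.0/24 with your LAN subnet)
/mnt/mySharedDrive 192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing, `sudo exportfs -ra` re-reads the file without restarting the server.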

In Windows 10 Enterprise I’ve activated the NFS client service and tried to access the drive via Explorer and \(ip.of.Vero4k)\, but I get a "can’t access" error.
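If Explorer keeps failing, the Windows NFS client also ships a command-line mount tool that can be worth trying; a sketch (the drive letter Z: and the `anon` option are illustrative, and `ip.of.Vero4k` stands in for your actual IP):

```
mount -o anon \\ip.of.Vero4k\mnt\mySharedDrive Z:
```

`anon` connects as the anonymous user, which sidesteps uid mapping for a first test.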

I’ve also tried with
in /etc/exports, but I keep getting the same error.

Please, help :frowning:

try \\<ip>\mnt\mySharedDrive
or sudo systemctl restart nfs-kernel-server
or exportfs -a on the server
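To double-check on the Vero side that the share is actually being offered, something like this (run on the server) can help:

```
sudo exportfs -v          # what the server thinks it exports, with options
showmount -e localhost    # what a client will be offered
```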

Of course I entered a double backslash; I mistyped it in the first post and forgot a \.
I don’t know if it was the “sudo systemctl restart nfs-kernel-server” or the regedit “allow insecure connections” in Windows 10, but now I have at least access! :slight_smile:
Problem is: the folder is empty, I don’t see any files … is there still a umount/mount command missing somewhere?

like sudo mount -a?

Folder is still empty in Windows, even after sudo mount -a … :thinking: I can access the drive within the Vero 4K …
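A few checks on the Vero might narrow this down, e.g. whether the drive is really mounted at the exported path and whether the export was created before the drive mounted (an NFS export of an empty directory shows an empty folder to clients):

```
mount | grep mySharedDrive     # is anything actually mounted at the export point?
ls -la /mnt/mySharedDrive      # does the server itself see files there?
sudo exportfs -ra              # re-export, in case the drive mounted after the export
```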

try re-booting vero.

Nope, doesn’t help… It seems to me the hard drive is not tied to the folder. But it’s right to take the PARTUUID and not the UUID, isn’t it?

Probably not. Mine’s mounted with the UUID.
Or, according to "What’s the difference between UUID and PARTUUID?" on the Raspberry Pi Stack Exchange, use PARTUUID= in fstab.
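Using the values from your own blkid output, the two consistent fstab forms would be the following; pick one, but don’t mix the keys (your original line had UUID= holding the PARTUUID value, and fstab values are not quoted):

```
# Filesystem UUID (the UUID= field from blkid)
UUID=14D838B1D83892CA /mnt/mySharedDrive ntfs defaults,noatime,auto,nofail,x-systemd.mount-timeout=30 0 0

# ...or the partition UUID (the PARTUUID= field from blkid)
PARTUUID=31493f65-3977-4f94-bac1-04d7a7e5c028 /mnt/mySharedDrive ntfs defaults,noatime,auto,nofail,x-systemd.mount-timeout=30 0 0
```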
Another thought: since your drive is ntfs there could be a permissions issue. My ntfs partition is mounted (automatically) with (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks)
It might be safer to use Samba for NTFS discs.

Phew, that’s complicated… Too much for today; I’ve been trying for four hours and it’s giving me a headache… Do you think exFAT would be a better format? Maybe I’ll give it another try tomorrow. Samba is waaay too slow for me, I need the speed of NFS… Thanks anyway for your help! (Y)

ext4 would be a better format. If it’s speed you’re worried about, something tells me what you are attempting is likely to be as slow as it gets. NTFS on Linux is not that fast, and I don’t suppose Micro$oft spent much time optimising their NFS interface.

+1 for using any Linux file system over NTFS or FAT-whatever whenever connected to any Unix system. Non-Linux file systems use FUSE drivers which have always been much slower than kernel-based file systems.

I think a mapping file is required to convert Windows and Unix userid/groupid settings. It has been many years since I touched NFS on Windows, but keeping that mapping file up to date was a huge hassle in our environment.
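On modern Windows the mapping can alternatively be done with registry values that fix the anonymous uid/gid the NFS client presents. A sketch, run from an elevated prompt; the registry path is from memory and 1000:1000 assumes the osmc user, so verify before relying on it:

```
REM Sketch: map the Windows NFS client's anonymous user to osmc's uid/gid (1000:1000)
reg add HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default /v AnonymousUid /t REG_DWORD /d 1000
reg add HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default /v AnonymousGid /t REG_DWORD /d 1000
```

A reboot (or restarting the NFS client service) is typically needed before the new values take effect.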

If Windows is involved, I’d use CIFS and deal with the performance issues on that side - or buy a commercial NFS client implementation for Windows.

But you are on-the-ground with your hardware and settings, so only you can decide the best course of action.

I’m looking into ways to improve performance (particularly for NTFS, as this keeps cropping up). A suitable compromise may be to devolve NTFS access to the kernel module, but this would be read-only. Perhaps we could introduce a settings option that improves performance at the loss of RW capability.

Sorry for necro-posting. Just remembered:
Is the NTFS mount including the big_writes option?
Options that I use (besides gid/uid):

The big_writes option has brought NTFS performance up much closer to the theoretical disk write capabilities. Also, I use autofs and avoid using gvfs anytime the storage is connected for more than a few minutes. gvfs performance is terrible.
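For a quick manual test of the effect, the partition can be mounted via ntfs-3g with big_writes directly; a sketch using the device and mount point from earlier in the thread (uid/gid 1000 assumes the osmc user):

```
# Sketch: mount the NTFS partition through ntfs-3g with big_writes enabled
sudo umount /mnt/mySharedDrive 2>/dev/null
sudo mount -t ntfs-3g -o big_writes,noatime,uid=1000,gid=1000 /dev/sda1 /mnt/mySharedDrive
```

big_writes lets FUSE pass writes larger than 4 KB through in one go, which is where most of the throughput gain comes from.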

Currently, no.
Can you paste the full output of mount and cat /proc/mounts?

I wouldn’t be surprised at all if there’s low hanging fruit to improve NTFS-3G performance. A quick Google also gives some suggestions.

We’d need some testers however :slight_smile:


Those are the options. I seldom have NTFS disks connected and never actually connect them to any Raspberry Pi/SBC devices, only amd64/Intel stuff.
Here’s the /etc/auto.misc full line.
250G -nodev,nosuid,noatime,async,big_writes,timeout=2,fstype=ntfs,uid=1000,gid=1000 LABEL=250G

The /etc/auto.master file (line for auto.misc):
/misc /etc/auto.misc --timeout=60 --ghost

No NTFS devices currently connected, so showing the mount info isn’t useful. It is basically used as a sneaker-net between some video equipment that doesn’t have any networking and only supports NTFS or FAT32 in the hardware.

For OSMC, you’d just default the uid/gid to those of the osmc user: uid=1000(osmc), gid=1000(osmc), so 1000:1000 appears to work. I don’t know of any downsides to using the big_writes option. Actually, maybe the NTFS FUSE driver team should just make it the default? In the days of 40MB HDDs it may have made sense not to, but these days having 128KB buffers even on Pi hardware is a trivial amount of memory for a 5x speed improvement. My RAID stripes use 256KB (after some hdparm tuning), so I can’t see how it would be bad for a single disk.

Had to visit the place with the NTFS drive unexpectedly.
/dev/sdb1 on /misc/250G type fuseblk (rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sdb1 /misc/250G fuseblk rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0