Consistently interrupted playback

Sorry, I still haven’t had the time today to sit down and do some proper tests - hopefully very shortly …

Meanwhile - NFS …
@DBMandrake, I’m curious - how did you arrive at that combination of options? I’m wondering whether it’s a carefully considered selection based on extensive NFS protocol-level knowledge, or one put together from Internet “How to tune your NFS!” articles (not that either source is a problem).

“You can tune a piano but you can’t tuna fish”.

[[Bonus points if you’re old enough, or GG (Generation Google) enough, to know a) that it’s the title of an album by REO Speedwagon and, more importantly, b) why I apparently randomly quoted it here :slightly_smiling:]]

I ask (about the NFS options, not pianos or tuna …) because I see these sorts of options crop up all over the place, and they’re almost invariably re-hashes of older NFS articles going back to the days when stone axes were the latest thing in super-weapons.
So I’m genuinely interested - you may very well know more about what works under these circumstances than I, so here are some thoughts, and any comments you might have will doubtless be very interesting.

Your options:
noatime,noauto,x-systemd.automount,async,nfsvers=3,rsize=8192,wsize=8192,nolock,nofail,local_lock=all,soft,retrans=2,tcp

Read/write sizes:
You’re specifying read and write buffer sizes of 8K.
In the dawn of pre-history, NFS hurled 512-byte chunks of data around. That fitted into a UDP packet, it was the same size as a typical sector, and people didn’t (usually …) transfer massive files over NFS, so it was a relatively efficient size.
If you needed to give NFS a hint about streaming data, you might try and tell the server to chuck out a bit more at once - rsize and wsize affect the buffer size negotiation within the NFS protocol (more precisely, they set an upper limit on the transfer sizes, but …).
So “increase your ‘rsize’ and ‘wsize’!” used to be about the first words in these tuning guides.
However things have, mercifully, moved on a bit. “Advanced Format” drives have (at least) a 4K sector size, not 512 bytes; kernels have vastly more buffer space; … So you’ll probably never see an NFS server negotiating anything less than 32K these days.
That’s all very well in theory - let’s see what’s happening in practice:

osmc@Arthur:~$ nfsstat -m
/home/Multimedia from gateway.firstgrade.co.uk:/home/Multimedia
Flags: rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=193.119.171.63,local_lock=none,addr=193.119.171.123

osmc@Arthur:~$

I’m sure you recognise that magic number 1048576 :slightly_smiling:
So the server has offered 1MB; the kernel client on the OSMC box has said “Yup, I’ll have that please …”. You can see the effect that specifying an 8K buffer size is going to have (to save the mental arithmetic: 1048576 / 8192 = 128 transfers for each one that would otherwise be made).

However that’s from my NFS server - YMMV.
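If anybody fancies experimenting, here’s a minimal sketch of what I’d try first: leave rsize and wsize out altogether and let the two ends negotiate (the server name and paths below are made-up placeholders, not anything from this thread):

# /etc/fstab - hypothetical entry; with no rsize/wsize the client and
# server negotiate the largest transfer size they both support
myserver:/export/Multimedia  /mnt/Multimedia  nfs  noatime,noauto,x-systemd.automount,nofail,nfsvers=4  0  0

Then nfsstat -m, as above, shows you what was actually agreed.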

sync vs. async:
Didn’t even think that was available as an option any more :slightly_smiling: Async is what you want (and what you’re using), but it’s probably not going to make any difference, as you’re not (I assume?) going to be writing much over the share.
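For completeness - and this is my assumption about where it actually matters, rather than anything specific to your setup - the client defaults to async anyway, and the sync/async that tends to make a real difference is the one on the server’s export. A hypothetical /etc/exports line for a read-mostly media share (path and network are placeholders):

# /etc/exports on the server - 'async' lets the server acknowledge writes
# before they reach the disc; harmless for a share that's mostly read from
/export/Multimedia  192.168.1.0/24(ro,async,no_subtree_check)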

locking vs. no locking vs. local locking vs. …:
Yup, turning off the locking and faking it locally - no problem there. Again, though, I’m interested in why you think it’s necessary; if I look on our server (remembering that “Arthur”, the OSMC box, has negotiated NFSv4):

% nfsstat -s

lock
0 0%

%

So it’s never actually requested a lock. Ever.

Good! :slightly_smiling:

(same thing, incidentally, if I run nfsstat -c on the OSMC box - zero locking activity).
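If anybody else wants to check their own box, something like this does it - standard nfsstat, with a grep just to trim the output down to the locking counters (assumes a v4 mount, as above):

# on the NFS client - per-operation statistics; zeros against the lock
# counters mean the client has never asked the server for a lock
nfsstat -c | grep -i -A1 lock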

OK, now to the interesting ones …

soft and tcp:
That’s an interesting combination.
I’d have thought you’d be using hard mounts, as they behave closest to a disc. Soft mounts are really a throwback to the early days of NFS and Sun Microsystems’ discless workstations - they’re very much the NFS equivalent of UDP (“Yeah, it might get there …”) and are effectively completely stateless.
To that, you’re adding a degree of persistent state, by way of a TCP connection.
Again, I’m sure you’re going to have good reasons for that combination - put me out of my misery and share them, would ya? :slightly_smiling:
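For reference, the sort of entry I’d have expected to see instead - again just a sketch, with made-up server and paths:

# hard + tcp: the client retries indefinitely over a reliable connection,
# so a stalled server looks like a slow disc rather than handing the
# application an I/O error part-way through playback
myserver:/export/Multimedia  /mnt/Multimedia  nfs  noatime,noauto,x-systemd.automount,nofail,hard,tcp,nfsvers=4  0  0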

Now all that is interesting, and a nice technical discussion (which is why I’ve gone into a bit more detail than I’d otherwise need to - in case anybody else is interested in some of the details).
Unfortunately, none of that helps at all with libNFS.