Buffering issues

Hi, I ended up adding a single generic substitution:

    <pathsubstitution>
            <substitute>
                    <from>nfs://192.168.0.6/</from>
                    <to>/mnt/</to>
            </substitute>
    </pathsubstitution>
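For anyone replicating this: the block goes into Kodi's advancedsettings.xml in the userdata folder, wrapped in the usual root element (the NFS address and mount point are of course specific to my setup):

    <advancedsettings>
            <pathsubstitution>
                    <substitute>
                            <from>nfs://192.168.0.6/</from>
                            <to>/mnt/</to>
                    </substitute>
            </pathsubstitution>
    </advancedsettings>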

It has indeed sped up the initial analysis of the 44 GB, 620-file BDMV: the “play main title” selection now appears in a couple of seconds instead of tens of seconds, thanks! Now I have to go back and implement the other substitutions on the other clients. :slight_smile:
(I am not allowed more than 3 replies, so I am adding the updates as edits)

I am not one to leave optimization opportunities on the table, so I have read up on NFS options.
Worth documenting here for fellow users of the Hanewin NFS Server: the server-side default of max. 8k rsize and wsize (the read and write transfer sizes) is a limiting factor. Raising it to 32768 helped performance; the measurement follows below.
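(On the Linux client side the same values can be requested explicitly at mount time; a sketch with a made-up export path, since the kernel will negotiate down to the server's maximum anyway:)

$ sudo mount -t nfs -o rsize=32768,wsize=32768 192.168.0.6:/media /mnt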

$ ll Terminator.Dark.Fate.2019.2160p.mkv
-rw-r--r-- 1 osmc osmc 32073967827 Jan 17 22:54 Terminator.Dark.Fate.2019.2160p.mkv
$ time cp Terminator.Dark.Fate.2019.2160p.mkv /dev/null
real 5m46.194s

Meaning 32,073,967,827 bytes / 346.194 s = 92,647,382 bytes/s → ~92 MB/s.
I think that is close enough to the theoretical bandwidth, especially considering that I have two hops in my network between server and client (although the router hop may perform its switching in an ASIC).
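(Double-checking the arithmetic with a throwaway one-liner, where 346.194 s is the real time from above:)

$ awk 'BEGIN { printf "%.0f bytes/s\n", 32073967827 / 346.194 }'
92647382 bytes/s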

I did try to increase it to 1048576 based on Consistently interrupted playback - #11 by THEM, but that only seems to work on Linux.
Hanewin NFS Server maxes out at 65536, maybe because it does not support NFSv4.
NFSv2 is limited to 8k and NFSv3 to 64k according to 5. Optimizing NFS Performance (possibly outdated docs from the Linux 2.4/2.5 era).
Oracle says NFSv3 should already be unlimited: File Transfer Size Negotiation - Managing Network File Systems in Oracle® Solaris 11.2
IBM also says they max out at 512KB for both NFSv3 and NFSv4: https://www.ibm.com/support/pages/nfs-read-rsize-and-write-wsize-limit
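Since the client silently negotiates down to whatever the server offers, the quickest way to see the values actually in effect on the Vero is nfsstat (output trimmed, and the paths here are just examples; the exact format varies by distro):

$ nfsstat -m
/mnt from 192.168.0.6:/media
 Flags: rw,vers=3,rsize=32768,wsize=32768,...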
(another update by edit, since I cannot post another reply)

Increasing the Hanewin NFS Server rsize/wsize to 64k helped a bit further: 100,089,776.5 bytes/s! Note: this file is on another external 5400 RPM SATA drive, connected via USB 3 instead of TB3. Bottom line: USB 3 external drives can still almost saturate Gigabit Ethernet!

$ time cp wonderwoman-uhd-hyperx.mkv /dev/null
real 9m39.832s
user 0m0.940s
sys 1m43.010s
$ ll wonderwoman-uhd-hyperx.mkv
-rw-r--r-- 1 osmc osmc 58035255291 Mar 3 2018 wonderwoman-uhd-hyperx.mkv

(another update by edit, since I cannot post another reply)

So this is probably the best I can optimize out of my current setup:

  1. Switched back to a file on the 5400 RPM SATA WD drive in the faster OWC Thunderbay 4 Thunderbolt 3 external enclosure.
  2. Stopped other IO intensive services on the server.
  3. Added mount options to the autofs map file for NFS (-fstype=nfs,noatime,nolock,local_lock=all,async); see the sample map entry below.
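For completeness, a sketch of the resulting line in the autofs map file (the mount key and export path are placeholders for my real ones):

media  -fstype=nfs,noatime,nolock,local_lock=all,async  192.168.0.6:/media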

$ ll kin.2160p.remux-no1.mkv
-rw-r--r-- 1 osmc osmc 55011086067 Jan 12 02:06 kin.2160p.remux-no1.mkv
-rw-r--r-- 1 osmc osmc 17308 Jan 12 02:06 kin.2160p.remux-no1.nfo
$ time cp kin.2160p.remux-no1.mkv /dev/null
real 7m49.472s

Results: a net 117,176,500.55 bytes/s file read speed, very close to the theoretical 118,660,598 bytes/s of TCP/IP on Gigabit Ethernet at 1500-byte MTU (What is the actual maximum throughput on Gigabit Ethernet? – Gigabit Wireless). Using jumbo frames for a theoretical ~123 MB/s is not currently an option, as stated by @sam_nazarko in Vero4K+ - Jumbo frames - #6 by nabsltd.
…and in between sit the overheads of NFSv3, NTFS, CPU, TB3 etc. I hope this is a good reference point for the network capabilities of the Vero 4K+. Thanks for getting me started, @darwindesign! :slight_smile:
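In case anyone wonders where the 118,660,598 bytes/s figure comes from: each 1538-byte frame on the wire (1500 MTU + 38 bytes of Ethernet framing: preamble, header, FCS and inter-frame gap) carries 1460 bytes of TCP payload (1500 minus 40 bytes of IP/TCP headers), so:

$ awk 'BEGIN { printf "%.0f bytes/s\n", 125000000 * 1460 / 1538 }'
118660598 bytes/s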
