That is just a descriptive name for what is displayed. It's up to you what you enter there: you could use /mnt/synoserver/share if you like that, or you could write Syno Share.
Thanks @sam_nazarko and @fzinken, I watched a 4K UHD film last night and had no issues. Looks like the OS mount works. I still don't know why it was dropping out before when I had it mounted via fstab, but I prefer the autofs solution as I have a few servers I could map, but they're not always on 24/7. I've bookmarked this thread and the autofs how-to for future reference.
Am I right to guess that mounting cannot be made to work if the library is shared between multiple clients (MariaDB, Hanewin NFS server)?
That is not an issue. Hanewin doesn't factor in, and with the MySQL DB you can either switch all clients and redo your library (assuming that is an option), or alternatively you can just do a path substitution on any machine you want to set up that way, which works with your existing setup. If you check out the how-to I linked a couple of posts up, it should make clear how this works.
I don’t see why you think that is easy:
- Let's say OSMC puts it into the DB as /mnt/moviesH/…
- How will that match up with another Kodi on Windows, for example, mapped to H:\movies…?
How do you suggest getting the same content on the same path on different clients, as they currently have with Kodi speaking NFS, e.g. nfs://192.168.1.1/moviesH/…?
https://kodi.wiki/view/Path_substitution
If you want to use a shared MySQL database for several Kodi clients that use different methods to access the shared media (e.g. a mix of SMB, NFS and/or fstab-based access), you can use Kodi's path substitution function to align the access. But watch out: it is important that you only add new files to the database from a single client; otherwise you would need different path substitutions on each client.
Please see [HowTo] Repairing File Paths with Path Substitution for details.
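For illustration only, a rough sketch of what such a per-client substitution could look like in a client's advancedsettings.xml (the server IP, share name and mount point below are placeholders, not values from this thread):

<!-- advancedsettings.xml on a Linux client that mounts the share at /mnt/movies (placeholder paths) -->
<advancedsettings>
  <pathsubstitution>
    <substitute>
      <from>nfs://192.168.0.6/movies/</from>
      <to>/mnt/movies/</to>
    </substitute>
  </pathsubstitution>
</advancedsettings>

A Windows client reaching the same share as a mapped drive would keep the same <from> but use its own <to> (e.g. H:\movies\), which is exactly why each client can end up needing a different substitution.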
Did you read the howto I wrote? This is all covered. Path substitution is a Kodi thing and it works on all versions of Kodi. As such you have the option to either maintain a DB with a universal path that works across all clients and then add a path sub on the Vero to redirect to a system mount, or else do whatever and then fix it on every machine with a path sub. The only really important part is that every client should be running the exact same Kodi sources.
As long as all clients use the same sources in Kodi it can be updated from any machine. Kodi stores absolute paths based on what is in sources.xml and the path sub never factors in.
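To make the "same sources" point concrete, this is roughly the kind of entry that would have to be identical in every client's sources.xml (the name and path here are placeholders):

<video>
  <source>
    <name>Movies</name>
    <path pathversion="1">nfs://192.168.0.6/movies/</path>
  </source>
</video>

Because the database stores paths based on this source, every client writes the same nfs:// paths into the DB, and a path sub only changes where a particular client actually reads the file from.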
Sounds hairy, but I might try it; then the client adding the files had better be OSMC, running a library update maybe every hour…
It really isn't. You have an existing install that works. If you make a system mount and then add a path sub, and change nothing else, it just works as you're already using it, just with a bit more performance.
That means as many mounts and substitutions as there are clients, so it doesn't just work as before. But I've got to admit, UHD movies do buffer and lag.
Is the system still going to add new movies in the original path format, so the substitutions keep working going forward?
I cover this in the guide. You don't change your source, and in turn the DB does not change how the file paths are stored. All you are doing with this kind of path sub is redirecting where Kodi opens the file, nothing more. Literally everything else is the same. You're essentially telling Kodi that when it goes to read from path x, it should read from path y instead. If your sources.xml is pointing to path x, then the database will store path x. If you remove the path sub, then that machine goes back to reading from path x.
Thanks, will try!
Got to the point of the autofs mount working and tested the performance: cp to /dev/null runs at 65-85 MBytes/s, which should be enough for everyone. For the record, my home network at the time of testing:
- Client: OSMC Vero 4K+ (AMLogic S905D quad-core 1.6 GHz 64-bit ARMv8 aarch64 SoC, 2 GB DDR3 memory, Realtek RTL8211F Gigabit Ethernet)
- Linux software bridge fitlet mini-PC (AMD A10 Micro-6700T APU, 8GB RAM, 2xIntel I211 Gigabit Network Connection)
- NetGear R7000 router (Broadcom BCM4709A0 @1 GHz, 2xARM Cortex A9, 256 MiB RAM)
- NUC 7i7 Windows 10 “server” Hanewin NFSd, single WD120EFAX 5400 RPM SATA disk in a Thunderbolt3 external enclosure
Without any mount-option or similar tuning
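For reference, the autofs configuration behind this is roughly the standard indirect-map layout from the how-to (a sketch only; the map file name and export path are placeholders, the linked how-to has the authoritative steps):

# /etc/auto.master (placeholder map file name)
/mnt /etc/auto.nfs.shares --timeout 60

# /etc/auto.nfs.shares (placeholder export on the Hanewin server)
moviesH -fstype=nfs 192.168.0.6:/moviesH

With that in place the export is mounted on demand at /mnt/moviesH the first time anything touches it.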
Hi, I ended up adding a single generic substitution:
<pathsubstitution>
  <substitute>
    <from>nfs://192.168.0.6/</from>
    <to>/mnt/</to>
  </substitute>
</pathsubstitution>
It has indeed sped up the initial analysis of the 44 GB, 620-file BDMV, from tens of seconds until the "play main title" selection appears down to a couple of seconds, thanks! Now I've got to go back and implement the other substitutions on the other clients.
(I am not allowed more than 3 replies, so adding the updates in an edit)
I am not a person to leave optimization opportunities on the table, so I have read up on NFS options:
Worth documenting here for fellow users of the Hanewin NFS Server: the server-side default of max. 8k rsize and wsize (read and write window sizes) is a limiting factor. After raising it to 32768 it did help performance:
$ ll Terminator.Dark.Fate.2019.2160p.mkv
-rw-r--r-- 1 osmc osmc 32073967827 Jan 17 22:54 Terminator.Dark.Fate.2019.2160p.mkv
$ time cp Terminator.Dark.Fate.2019.2160p.mkv /dev/null
real 5m46.194s
Meaning 32,073,967,827 bytes / 346.194 s = 92,647,382.18 bytes/s, i.e. ~92 MBytes/s.
I think it is close enough to the theoretical bandwidth, especially considering that I have two hops in my network between server and client (although the router hop may perform the switching in an ASIC).
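If anyone wants to double-check what their own client actually negotiated, the agreed values show up in the mount options of the active NFS mount, for example (nfsstat comes with the nfs-common package; the grep just filters /proc/mounts):

$ grep nfs /proc/mounts
$ nfsstat -m

Both list the mounted NFS shares together with rsize=... and wsize=..., so you can confirm that a server-side change actually took effect.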
I did try to increase it to 1048576 based on Consistently interrupted playback - #11 by THEM , but that only seems to work for Linux.
Hanewin NFS Server maxes out at 65536, maybe because it does not support NFSv4.
NFSv2 is limited to 8k and NFSv3 to 64k according to 5. Optimizing NFS Performance (though those may be old docs from the Linux 2.4/2.5 times).
Oracle says NFSv3 should already be unlimited: File Transfer Size Negotiation - Managing Network File Systems in Oracle® Solaris 11.2
IBM also says they max out at 512 KB on both NFSv3 and NFSv4: https://www.ibm.com/support/pages/nfs-read-rsize-and-write-wsize-limit
(another update by edit for not being able to post a reply)
Increasing the Hanewin NFS Server rsize/wsize to 64k helped a bit further: 100,089,776.5 bytes/s! Note: this file is on another external 5400 RPM SATA drive that is connected by USB 3 instead of TB3. Bottom line: USB 3 external drives can still almost saturate Gigabit Ethernet!
$ time cp wonderwoman-uhd-hyperx.mkv /dev/null
real 9m39.832s
user 0m0.940s
sys 1m43.010s
$ ll wonderwoman-uhd-hyperx.mkv
-rw-r--r-- 1 osmc osmc 58035255291 Mar 3 2018 wonderwoman-uhd-hyperx.mkv
(another update by edit for not being able to post a reply)
So this is probably the best I can optimize out of my current setup:
- Switched back to a file on the 5400 RPM SATA WD drive in the faster OWC Thunderbay 4 Thunderbolt 3 external enclosure.
- Stopped other IO intensive services on the server.
- Added mount options to the autofs file for NFS (-fstype=nfs,noatime,nolock,local_lock=all,async)
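With those options, the map line from the earlier autofs sketch would become something like this (still using the placeholder names from before):

moviesH -fstype=nfs,noatime,nolock,local_lock=all,async 192.168.0.6:/moviesH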
$ ll kin.2160p.remux-no1.mkv
-rw-r--r-- 1 osmc osmc 55011086067 Jan 12 02:06 kin.2160p.remux-no1.mkv
-rw-r--r-- 1 osmc osmc 17308 Jan 12 02:06 kin.2160p.remux-no1.nfo
$ time cp kin.2160p.remux-no1.mkv /dev/null
real 7m49.472s
Results: 55,011,086,067 bytes / 469.472 s = 117,176,500.55 bytes/s net file read speed, so very close to the theoretical 118,660,598 bytes/s of TCP/IP on Gigabit Ethernet at a 1500-byte MTU (What is the actual maximum throughput on Gigabit Ethernet? – Gigabit Wireless); using jumbo frames for a theoretical ~123 MBytes/s is not currently an option, as stated by @sam_nazarko in Vero4K+ - Jumbo frames - #6 by nabsltd.
…and in between are the overheads for NFSv3, NTFS, CPU, TB3 etc. I hope this is a good reference point for the network capabilities of the Vero 4K+. Thanks for getting me started, @darwindesign!
fixed
Did you find any actual difference in use, though, by going through this last bit of optimization? The UHD Blu-ray spec maxes out at 16 MB/s, so going from 65-85 MB/s to something faster seems like it would likely have little impact for playback purposes.
You are right in that it is no longer a buffering issue level of optimization.
- But I would assume it makes seeking around and the initial opening of files less inconvenient (snappier).
- I can also imagine that people with less performant configs:
- a NAS server not powered by a gen 7 i7 (the above should help);
- an older router (the above should help);
- a FAT filesystem (replace with NTFS for Windows, ext4 for Linux);
- a fragmented filesystem (weekly scheduled fs optimisation);
- a network full of traffic (the above may help);
- a wireless-only connection (the above may help)
… will start optimisation from 15-20 MBytes/s and work their way up to something more acceptable, but e.g. to less than half of what I could get.
- Research and hacking is always fun.
Doubtful. Transfer rate shouldn't have much impact on access time as long as you have a margin over the stream's bandwidth requirement.
- no
- no
- not for a network setup
- the network connection doesn't act differently based on source-drive fragmentation
- not necessarily
- If you have limited bandwidth, wireless or not, anything helps but is a different topic to the question I raised
- okay
A better way to test this would be to use dd instead of time cp. You can control the size of the test, so the original file size will not matter. For example:
dd if=kin.2160p.remux-no1.mkv of=/dev/null count=10M status=progress
will copy the first 5 GiB of the file (dd's default block size is 512 bytes, so count=10M means 10M × 512-byte records = 5 GiB). Example of the output:
$ dd if=Dolittle\(2020\).atmos.uhd.mkv of=/dev/null count=10M status=progress
5347820032 bytes (5.3 GB, 5.0 GiB) copied, 64 s, 83.6 MB/s
10485760+0 records in
10485760+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 64.2474 s, 83.6 MB/s
If you want to do 10G instead of 5 then use count=20M
dd is a very handy test tool.
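One further refinement worth considering for repeated runs: drop the client-side page cache first and use an explicit block size, so data cached by a previous run does not inflate the result (needs sudo; the filename is just the example from above):

$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
$ dd if=kin.2160p.remux-no1.mkv of=/dev/null bs=1M count=5K status=progress

bs=1M count=5K still reads 5 GiB, just in 1 MiB chunks rather than dd's default 512-byte records.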
I don't think I have to convince you; we can agree that I think the above is helpful and you think it is not.
But please think again about these examples:
- Upon opening, it is not only about access time: sometimes Kodi runs around an image reading various parts just to offer you which feature to play (main, alternative, extras etc.). This did speed up significantly for me; it feels much snappier even if you don't agree.
- NAS with a weak CPU: very much yes! Imagine it has to prepare 8x fewer operations and reads, and can possibly (given a proper network interface and driver) offload more with 64k windows instead of 8k. Also not having to perform (albeit improbable) locking operations and not having to communicate them over the network. The same CPU and network saving applies to noatime as well. Because of the network traffic avoided, it is also better for weaker routers connecting server and client.
Overall, a better ratio of net content data to gross traffic (in terms of bytes, number of packets and number of NFS ops) helps across the whole chain IMHO.
YMMV of course.
Okay, good alternative, but why is that better? Maybe you mean that the single command already outputs the speed, so there is no need to perform the division?
- I was thinking cp works more similarly to how Kodi opens and reads files; as dd works block by block, how could cp be worse?
- Why manually select the initial 5G?
- running longer minimises the effects of TCP initially scaling up and of memory caching.
- running longer will include the effect of more of the naturally coinciding network and server events that will also happen during your BAU playback.
- there is a natural end of file, so you don't need to think about how much to test. Still, if you are worried about the manual calculation you can
TESTFILE="kin.2160p.remux-no1.mkv"; TESTSIZE=$(stat --printf="%s" "$TESTFILE")
then put time cp into $() and finally use bc to get the bytes/sec completely automated.
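Put together, that could look like this (a sketch assuming bash, GNU stat, awk and bc are available; the filename is just the example from above):

TESTFILE="kin.2160p.remux-no1.mkv"
TESTSIZE=$(stat --printf="%s" "$TESTFILE")
# time -p prints "real <seconds>" on stderr; capture it and keep only the number
REAL=$( { time -p cp "$TESTFILE" /dev/null; } 2>&1 | awk '/^real/ {print $2}' )
echo "$TESTSIZE / $REAL" | bc   # bytes per second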