Buffering issues

It really isn’t. You have an existing install that works. If you make a system mount and then add a path sub, and change nothing else, it just works as you’re already using it, just with a bit more performance.

That means as many mounts and substitutions as there are clients, so it’s not quite “just works as before”. But gotta admit, UHD movies do buffer and lag.

Is the system still going to add the new movies with the original format, so substitutions keep working going forward?

I cover this in the guide. You don’t change your source, and in turn the DB does not change how the file paths are stored. All you are doing with this kind of path sub is redirecting where it opens the file, nothing more. Literally everything else is the same. You’re essentially telling Kodi: when it goes to read from path x, read from path y instead. If your sources.xml is pointing to path x, then the database will store path x. If you remove the path sub, then that machine goes back to reading from path x.
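For anyone who wants a concrete picture, a path substitution of this kind lives in advancedsettings.xml. A minimal sketch; the paths here are purely illustrative placeholders, substitute your own source path and local mount point:

```xml
<!-- advancedsettings.xml — hypothetical example paths -->
<advancedsettings>
  <pathsubstitution>
    <substitute>
      <!-- "path x": the source path as stored in the database -->
      <from>nfs://192.168.1.10/export/movies/</from>
      <!-- "path y": the local system mount Kodi actually reads from -->
      <to>/mnt/movies/</to>
    </substitute>
  </pathsubstitution>
</advancedsettings>
```

Removing this file (or the substitute entry) reverts the client to reading from the original path, since the database was never changed.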

:+1: Thanks, will try!

Got to the point of the autofs mount working; tested the performance: cp to /dev/null is 65-85 MBytes/s, which should be enough for everyone. For the record, my home network at the time of testing:

  1. Client OSMC Vero 4K+ (AMLogic S905D Quad Core 1.6GHz 64-bit ARMv8 aarch64 SoC, 2GB DDR3 Memory, Realtek RTL8211F Gigabit Ethernet)
  2. Linux software bridge fitlet mini-PC (AMD A10 Micro-6700T APU, 8GB RAM, 2xIntel I211 Gigabit Network Connection)
  3. NetGear R7000 router (Broadcom BCM4709A0 @1 GHz, 2xARM Cortex A9, 256 MiB RAM)
  4. NUC 7i7 Windows 10 “server” Hanewin NFSd, single WD120EFAX 5400 RPM SATA disk in a Thunderbolt3 external enclosure
    Without any mount-option or similar tuning :sweat_smile:

Hi, I ended up adding a single generic substitution:


It has indeed sped up the initial analysis of the 44GB, 620-file BDMV from tens of seconds until the “play main title” selection appears down to a couple of seconds, thanks! Now gotta go back and implement the other substitutions on the other clients. :slight_smile:
(I am not allowed more than 3 replies, so adding the updates in an edit)

I am not a person to leave optimization opportunities on the table, so I have read up on NFS options:
Worth documenting here for fellow users of the Hanewin NFS Server: the server-side default of max. 8k rsize and wsize (read and write windows) is a limiting factor. Raising it to 32768 did help performance:

$ ll Terminator.Dark.Fate.2019.2160p.mkv
-rw-r--r-- 1 osmc osmc 32073967827 Jan 17 22:54 Terminator.Dark.Fate.2019.2160p.mkv
$ time cp Terminator.Dark.Fate.2019.2160p.mkv /dev/null
real 5m46.194s

Meaning 32,073,967,827 bytes / 346.194 s = 92,647,382.18 bytes/s → 92 MByte/s.
I think it is close enough to the theoretical bandwidth, especially considering that I have two hops in my network between server and client (although the router hop may perform switching in an ASIC).
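A quick sanity check on that division can be done as a one-liner (awk should be available on OSMC’s Debian base; the numbers are just this run’s byte count and elapsed seconds):

```shell
# Bytes copied divided by seconds, scaled to MByte/s (%d truncates like the figure above)
awk 'BEGIN { printf "%d MByte/s\n", 32073967827 / 346.194 / 1000000 }'
# prints: 92 MByte/s
```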

I did try to increase it to 1048576 based on Consistently interrupted playback - #11 by THEM , but that only seems to work for Linux.
Hanewin NFS Server maxes out at 65536, maybe because it does not support NFSv4.
NFSv2 is limited to 8k and NFSv3 to 64k according to 5. Optimizing NFS Performance (maybe old docs, from the Linux 2.4/2.5 times).
Oracle says NFSv3 should already be unlimited: File Transfer Size Negotiation - Managing Network File Systems in Oracle® Solaris 11.2
IBM also says they do max 512KB on both NFSv3 and NFSv4: NFS read (rsize) and write (wsize) limit
(another update by edit for not being able to post a reply)

Increasing the Hanewin NFS Server rsize/wsize to 64k did help a bit further: 100,089,776.5 bytes/sec! Note: this file is on another external 5400 RPM SATA drive that is connected by USB3 instead of TB3. Bottom line: USB3 external drives can still almost saturate Gigabit Ethernet!

$ time cp wonderwoman-uhd-hyperx.mkv /dev/null
real 9m39.832s
user 0m0.940s
sys 1m43.010s
$ ll wonderwoman-uhd-hyperx.mkv
-rw-r--r-- 1 osmc osmc 58035255291 Mar 3 2018 wonderwoman-uhd-hyperx.mkv

(another update by edit for not being able to post a reply)

So this is probably the best I can optimize out of my current setup:

  1. Switched back to a file on the 5400RPM SATA WD drive in the faster OWC Thunderbay 4 Thunderbolt3 external enclosure.
  2. Stopped other IO intensive services on the server.
  3. Added mount options to the autofs file for NFS (-fstype=nfs,noatime,nolock,local_lock=all,async)
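For anyone replicating the autofs side, it amounts to two small files. A sketch with hypothetical map names, mount root, and server export (only the mount options are from my actual setup):

```
# /etc/auto.master.d/movies.autofs  (hypothetical mount root and map name)
/mnt/nfs  /etc/auto.movies  --timeout=60

# /etc/auto.movies  (key, options, server:export — server and export are placeholders)
movies  -fstype=nfs,noatime,nolock,local_lock=all,async  192.168.1.20:/movies
```

After `systemctl restart autofs`, accessing /mnt/nfs/movies triggers the mount on demand.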

$ ll kin.2160p.remux-no1.mkv
-rw-r--r-- 1 osmc osmc 55011086067 Jan 12 02:06 kin.2160p.remux-no1.mkv
-rw-r--r-- 1 osmc osmc 17308 Jan 12 02:06 kin.2160p.remux-no1.nfo
$ time cp kin.2160p.remux-no1.mkv /dev/null
real 7m49.472s

Results: a net 117,176,500.55 bytes/sec file read speed, very close to the theoretical 118,660,598 bytes/sec of TCP/IP on Gigabit Ethernet at 1500-byte MTU (https://www.gigabit-wireless.com/gigabit-wireless/actual-maximum-throughput-gigabit-ethernet/). Using jumbo frames for a theoretical 123 MBytes/s is not currently an option, as stated by @sam_nazarko in Vero4K+ - Jumbo frames - #6 by nabsltd.
…and in between are the overheads for NFSv3, NTFS, CPU, TB3 etc. Hope this is a good reference point for the network capabilities of the Vero 4k+ , thanks for getting me started, @darwindesign! :slight_smile:
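For the curious, that 118,660,598 bytes/sec figure falls out of frame-overhead arithmetic: at a 1500-byte MTU, each 1538 bytes on the wire (1500 + 14 Ethernet header + 4 FCS + 8 preamble + 12 inter-frame gap) carry 1460 bytes of TCP payload (1500 minus 20 IP and 20 TCP header bytes). A quick check:

```shell
# Gigabit Ethernet moves 125,000,000 bytes/s raw; scale by TCP payload efficiency
awk 'BEGIN { printf "%.0f bytes/s\n", 125000000 * 1460 / 1538 }'
# prints: 118660598 bytes/s
```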



Did you find any actual difference in use though by going through this last bit of optimizations? The UHD blu-ray spec maxes out at 16MB/s so going from 65-85MB/s to something faster seems like it would likely have little impact for playback purposes.

You are right in that it is no longer a buffering issue level of optimization.

  1. But I would assume it makes seeking around and the initial opening of files less inconvenient (more snappy).
  2. I can also imagine people with less performant config:
  • a NAS server not powered by a gen 7 i7; (above should help)
  • having an older router; (above should help)
  • FAT filesystem; (replace with NTFS for Windows, ext4 for Linux)
  • fragmented filesystem; (weekly scheduled fs optimisation)
  • network full of traffic (above may help)
  • wireless-only connection (above may help)
    … will start optimising from 15-20 MBytes/s and work their way up to something more acceptable, e.g. less than half of what I could get.
  3. Research and hacking is always fun. :wink:

  1. Doubtful. Transfer rate shouldn’t have much impact on access time as long as you have a margin over the stream’s bandwidth requirement.

  • no
  • no
  • not for a network setup
  • network connection doesn’t act different based on source drive fragmentation
  • not necessarily
  • If you have limited bandwidth, wireless or not, anything helps but is a different topic to the question I raised
  • okay

A better way to test this would be to use dd instead of time cp. You can control the size of the test, so the original file size will not matter. For example:

dd if=kin.2160p.remux-no1.mkv of=/dev/null count=10M status=progress

will copy the first 5G of the file (dd’s count is in 512-byte blocks by default, so 10M blocks = 5 GiB). Example of the output:

$ dd if=Dolittle\(2020\).atmos.uhd.mkv of=/dev/null count=10M status=progress
5347820032 bytes (5.3 GB, 5.0 GiB) copied, 64 s, 83.6 MB/s
10485760+0 records in
10485760+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 64.2474 s, 83.6 MB/s

If you want to do 10G instead of 5 then use count=20M

dd is a very handy test tool.

I don’t think I have to convince you; we can agree that I find the above helpful and you do not.
But please think again about these examples:

  1. Opening does not only mean access time: sometimes Kodi runs around a disc image reading various parts just to offer you which feature to play (main, alternative, extras, etc.). This did speed up significantly for me; it feels much snappier even if you don’t agree.
  2. NAS with a weak CPU: very much yes! Imagine it has to prepare 8x fewer operations and reads, and can possibly (given a proper network interface and driver) offload more with 64k windows instead of 8k. Also not having to perform (although improbable) locking operations and not having to communicate them over the net. The same CPU and network saving applies to noatime as well. Because of the network traffic avoided, it is also better for weaker routers connecting server and client.
    Overall, a better net-content-to-gross-traffic ratio (in terms of bytes, number of packets, and NFS ops) helps across the whole chain IMHO.
    YMMV of course.

Okay, good alternative, but why is that better? Maybe you mean the single command already outputs the speed, so there is no need to perform the division?

  1. I was thinking cp works more similarly to how Kodi opens and reads files; as dd works block by block, how could cp be worse?
  2. Why manually select the initial 5G?
  • running longer minimises the effects of TCP initially scaling up and of memory caching.
  • running longer will include the effect of more of the naturally coinciding network and server events that will also happen during your BAU playback.
  • there is a natural end of file, so you don’t need to think about how much to test on. Still, if you are worried about the manual calculation, you can:

TESTFILE="kin.2160p.remux-no1.mkv";TESTSIZE=$(stat --printf="%s" "$TESTFILE")
then put the time cp into $() and finally use bc to get the bytes/sec completely automated.

cp is not worse. It’s just not as easy. Doing the cp uses 2 commands (cp and time) and you then have to calculate the speed. dd is one command and calculates the speed for you. Also dd gives an update every second so you can see possible glitches in the network that you may not notice when just doing a cp.

Because that’s what I picked for my example. You can choose any size, or leave the size out to test the complete file.

I don’t agree with this at all. You can copy a full movie file in far less time than it takes to watch it. When actually watching, only bits of the file are moved in bursts instead of the complete file.

If you find doing all that easier than just running one simple command, go for it. I will continue to use (and recommend) dd as my go to test.

To tell the truth, I ran into major pain on OSMC as it does not include (by default):

  1. bc, so you have to make do with bash arithmetic expansion $(( )), meaning no fractional seconds
  2. an external time binary, only the bash built-in, and capturing the output of that is a major pain, on the level of:
exec 3>&1 4>&2
var=$(TIMEFORMAT='%0R'; { time cp "$FILENAME" /dev/null 1>&3 2>&4; } 2>&1)
exec 3>&- 4>&-

So you do dd with manually selected size, I do a manual division, win-win. :sweat_smile:

As I already stated, you don’t need to do the count option. Without that dd will copy the entire file.

But I can see this isn’t worth discussing anymore as you’ve decided that your complicated cp that only ends up giving you a summary is better than a simple dd that not only gives you a summary but gives you progress. To each his own I guess.


I just had a similar problem: a 50GB UHD Blu-ray ISO rip of an 85-minute movie. That is 80 Mbps on average, which should work fine on my Vero 4K (which has 100 Mbps Ethernet) with a large enough RAM buffer. However, it stutters like crazy. Using the player debug screen I can see the buffer stays at 0 B all the time and the percentage goes wildly up and down until it hits below 5%, and there is the stutter. I retried with a BDMV and it has the same issue. Copying the m2ts file and putting it in a separate folder (not named BDMV) fixed the problem. Now Kodi is buffering and the movie plays without stuttering.

It seems this is a known Kodi issue: see how to let KODI 18.6 buffer bluray ISO file and Buffering Problem with Kodi and blu-ray folders

Perhaps with the Vero 4K+ and enough network bandwidth it will work fine, but the Vero 4K really needs the RAM cache (buffer), as complex UHD scenes can require more than the 100 Mbps network bandwidth it has available.

I have an original Vero 4K, and have never had issues (except read below). I can watch UHD Gemini Man 60fps with no stutters. And that’s a direct rip using MakeMKV.

But, I did have problems for a few weeks that were driving me crazy. It turned out to be a failing drive. Replaced that drive and have not had a problem since.

The ISO for the Gemini Man is 85GB for a 117 minute movie. That is 99 Mbps on average. There is no way you can watch this smoothly without RAM buffer using a 100 Mbps network card. You’re talking about converting to MKV first which is using heavy compression (which is very noticeable on UHD) to get around the bandwidth problem.

Kodi (and OSMC) has no problem using the RAM buffer for MKV files. So you never actually had this problem, my mistake.

Um, no. There is no compression involved. MakeMKV simply extracts the main movie. It does not do ANY compression. It will remove unwanted audio and subs.

Ok, if all you are doing is changing the container from m2ts to mkv, you do get the added benefit of the RAM buffer, because that issue only applies to ISO files and BDMV folders. 99 Mbps on average for 117 minutes over a 100 Mbps interface is quite a feat. Well done, Vero 4K.