[SOLVED] Yet another video 4K buffering issue

Good afternoon.

I have recently bought vero 4K+ to be able to play 4K films on my TV.
I have connected the OSMC device to my home NAS via /etc/fstab NFS mounts with the following options:
ro,noatime,noauto,x-systemd.automount,async,nfsvers=3,rsize=8192,wsize=8192,nolock,nofail,local_lock=all,soft,retrans=2,tcp 0 0

I have problems playing BluRay remux or FULL backups of BR in 4K.

More compressed MKVs work fine; I can even fast-forward without waiting for caching. I can also play jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv from the same NAS without issue if I let it cache for a few seconds.

Vero 4k logs:

mediainfo of one of the offending films:


I don’t know why I only get ~400 Mbit/s on iperf. I connected directly to the server and got the same result, so the Ethernet link or the server is probably a little saturated, but 400 Mbit/s should be enough to play the file, right?

If it’s consistent, it’s enough, but that seems low.

Can you test in both directions and put the results here, as well as confirm that you’re not using power line?
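Something along these lines, assuming iperf3 on both ends and that the server is reachable as nas.local (substitute your own hostname or IP):

```shell
# On the server: start iperf3 in listening mode
iperf3 -s

# On the Vero 4K+: Vero -> server direction
iperf3 -c nas.local

# Reverse direction (server -> Vero, the one that matters for playback)
iperf3 -c nas.local -R
```

The `-R` run is the important one here, since playback streams data from the server to the Vero.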



Sorry, forgot to paste it in the OP: https://paste.osmc.tv/iqasawixel.avrasm The results are consistent every time.

What do you mean by power line?

I’d also recommend that you remove those (very small) block sizes.

Consistent, but they’re almost 2x faster from the Vero4K to the server than the other way. When you run the reverse test (-R), can you show us the iperf3 output on the server end?

Powerline is a physical adapter that uses the power cables in the house as a network medium. It tends to be unreliable.

New iperf results:

I don’t use powerline; it’s a Cat 6 cable.

Your log shows that the Vero 4K+ is negotiating 1000 Mbit/s full duplex, which is what we’d expect to see. It might be a problem with the server, or the cable(s), or any routers/switches between the devices. I’d certainly recommend that you change/substitute your cables. It’s a simple step that often produces results. Cat 5e should be good enough if you’re short of Cat 6.

As already mentioned, remove the explicit NFS block sizes: with them gone, the client and server will negotiate much larger defaults. If you’re operating right on the limit of your network speed, a larger block size will help with NFS performance.
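For example, the options from the OP with the rsize/wsize pair simply dropped (the server export path and mount point below are placeholders, not taken from the thread):

```
nas:/export/media /mnt/media nfs ro,noatime,noauto,x-systemd.automount,async,nfsvers=3,nolock,nofail,local_lock=all,soft,retrans=2,tcp 0 0
```

With no rsize/wsize specified, NFSv3 clients typically negotiate block sizes well above 8192 bytes, which cuts per-request overhead on large sequential reads.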

From the tests I have been doing (swapping cables, trying iperf against my laptop, etc.), there is a problem with the network card/driver on the server which makes it peak at ~500 Mbit/s.

I have ordered a new one, which should give the usual ~930 Mbit/s. I’ll report back whether it fixes the buffering problem or not.

When I test with the new network card, should I test with the advancedsettings.xml that I have or the default one?

I’d suggest the default one, as the defaults are already optimised.

Have you tried messing around with the Ethernet driver settings? I’ve had problems on my Windows box in the past that were resolved by switching off all the Ethernet card hardware acceleration options.
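On a Linux box, switching those options off would look something like the following (the interface name eth0 and the exact set of offload flags are assumptions; adapt them to your system):

```shell
# Show the current offload settings for the interface
ethtool -k eth0

# Turn off the common hardware-offload features:
# TCP segmentation, generic segmentation/receive offload,
# and rx/tx checksum offload
ethtool -K eth0 tso off gso off gro off rx off tx off
```

If throughput improves with offloads disabled, that points at a NIC/driver bug rather than cabling.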

Thanks, but I’m not running Windows; the NAS runs OpenBSD.

FTR: https://marc.info/?l=openbsd-misc&m=157180012621486&w=2

Well, sure, but your Ethernet hardware could be experiencing the same hardware-based issues that mine was.

Thanks for the help, guys. After trying the new network card in the server, I can play UHD BD rips without any problems.

Sorry for the noise!