High throughput but slow buffering

I’m experiencing stutter when playing back most content. The on-screen debug overlay shows that the buffer fills more slowly than playback consumes it.

The Vero 4K is connected via Wi-Fi to an Asus RT-AC68U, with the NAS’s SMB share mounted in fstab.
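
The fstab entry is a standard cifs line along these lines (share name, mount point and options here are illustrative, not my exact entry):

    //nas/media  /mnt/nas  cifs  credentials=/home/osmc/.smbcred,iocharset=utf8,vers=3.0,_netdev,noauto,x-systemd.automount  0  0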

iperf3 between NAS and Vero4K shows the following:

[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 5] 0.00-1.00 sec 26.0 MBytes 218 Mbits/sec 5 757 KBytes
[ 5] 1.00-2.00 sec 25.0 MBytes 210 Mbits/sec 210 542 KBytes
[ 5] 2.00-3.00 sec 31.2 MBytes 262 Mbits/sec 0 574 KBytes
[ 5] 3.00-4.00 sec 32.5 MBytes 273 Mbits/sec 0 594 KBytes
[ 5] 4.00-5.00 sec 32.5 MBytes 273 Mbits/sec 0 604 KBytes
[ 5] 5.00-6.00 sec 32.5 MBytes 273 Mbits/sec 0 607 KBytes
[ 5] 6.00-7.00 sec 33.8 MBytes 283 Mbits/sec 0 607 KBytes
[ 5] 7.00-8.00 sec 30.0 MBytes 252 Mbits/sec 0 608 KBytes
[ 5] 8.00-9.00 sec 32.5 MBytes 273 Mbits/sec 0 611 KBytes
[ 5] 9.00-10.00 sec 32.5 MBytes 273 Mbits/sec 0 624 KBytes
[ 5] 10.00-10.04 sec 1.25 MBytes 242 Mbits/sec 0 624 KBytes
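
(For reference, figures like these come from a plain TCP test along the following lines — the host name is a placeholder, and -R reverses the direction to measure NAS → Vero:)

    iperf3 -s                    # on the NAS
    iperf3 -c nas.local -t 10    # on the Vero 4K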

so the necessary throughput should be available to the device.
My advancedsettings.xml is set as follows:

    <cache>
            <buffermode>1</buffermode>
            <memorysize>536870912</memorysize>
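            <!-- 536870912 bytes = 512 MB; note: Kodi reportedly needs roughly 3x this value in free RAM -->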
            <readfactor>8</readfactor> 
    </cache>

For readfactor I have tried, without success: unset, 4, 6, 8 and 20.

Any suggestions?

Hi, with iperf3 you’ve already checked the network stacks on OSMC, on your NAS and everything in between.
To be sure the root cause isn’t somewhere else in the NAS back end, I suggest you also dump one of the files you want to play to the null device while logged in to the NAS via SSH:

dd if=<fqp of your video file> of=/dev/null status=progress
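# run on the NAS itself: reads the file from disk and discards it, taking the network out of the picture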

So, what is the average throughput after this short test?

Have you also tried with the buffer settings removed entirely?
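
For example, an advancedsettings.xml without any cache element at all, so Kodi falls back to its built-in defaults — a sketch:

    <advancedsettings>
        <!-- no <cache> element: Kodi uses its default cache behaviour -->
    </advancedsettings>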

The cache level can fall and rise to deal with peaks and troughs in the bandwidth requirements of the video, but if the video bandwidth permanently equals or exceeds the capability of the network, the cache will remain close to zero.
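
A rough way to think about it (my reading of how readfactor behaves — treat this as an approximation, not a spec):

    cache growth ≈ min(network throughput, readfactor × average bitrate) − current video bitrate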

I’ve reformatted your iperf3 figures:

[ ID] Interval           Transfer     Bandwidth      Retr  Cwnd
[ 5]  0.00-1.00   sec    26.0 MBytes  218 Mbits/sec    5   757 KBytes
[ 5]  1.00-2.00   sec    25.0 MBytes  210 Mbits/sec  210   542 KBytes
[ 5]  2.00-3.00   sec    31.2 MBytes  262 Mbits/sec    0   574 KBytes
[ 5]  3.00-4.00   sec    32.5 MBytes  273 Mbits/sec    0   594 KBytes
[ 5]  4.00-5.00   sec    32.5 MBytes  273 Mbits/sec    0   604 KBytes
[ 5]  5.00-6.00   sec    32.5 MBytes  273 Mbits/sec    0   607 KBytes
[ 5]  6.00-7.00   sec    33.8 MBytes  283 Mbits/sec    0   607 KBytes
[ 5]  7.00-8.00   sec    30.0 MBytes  252 Mbits/sec    0   608 KBytes
[ 5]  8.00-9.00   sec    32.5 MBytes  273 Mbits/sec    0   611 KBytes
[ 5]  9.00-10.00  sec    32.5 MBytes  273 Mbits/sec    0   624 KBytes
[ 5] 10.00-10.04  sec    1.25 MBytes  242 Mbits/sec    0   624 KBytes

As you can see, the second line shows 210 retransmits (in one second). There seems to be a network issue.


Hey,

So I’ve performed the suggested tests. As for the suggested dd command on the back end: I’m running an older coreutils version, so status=progress was not an option for me. Instead, Google suggested pv to record the transfer rate.

Server → null | performed on server/NAS
Using

pv [filepath] | md5sum

Result

40 MiB 0:00:08 [4.77MiB/s]

Server → Win10Ent | performed on Win10Ent
Windows Explorer drag-and-drop transfer: about 90 MB/s

Server → OSMC | performed on Vero4K

dd if=[filepath via fstab mountpoint] of=/dev/null status=progress

Result

184254464 bytes (184 MB, 176 MiB) copied, 51.4637 s, 3.6 MB/s

What I find odd is that both the pv and dd figures are so much worse than the Windows transfer…

Any suggestions?
Your help and suggestions are kindly appreciated 🙂

Thank you for the suggestion. Playback has improved, but the issue remains: the buffer now depletes more slowly than before. La La Land 4K will now play for about 8 seconds before buffering again; previously it was around 5 seconds.

I tried plugging in a LAN cable and got smooth playback throughout, but the 100 Mbit interface is too slow for movies such as John Wick.

What was/is the reason you use an Asus Wi-Fi dongle instead of the internal WLAN interface of the Vero 4K?
I remember lots of folks here stating that the internal 100 Mbit LAN interface of the Vero 4K is sufficient for most 4K material.
Nevertheless, I also use a Gigabit LAN adapter to get higher throughput when pushing videos to the HDD connected to my Vero 4K. With that I get up to 33 MB/s.

@dillthedog has pointed to the most crucial piece of information, which was there all along: given that retransmit count, it looks like your WLAN isn’t stable enough for your purposes. Perhaps you can monitor the 2.4 and 5 GHz channels in your environment and switch to one not used by you or your neighbours.

The AC68U is just the access point/router in my apartment. I’m connecting via the internal WLAN interface of the Vero 4K.

I’ll try this suggestion, though I did check the channels when I initially set up my router; some neighbours may have changed their settings since then.

Playing around with the channels made no noticeable difference to the buffering performance.

I have access to a newer Asus router/AP and will try it this weekend outside the city, both to eliminate interference from neighbouring routers and to see whether my own router performs better with the Vero 4K in a more isolated environment.

Testing is done and here are my findings.
I tested with the following routers:

  • Asus RT-AC68U
  • Asus RT-AC88U

Wi-Fi networks from neighbours were limited to 4 on 2.4 GHz and 0 on 5 GHz.

The test file used was La La Land:
File size: 54.5 GiB
Duration: 2 h 7 min
Overall bit rate: 61.0 Mbit/s
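
For reference, that bit rate works out to a sustained read requirement of roughly:

    61.0 Mbit/s ÷ 8 ≈ 7.6 MB/s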

Server mounted through SMB in fstab.

Test 1
Vero 4K connected via RJ45 cable through the on-board ethernet interface.
As expected, the iperf3 test showed an average of 93.1 Mbit/s.
While playing the film, the incoming data stream could barely keep the buffer at 100%, but it managed okay, with only a few skipped frames throughout the movie: 1 dropped and 29 skipped in the first 10 minutes.

Test 2
Vero 4K connected to AC68U via 5GHz through the internal wifi antenna.
The iperf3 test showed an average of 235 Mbit/s.
Playback was barely possible, with buffering every 20-30 seconds. Judging by the bit rates of the video and audio streams, the actual throughput seems to have been 40-50 Mbit/s: as soon as the film jumped above 50 Mbit/s, the buffer would fall. Pausing and letting the buffer fill up only postponed the same outcome.

Test 3
Vero 4K connected to AC68U via 5GHz through the internal wifi antenna.
Changed the TxPower to 178 mW, 150 mW, 130 mW and 100 mW.
No change in playback; the results matched Test 2, except that at 100 mW the iperf3 test started to show lower throughput.

Test 4.0
Vero 4K connected to AC88U via 5GHz through the internal wifi antenna.
TxPower at 100% (no idea how many mW, as the firmware only shows percent).
iperf3 showed an average of 73 Mbit/s.
I tested the film anyway, but playback took a couple of minutes to begin. I didn’t even feel like taking measurements, since it was very clear that the buffer could barely fill up…

Test 4.1
Changed the TxPower to 50%.
iperf3 showed an average of 231 Mbit/s.
Playback displayed exactly the same behaviour as Test 2.

Extra info
In all tests the Vero 4K box was turned through three different orientations, which made no difference to iperf3 or playback. The Vero 4K was placed at the recommended distance from the router, 2.5 metres.

Conclusion
The problem does not lie with the server, so that leaves the router and the antenna of the Vero 4K.
The odds of the routers being the problem seem very low. I could run the same tests on my phone and tablet, but the fault seems to lie not with the Wi-Fi connectivity but with the way Kodi ingests the bit streams.

That’s a very comprehensive – and useful – set of tests you’ve run.

Could you explain why you reached that conclusion? I’ve read through the thread and can’t find any details about your NAS or understand why you’ve excluded the NAS as being a part of the problem. What hardware/software is it?

I think it’s important to clarify one point: the iperf3 figures are a good guide to the raw network performance between two points, excluding any protocol overheads. It’s quite possible that the problem somehow lies with the SMB protocol or some strange interaction between the WiFi driver and SMB.
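
If you want to probe the SMB side directly, one option (a sketch only — share name, mount point and options are examples; check which SMB dialects your NAS offers) is to remount the share with explicit parameters and repeat the dd test:

    sudo mount -t cifs //nas/media /mnt/test -o guest,vers=3.0,rsize=1048576
    dd if=/mnt/test/movie.mkv of=/dev/null bs=1M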

In post #5 you performed what was meant as a Server → null test using pv, but in fact piped the data to md5sum, which could have introduced significant CPU overhead, depending on the NAS’s processor. You could still have used dd with the output sent to /dev/null, simply omitting the status=progress option.
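
Even without status=progress, GNU dd prints a final summary line with the average transfer rate when it finishes, so on the NAS something like this would do:

    dd if=<fqp of your video file> of=/dev/null bs=1M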

Again from post #5: Server → OSMC was only 3.6 MB/s, so roughly 30 Mbit/s. That’s using SMB, whereas post #1 showed iperf3 – which doesn’t use SMB – giving around 260-270 Mbit/s (and 235 Mbit/s in Test 2 of post #10).

Server → Win10Ent: is your Win10Ent machine connected via cable or Wi-Fi? And does it follow the same path all the way to the NAS as the Vero 4K?

Finally, it would help greatly if you could supply full debug logs. From the Kodi menu: System → System Settings → Logging → Enable debug logging. Then play a problematic movie and, from a separate SSH session, run grab-logs -A and post the URL it returns.

I can see your point that it’s not very clear why I’m excluding the NAS. The reason is that Kodi on my main Windows PC and on my MacBook shows no buffering issues over either cable or Wi-Fi. I also tried a “bad” cable to more or less force a 100 Mbit/s handshake between the Windows PC and the switch. There the buffering is as good as when I plug in the Vero 4K directly (switch in between): throughput is consistently around 60 Mbit/s (iperf3: 93 Mbit/s), which makes sense since the SMB protocol probably adds some overhead.

That makes sense. The processor is an i5-3570K at 4.2 GHz, though I don’t know how good its hashing performance is. htop on the NAS (server) shows no significant spikes in CPU utilisation.

It’s connected via cable and follows the same path. The only difference is that instead of mounting in fstab as on Linux, I’ve just “mounted” the server through Kodi.

I’ll post again as soon as I’ve had time to do the tests.

I sincerely appreciate your comments, time and help 🙂