cp is not worse. It’s just not as easy. Doing the cp test takes two commands (cp and time), and you then have to calculate the speed yourself. dd is one command and calculates the speed for you. dd also gives an update every second, so you can spot network glitches you might not notice with a plain cp.
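For reference, this is the kind of one-liner being discussed (the share path is a placeholder; leave count off to read the whole file):

dd if=/mnt/server/movie.iso of=/dev/null bs=1M count=1024 status=progress

status=progress is what prints the running byte count and speed roughly once per second; the final summary line gives you the overall average.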
Because that’s what I picked for my example. You can choose any size, or leave the size out to test the complete file.
I don’t agree with this at all. You can copy a full movie file in far less time than it takes to watch it. When actually watching, only bits of the file are moved in bursts instead of the complete file.
If you find doing all that easier than just running one simple command, go for it. I will continue to use (and recommend) dd as my go-to test.
To tell the truth, I run into major pain on OSMC because it does not include (by default):
- bc, so you have to make do with bash arithmetic expansion $(( )), meaning no fractional seconds
- an external time binary, only the bash built-in, and capturing the output of that is a major pain, on the level of:
exec 3>&1 4>&2    # park the real stdout/stderr on spare descriptors
var=$(TIMEFORMAT='%0R'; { time cp "$FILENAME" /dev/null 1>&3 2>&4; } 2>&1)
exec 3>&- 4>&-    # close the spare descriptors again
So you do dd with manually selected size, I do a manual division, win-win.
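For what it’s worth, the division itself can stay in the shell too. A minimal sketch, assuming GNU stat and the whole-second $var captured above:

SIZE=$(stat -c %s "$FILENAME")            # file size in bytes
echo "$(( SIZE / var / 1000000 )) MB/s"   # integer math only, since bc is missing

(Integer division, so the copy needs to take at least one second for this to mean anything.)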
As I already stated, you don’t need the count option. Without it, dd will copy the entire file.
But I can see this isn’t worth discussing anymore, as you’ve decided that your complicated cp, which only gives you a summary at the end, is better than a simple dd that gives you both a summary and live progress. To each his own, I guess.
I just had a similar problem: a 50 GB UHD Blu-ray ISO rip of an 85 minute movie. That is 80 Mbps on average, which should work fine on my Vero 4K (which has 100 Mbps ethernet) with a large enough RAM buffer. However, it stutters like crazy. Using the player debug screen I can see the buffer stays at 0 B the whole time, and the percentage swings wildly up and down until it drops below 5%, and there is the stutter. I retried with a BDMV and it has the same issue. Copying the m2ts file into a separate folder (not named BDMV) fixed the problem. Now Kodi is buffering and the movie plays without stuttering.
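A rough sanity check on that average, in decimal units:

echo "$(( 50 * 8 * 1000 / (85 * 60) )) Mbps"   # 50 GB over 85 minutes: prints 78, call it 80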
It seems this is a known Kodi issue: see how to let KODI 18.6 buffer bluray ISO file and Buffering Problem with Kodi and blu-ray folders
Perhaps with a Vero 4K+ and enough network bandwidth it will work fine, but the Vero 4K really needs the RAM cache (buffer), as complex UHD scenes can require more than the 100 Mbps of network bandwidth it has available.
I have an original Vero 4K, and have never had issues (except as described below). I can watch the 60 fps UHD Gemini Man with no stutters. And that’s a direct rip using MakeMKV.
But I did have problems for a few weeks that were driving me crazy. It turned out to be a failing drive. I replaced that drive and have not had a problem since.
The ISO for Gemini Man is 85 GB for a 117 minute movie. That is 99 Mbps on average. There is no way you can watch this smoothly without a RAM buffer using a 100 Mbps network card. You’re talking about converting to MKV first, which uses heavy compression (very noticeable on UHD) to get around the bandwidth problem.
Kodi (and OSMC) has no problem using the RAM buffer for MKV files. So you never actually had this problem, my mistake.
Um, no. There is no compression involved. MakeMKV simply extracts the main movie. It does not do ANY compression. It will remove unwanted audio and subs.
Ok, if all you are doing is changing the container from m2ts to mkv, you do get the added benefit of the RAM buffer, because that issue only applies to ISO files and BDMV folders. 99 Mbps on average for 117 minutes over a 100 Mbps interface is quite a feat. Well done, Vero 4K.
A “100 Mbit” ethernet connection will give around 93-94 Mbps of real throughput, and you can subtract another 5-10 Mbps for storage and transport protocol overheads.
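The rough arithmetic behind that figure, assuming standard 1500-byte Ethernet frames carrying plain TCP:

# each 1500-byte payload costs 38 bytes of framing on the wire (preamble,
# header, FCS, inter-frame gap) and carries 40 bytes of IP+TCP headers
echo "$(( 100 * 1460 / 1538 )) Mbps"   # prints 94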
What people forget is that the buffer can only be filled from spare bandwidth. If a video is continuously running near the maximum network speed, your cache will remain close to empty.
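As a rough illustration (all three numbers here are assumptions, not measurements):

NET=93      # usable network throughput in Mbps
VIDEO=90    # average video bitrate in Mbps
CACHE=100   # RAM cache size in MB
echo "$(( CACHE * 8 / (NET - VIDEO) )) seconds to fill the cache"   # ≈ 266 s

With only 3 Mbps to spare, a 100 MB cache takes well over four minutes to fill, and any network hiccup drains it far faster than that.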