As the subject line says: which algorithms are available on the 4K for upscaling and deinterlacing?
To break that down a little:
When hardware acceleration is being used, what do we know about how the S905X chip handles those tasks?
When using software decoding, there is a named list of upscaling algorithms, but the deinterlacing options are rather vaguely labelled. What algorithms do they correspond to?
Is there any fine-tuning one can do with deinterlacing? For example, can it be locked into always using film-mode (“weaving”) or always using video-mode, rather than guessing as it goes along?
Realistically, is the S905X actually powerful enough to decode MPEG2 DVD rips in software? (I note that’s not the default; a quick way to test is sketched below.)
Betraying my ignorance here, but is hardware acceleration an “all or nothing” thing, or can one adopt a hybrid approach where you use hardware acceleration to decode the video but do deinterlacing and/or upscaling in software? (And, if so, could that allow more CPU-intensive deinterlacing algorithms like Yadif 2x? Or, at least, could it allow Lanczos3 upscaling of a video which is otherwise handled in hardware?)
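For the MPEG2 question, I suppose one could answer it empirically with a null-decode benchmark run on the box itself. A rough sketch, assuming an ffmpeg binary is available on the device and using a made-up filename:

```python
import subprocess

# Decode as fast as possible, discard the output, and report CPU time.
# -benchmark makes ffmpeg print utime/rtime at the end; if the decode
# runs comfortably faster than realtime, software MPEG2 is feasible.
subprocess.run([
    "ffmpeg", "-benchmark",
    "-i", "dvd_rip.mpg",     # hypothetical DVD remux
    "-f", "null", "-",       # decode only, write nowhere
], check=True)
```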
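And to make the hybrid idea concrete, here is the kind of split I mean, sketched with ffmpeg’s VAAPI path on a desktop. The hwdownload step is the explicit copy from GPU memory back to system RAM so software filters can run; I have no idea whether the Amlogic pipeline exposes anything equivalent, and the filenames are invented:

```python
import subprocess

cmd = [
    "ffmpeg",
    # Decode on the GPU, keeping frames in GPU memory...
    "-hwaccel", "vaapi", "-hwaccel_output_format", "vaapi",
    "-i", "dvd_rip.mpg",
    # ...then copy frames back to system RAM ("hwdownload") so software
    # filters can run: Yadif 2x (one frame per field) and Lanczos
    # scaling. That copy-back is exactly the cost discussed later in
    # this thread.
    "-vf", "hwdownload,format=nv12,"
           "yadif=mode=send_field,"
           "scale=1920:1080:flags=lanczos",
    "-c:v", "libx264", "out.mkv",
]
subprocess.run(cmd, check=True)
```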
Why would you care how a £100 device does this if you have a professional video processor at home, a Lumagen worth £5,000, that’s been designed specifically to do these things?
I mean, I don’t know about you, but once I get my hands on a Lumagen, all I’ll care about is an option to get the untouched video feed into it and let the Lumagen do its magic from there.
Also, partly, the RadiancePro isn’t actually particularly good at deinterlacing video material. It’s fine for film, but the lack of diagonal filtering in video mode is sometimes quite visible. My old Oppo 105D is definitely better in that respect. Lumagen may eventually add diagonal filtering, but it’s possible that it never will, and even if it does, it likely won’t be soon.
Do we know anything about the S905X’s scaling algorithm?
There’s one particular option which would be very useful, and that’s the ability to lock the deinterlacing into film mode, i.e. always “weave” rather than doing motion-adaptive deinterlacing.
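For what it’s worth, that lock is expressible explicitly on a PC. A sketch using ffmpeg’s fieldmatch filter, assuming a 3:2-pulldown NTSC source and made-up filenames:

```python
import subprocess

# Inverse telecine: fieldmatch weaves matching fields back into the
# original progressive film frames, and decimate drops the duplicate
# frame that 3:2 pulldown introduced. Nothing in this chain does
# per-pixel motion-adaptive interpolation: it's weave or nothing.
subprocess.run([
    "ffmpeg", "-i", "ntsc_telecined.vob",   # hypothetical source
    "-vf", "fieldmatch,decimate",
    "-c:v", "libx264", "film_24p.mkv",
], check=True)
```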
I’m working from memory here, but I don’t recall Yadif(2x) being one of the deinterlacing options I was offered… not by name, anyway. If the Vero 4K+ can reliably decode a native MPEG2 DVD rip and apply Yadif(2x) deinterlacing, that would be very useful. (Even more so when/if untouched 480p and 576p become available.)
That’s not quite what I was asking, I don’t think… I understand that you can choose which types of video to decode in software and which in hardware, but what I was asking is whether it’s possible to have part of the process done in hardware and part in software for the same video.
For example, could you have the MPEG2 decoding done in hardware, but the scaling and deinterlacing done in software? Or deinterlace in hardware but upscale in software?
This might not actually be useful. If the Vero 4K+ can decode remuxed DVD video, and apply Yadif(2x)-quality deinterlacing, and do Lanczos3 scaling, and handle all of that in software, and still have some CPU cycles to spare, then that’s all you need. But I was speculating that it might not be able to handle all of that in software, in which case offloading part of the process to hardware but not all might be useful.
(I’m not sure, but I think using MMAL rather than OMXPlayer on the Raspberry Pi works a bit like this.)
From what I know, I can’t think of an easy way to pass a hardware-decoded stream back to software and then back to the HDMI device. Doubtless it could be done, but you would have to dig deep into the code.
Not trivially. The bandwidth tax would be high, too, because of the lack of zero-copy in this kind of implementation.
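To put rough numbers on that tax (back-of-envelope only, assuming NV12 frames; these are not S905X measurements):

```python
# Crude copy-cost estimate for a decode-in-HW, filter-in-SW pipeline,
# assuming NV12 frames (12 bits per pixel). Illustrative numbers only.
def mb_per_s(width, height, fps):
    return width * height * 1.5 * fps / 1e6   # NV12: Y plane + interleaved UV

download = mb_per_s(720, 576, 25)      # woven 576i frames out of the decoder
upload   = mb_per_s(1920, 1080, 50)    # Yadif-2x + Lanczos3 1080p50 going back
print(f"{download:.0f} MB/s down, {upload:.0f} MB/s up")  # ~16 down, ~156 up
```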
Lanczos3 is not possible.
Both MMAL and OMX use RIL components on the Pi to accelerate decoding. The only hybrid decoding solution on the Pi is for HEVC, which uses the QPUs to accelerate parts of the decode pathway. It uses opaque pointers to avoid memcpy taxes.
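A toy sketch of the opaque-pointer idea, with all names invented: each stage hands on a handle to a frame that stays put in device memory, so no pixel data is ever copied between stages.

```python
# Toy illustration only; none of these names correspond to real Pi APIs.
class DeviceFrame:
    """Stand-in for a decoded frame living in GPU/VPU memory."""
    def __init__(self, handle: int):
        self.handle = handle        # opaque: CPU code never reads pixels

def decode(bitstream: bytes) -> DeviceFrame:
    return DeviceFrame(handle=0x1000)       # decoder writes in place

def qpu_stage(frame: DeviceFrame) -> DeviceFrame:
    return DeviceFrame(frame.handle)        # passes the handle, not pixels

frame = qpu_stage(decode(b"..."))           # zero bytes memcpy'd end to end
print(hex(frame.handle))
```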