What scaling and deinterlacing algorithms are available on the Vero 4K(+)?


As subject line says, which algorithms are available on the 4K for upscaling and deinterlacing?

To break that down a little:

  • When hardware acceleration is being used, what do we know about how the S905X chip handles those tasks?

  • When using software decoding, there is a named list of upscaling algorithms, but the deinterlacing options are rather vaguely labelled. What algorithms do they correspond to?

  • Is there any fine-tuning one can do with deinterlacing? For example, can it be locked into always using film-mode (“weaving”) or always using video-mode, rather than guessing as it goes along?

  • Realistically, is the S905X actually powerful enough to decode MPEG2 DVD rips in software? (I note that’s not the default).

  • Betraying my ignorance here :slight_smile: but is hardware acceleration an “all or nothing” thing, or can one adopt a hybrid approach where you use hardware acceleration to decode the video, but do deinterlacing and/or upscaling in software? (And, if you can, could that allow more CPU-intensive deinterlacing algorithms like Yadif 2x? Or, at least, could it allow Lanczos3 upscaling of a video which is otherwise handled in hardware?)


Why would you care how a £100 device does this if you have a professional video processor at home, a Lumagen worth £5000 that's been designed specifically to do these things?

I mean, I don't know about you, but once I get my hands on a Lumagen, all I'll care about is an option to get the untouched video feed into the Lumagen and let it do its magic from there.


To be perfectly honest, I’m not sure it’s any of your business why I care. :stuck_out_tongue_closed_eyes:


Just curious, I guess, since if anyone shouldn't have to care, it's someone who owns a Lumagen ;)


Hmm… fair enough. :stuck_out_tongue:

It’s partly just intellectual curiosity.

Also partly that the RadiancePro isn't actually particularly good at deinterlacing video material. It's fine for film, but the lack of diagonal filtering in video mode is quite visible sometimes. My old Oppo 105D is definitely better in that respect. Lumagen may eventually add diagonal filtering, but it's possible that it'll never happen, and even if it does, it likely won't be soon.

And finally, there are several situations now (and there will always be one or two) where you can't get the Vero 4K to output video without any scaling (see "When playing a standard definition video, or a 720p/24 video, is there any way to completely avoid upscaling?"). It would be useful to have some more info about what it does, to figure out the best way of dealing with those situations.


I wouldn’t mind some answers to my original questions, here… :slight_smile:


If it’s possible to find out, the only way would be to look through the kernel code, which you are welcome to do.


I think we covered state of playing at native format.

Advanced Motion Adaptive Edge Enhancing deinterlacing. This can be enabled / disabled on the fly.

They are provided by ffmpeg.
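For what it's worth, since the software options come from ffmpeg's filter set, you can experiment with the same filters directly on a desktop. A hedged sketch (the `testsrc2` source and `tinterlace` stage are just stand-ins to fake interlaced input; with a real DVD rip you'd feed the file in directly):

```shell
# Illustrative only: exercise the ffmpeg filters the software path draws on.
# testsrc2 stands in for a real MPEG2 source; tinterlace fakes interlaced
# input so yadif has fields to work with.
ffmpeg -v error -f lavfi -i "testsrc2=size=720x576:rate=25:duration=1" \
  -vf "tinterlace=mode=interleave_top,yadif=mode=send_field,scale=1920:1080:flags=lanczos" \
  -f null -
```

`yadif=mode=send_field` is the 2x variant (one output frame per field), and `flags=lanczos` selects ffmpeg's Lanczos kernel, which defaults to a window of 3 (i.e. Lanczos3).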

sysfs offers some customisation, but there’s not much in terms of gui configurability yet.

Yes. You can set hardware acceleration for MPEG2 to apply only from HD resolutions up.
You could then do DVD-resolution MPEG2 with Yadif 2x in software.

You can choose which content to accelerate under Settings -> Playback.



Do we know anything about the S905X’s scaling algorithm?

There’s one particular option which would be very useful, and that’s the ability to lock the deinterlacing into film mode - i.e. always “weave” rather than doing motion-adaptive.
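As far as I know Kodi's GUI doesn't expose that, but just to illustrate what a locked film mode amounts to: in ffmpeg terms, pure weaving is re-pairing fields with no motion adaptation at all. A sketch using a synthetic source:

```shell
# Split each frame into its two fields, then weave them straight back
# together: a no-guesswork "film mode". Lossless for true 2:2 material,
# but it will show combing on genuine video-mode content.
ffmpeg -v error -f lavfi -i "testsrc2=size=720x576:rate=25:duration=1" \
  -vf "separatefields,weave=first_field=top" \
  -f null -
```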

I'm working from memory, here, but I don't recall Yadif(2x) being one of the deinterlacing options I was offered… not by name, anyway. If the Vero 4K+ can reliably decode a native MPEG2 DVD rip and apply Yadif(2x) deinterlacing, that would be very useful. (Even more so when/if untouched 480p and 576p become available).

That’s not quite what I was asking, I don’t think… I understand that you can choose which types of video to do in software and which to do in hardware, but what I was asking is if it’s possible to have part of the process done in hardware and part in software for the same video.

For example, could you have the decoding of the MPEG2 done in hardware, but the scaling and deinterlacing done in software? Or deinterlace in hardware but upscale in software?

This might not actually be useful. :slight_smile: If the Vero 4K+ can decode remuxed DVD video, and apply Yadif(2x)-quality deinterlacing, and do Lanczos3 scaling, and handle all of that in software, and still have some CPU cycles to spare, then that’s all you need. But I was speculating that it might not be able to handle all of that in software, in which case offloading part of the process to hardware but not all might be useful.
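If anyone wants to put rough numbers on that, ffmpeg's `-benchmark` flag offers a crude way to probe it on the device itself. A sketch (the synthetic `testsrc2`/`tinterlace` input is a placeholder; you'd substitute the `-i` arguments with an actual DVD remux):

```shell
# Decode + Yadif 2x + Lanczos3 upscale to a null sink; ffmpeg prints a
# "bench:" line with utime/rtime at the end. If rtime comes in comfortably
# under the clip's duration, the CPU side has headroom.
ffmpeg -benchmark -f lavfi -i "testsrc2=size=720x576:rate=25:duration=2" \
  -vf "tinterlace=mode=interleave_top,yadif=mode=send_field,scale=1920:1080:flags=lanczos" \
  -f null -
```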

(I’m not sure but I think using MMAL rather than OMXPlayer on Raspberry Pi works a bit like this).


From what I know, I can’t think of an easy way to pass a hardware decoded stream back to software then back to the HDMI device. Doubtless it could be done but you would have to dig deep into the code.


You need to dig into the kernel for that.

Not trivially. Bandwidth tax would be high too because of a lack of zero-copy with this kind of implementation.

Lanczos3 is not possible.

Both MMAL and OMX use RIL components on Pi to accelerate decode. The only hybrid decoding solution on Pi is for HEVC which uses the QPUs to accelerate parts of the decode pathway. It uses opaque pointers to avoid memcpy taxes.