Support for TV-led Dolby Vision on all devices

@sam_nazarko Wouldn’t your existing IPT colour space conversion work already be handling the mapping that P5 files can have?

I’ve only been looking at the display management portion of the metadata until now, but I’m now running into problems that seem to be due to ignoring the mapping.

If you are handling the mapping, is that work somewhere I can look at to see if I can use it?

We are not outputting as DV, only HDR10 (or, if the TV isn’t capable, SDR).

We have had a couple of reports that P5 looks better on Vero V than dovi.ko on a licensed device, but ‘looks better’ is subjective and I have no such device to compare with.

I understand that. I was asking about the first part of that process, the one involving the polynomial / MMR coefficients, which does the reshaping.

Understood.

Unfortunately this is subject to NDA. It is implemented in the form of an OPTEE Trusted Application (not GPL or a derived work), which also requires access to the AMLogic TDK (subject to license).

That is unfortunate.

Acknowledging that you may have restrictions on what you can say, are you able to give any hints about how it works / would it be possible to do without a NDA / license?

For example, my thoughts are that these devices don’t have enough CPU power to do the reshaping in software. The operations are also non-linear, which rules out doing the reshaping with the colour-space conversion hardware. The VPP does appear to have a 289-element video OETF mapping LUT per channel, though, and I wonder whether that could be used to approximate the reshaping, at least when it is purely polynomial. Another thought was that the reshaping could be done entirely in GPU code: people play games on these devices, so it should have the power for these fairly basic calculations.
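For what it’s worth, here is a rough C sketch of the distinction, with invented coefficient layouts (not taken from any spec): a per-channel polynomial, whose output depends only on the same channel and so could be baked into a 1-D LUT, versus an MMR term, which mixes all three channels and so cannot.

/* Per-channel polynomial reshaping: out = c[0] + c[1]*x + c[2]*x^2 + ...
 * The output depends only on x, so it could in principle be tabulated
 * into a per-channel 1-D LUT (e.g. 289 entries). */
static double poly_reshape(double x, const double *c, int n)
{
	double out = 0.0, xn = 1.0;
	for (int i = 0; i < n; i++) {
		out += c[i] * xn;
		xn *= x;
	}
	return out;
}

/* Illustrative first-order MMR term: the output mixes all three input
 * channels, so no per-channel 1-D LUT can reproduce it. */
static double mmr_reshape(double y, double u, double v, const double *m)
{
	return m[0] + m[1] * y + m[2] * u + m[3] * v
	     + m[4] * y * u + m[5] * y * v + m[6] * u * v
	     + m[7] * y * u * v;
}

If a channel only uses the polynomial part, approximating it with that VPP LUT seems plausible; any MMR part would need something else, such as the GPU.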

Can you provide any comments on the above thoughts that may give some insight into how the reshaping is currently being done with the trusted application / could be done more generally?

What you are asking would break existing agreements. Unfortunately I can’t provide any information regarding this functionality.

Some of the shaders and filters that run on the GPU typically use full GL. We are limited to GLES (hence why we have fewer binary add-ons, such as screensavers).

Now works by just playing a file.

Only works properly for p8.1, p7 MEL, or p7 FEL played as MEL.

Given OSMC can do the reshaping needed for profile 5, you guys should be able to get that running properly as well.


Help wanted

To embed the metadata, we need to write to a layer above the video that is at the native resolution of the output. This is where the need to disable GUI scaling comes from, as the metadata is currently being written into the GUI layer. Unfortunately, the native 4K GUI is causing performance problems, and using the GUI layer can also result in corruption of the metadata when the GUI overlay updates the top pixels.

This could be avoided if the second OSD layer (which all the AMLogic hardware seems to support) could be enabled over the top of the existing GUI layer. While native resolution would still be needed, the layer would only need to be 4 x 3840 pixels, so it should not run into the problems of a native 4K GUI, which (I think I saw somewhere) are related to memory bandwidth.
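Rough numbers to back that up, assuming 32-bit RGBA at 60 fps: a full 3840 x 2160 layer is about 33 MB per frame, i.e. roughly 2 GB/s of scanout bandwidth, while a 3840 x 4 strip is about 61 KB per frame, under 4 MB/s.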

I have spent a fair bit of time trying to enable the second OSD layer, but have got nowhere. If any of the devs who know the kernel better could provide some help with doing this, and/or some input on whether enabling a new 4 x 3840 pixel layer over the top of the existing GUI layer is even possible, it would be greatly appreciated.

Surely the OSD is going to be scaled to the video resolution? So your 4 lines will be blown up to 2160 (unless the SoC blows up attempting it).

Is it not possible to disable the scaling and have it only cover a portion of the screen at native resolution?

So I’ve found a thread about enabling the second OSD for use as a cursor plane on an N2(+) running Chrome OS. In that thread, a patch is provided for some version of 5.15 that implements hardware cursor support with OSD2.

If I’m understanding it correctly, it seems enabling a buffer that covers only a portion of the screen is possible, which is exactly what I’d like to do:

/*
 * The format of these registers is (x2 << 16 | x1),
 * where x2 is exclusive.
 * e.g. +30x1920 would be (1919 << 16) | 30
 */
/* For now, OSD Cursor is always on top of the primary plane */
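If I’m reading that comment right, the scope register packs the end coordinate into the high 16 bits and the start into the low 16, i.e. something like this (my reading, not code from the patch):

#define OSD_SCOPE(x1, x2)  (((x2) << 16) | (x1))
/* e.g. OSD_SCOPE(30, 1919) == (1919 << 16) | 30 */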

However, whatever kernel that patch targets seems fairly different to the AMLogic one.

I have no clue how that patch works myself, but does anyone here understand it? If so, does it indeed provide the functionality I was asking about, and could it be adapted to the AMLogic kernel?

I’m not sure if I can be of any help here. I haven’t looked much into the OSD code yet. But I agree, I think you should use OSD. There is already some code in the 4.9 kernel (I can’t tell about 5.x) that implements a hardware cursor using OSD. Have a look at osd_hw.c and search for “cursor”. When we started working on the 4.9 kernel some years ago, we had to turn it off because it was always visible.

I’ve also had a look at your solution in your repo. You’re using a canvas object to write into, right? IIRC (but I may be wrong) OSD is based on frame buffers. I can’t tell if it’s possible to attach a canvas object to an OSD layer.

HTH in one way or another …

I am more than happy to start with the 4.9 kernel (that is where I first started, so I have a working build for it). I’ve had a go at enabling the hw cursor there, but couldn’t get anything to be visible. Do you happen to remember what you did to turn it off, so I can reverse that?

Yes, I’m using a canvas. I’m not actually sure what the difference between a canvas object and a frame buffer is. I’m only using the canvas because I found it was over the top of the video layer and I could write to it.

As long as a frame buffer can be written on a frame-by-frame basis somehow, I imagine that should also work.

Not off the top of my head. We had to turn it off in the 4.9.123(?) kernel; I can’t remember the exact version anymore. Later kernels did not show that behaviour, so we did not need that change anymore.

Canvas objects are used for the Y, U and V components of a frame, one canvas for each of the components. So a frame always consists of 3 canvas objects. The framebuffer is used to hold RGB data with an additional alpha channel.
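As a rough mental model, with types invented purely for illustration (these are not the real AMLogic driver structures):

#include <stdint.h>

struct canvas {                  /* one canvas per video component */
	uint8_t *base;
	int      width, height, stride;
};

struct yuv_frame {               /* a decoded frame: three canvas objects */
	struct canvas y, u, v;
};

struct osd_framebuffer {         /* OSD layer: interleaved RGB plus alpha */
	uint32_t *pixels;        /* one 32-bit RGBA word per pixel */
	int       width, height, stride;
};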

I don’t think that’s a problem.

Thanks for all the info.

In that case, the canvas object that is currently used is effectively already a framebuffer: only one component is used, and it is in an RGBA format.

Ok, I was wrong. It was not the hardware cursor that caused the problem I mentioned above. It was the OSD logo layer that caused a white dot to be displayed in the top-left corner. We “fixed” this by removing the call to set_osd_logo_freescaler(). In later kernels that “fix” was not needed anymore.