Hi there,
I know a lot has already been written about how helpful static tone mapping support would be for HDR, particularly for projector users. I’m curious to know if there’s any hope at all for an even more advanced feature than the one already discussed: DYNAMIC tone mapping. On newer TVs/projectors and external processors like the Lumagen, or the MadVR software on Windows, something in the signal path actually shifts the tone mapping from scene to scene so that the display device is always maximizing the available contrast at any given moment.
Is there any chance of seeing this in a future version (or device) update?
I’d pay good money for it
Are you referring to dynamic metadata?
This is what all displays should do. It’s difficult/impossible for a device like the Vero to do it, as we don’t know the characteristics of the display. It’s not just about nits.
He means tone-mapping scene-by-scene even when the metadata is static: adjusting the luminance values of the pixels in the output in a way that depends on what the other pixels in the scene are doing.
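To make that distinction concrete, here is a minimal Python sketch (my own illustration, not what any shipping product does — the Reinhard-style roll-off curve and the percentile-based peak estimate are arbitrary choices): the static map uses only the fixed mastering metadata, while the dynamic one derives its curve from the statistics of the frame itself.

```python
import numpy as np

def static_tonemap(lum, display_peak=1000.0, content_peak=4000.0):
    """Same curve for every frame, driven only by static metadata
    (content_peak plays the role of MaxCLL). Luminance in nits."""
    scaled = lum / content_peak
    return display_peak * scaled / (1.0 + scaled)

def dynamic_tonemap(lum, display_peak=1000.0):
    """Derive the curve from this frame's own statistics: a dark
    scene gets mapped higher up the display's range than a static
    curve sized for the brightest scene in the film would allow."""
    frame_peak = np.percentile(lum, 99.9)  # robust per-frame peak (nits)
    scaled = lum / max(frame_peak, 1e-6)
    return display_peak * scaled / (1.0 + scaled)
```

On a dim scene (say, 100 nit pixels in a film mastered to 4000 nits), the static map leaves almost all of the display's range unused, while the dynamic map stretches the scene into it; that is the "maximizing available contrast at any given moment" effect described above.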
What would you need to know about the display? And whatever it is, can’t the necessary values be configured manually by each user?
Well I’m guessing a display manufacturer would take into account for starters:
- max display luminance
- max average luminance (which may be time-dependent and need to take account of regulations on maximum power dissipation)
- black level
- the available colour gamut
- the type of backlight - technology and zoning
- bad characteristics of the display that the manufacturer is trying to compensate for
- and the things none of us mortals know about and manufacturers will never divulge.
Then when you’ve fiddled with the signal, you’ll find the display is still doing its own fiddling and probably working against you. Simple example: my Panasonic goes dimmer when subtitles appear on the screen. I think I’ve managed to turn that off, but what other smarts is it still doing?
The display should do the work.
The only time we should send something dynamic is if the user is playing something with dynamic metadata, i.e. HDR10+/DV.
Philosophically I agree with you, but practically that often doesn’t happen. With JVC alone there are several generations of HDR-capable projectors without proper static or dynamic tone mapping.
Then I don’t think we can improve that on our side if the display is ignoring what it should be processing.
I’m not sure we’re on the same page here. If it’s not practical to add on the Vero, I get that. But it is possible to add this functionality externally, and it’s available from multiple hardware and software sources (as I mentioned, it’s done well in MadVR). Conceptually I don’t see how it’s that different from providing HDR-to-SDR mapping, or traditional downsampling/upsampling, all of which are things displays typically have internal options for but are nevertheless provided on Vero/OSMC because you guys do it better.
I think you’re overcomplicating things a bit there, @grahamh. Lumagen’s implementation of DTM, for example, works very well, and the only information about the display it has access to is the peak luminance.
I’m not suggesting adding this would be easy, or even possible; but gathering sufficient information about the display would likely be the least of your problems…
It’s very different. We would need to analyse every frame (or better, a sequence of frames), work out the ‘best’ tonemap, apply that and send the buffered frames through to HDMI. MadVR will have a lot more CPU horsepower to do that than we have. Somehow I doubt the Vero 4K could do it, but I’m no expert.
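The pipeline described there — buffer a few frames, derive a curve from the window, apply it as frames leave the buffer — can be sketched roughly as follows. This is a hedged illustration only: the buffer depth, smoothing factor and percentile are made-up values, and a real implementation would do this on decoded video in hardware, not NumPy.

```python
from collections import deque
import numpy as np

def dtm_pipeline(frames, display_peak=1000.0, depth=8, alpha=0.2):
    """Look-ahead dynamic tone mapping sketch: buffer `depth` frames,
    track an exponentially smoothed window peak (so the curve doesn't
    pump frame to frame), and tonemap each frame as it leaves the
    buffer. Yields tonemapped frames, luminance in nits."""
    buf = deque()
    smoothed_peak = display_peak
    for frame in frames:
        buf.append(frame)
        if len(buf) < depth:
            continue  # still filling the look-ahead buffer
        # peak luminance over the buffered window, smoothed over time
        window_peak = max(np.percentile(f, 99.9) for f in buf)
        smoothed_peak += alpha * (window_peak - smoothed_peak)
        out = buf.popleft()
        scaled = out / max(smoothed_peak, 1e-6)
        yield display_peak * scaled / (1.0 + scaled)
```

Even this toy version shows where the cost goes: every pixel of every frame is read once for analysis and once for mapping, plus the latency and memory of the frame buffer — which is the CPU/bandwidth concern raised above.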
Adjustable static tonemaps on the other hand (Panasonic style) just apply a fixed curve, which we can do. I don’t think Panasonic have knobs to adjust any dynamic mapping parameters they apply after/before applying that static curve. I may be wrong - can @Chillbo confirm?
Panasonic has three options:
- HDR-SDR conversion: clipping, dark tone mapping and bright tone mapping adjustment
- Maximum nit output: adjusting the brightness fall-off of the applied tone mapping curve (with a minimum of 350 nits)
- HDR optimizer: frame-by-frame analysis adjusting the tone mapping curve for each frame

All three options can be used together when connected to an SDR TV, and the latter two can be used with an HDR TV.
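The "maximum nit output" control above amounts to a static curve with an adjustable peak: content below a knee passes through untouched and everything above rolls off toward the configured maximum. A hedged sketch — the knee position and the exponential roll-off shape here are my guesses for illustration; Panasonic’s actual curve is not public:

```python
import numpy as np

def knee_tonemap(lum, max_nits=350.0, knee=0.75):
    """Static tone map with an adjustable peak: linear up to
    knee*max_nits, then a smooth exponential roll-off so highlights
    compress toward max_nits instead of clipping. Input in nits."""
    k = knee * max_nits
    return np.where(
        lum <= k,
        lum,  # pass-through below the knee
        k + (max_nits - k) * (1.0 - np.exp(-(lum - k) / (max_nits - k))),
    )
```

The point of the earlier distinction is that this curve is fixed for the whole title (cheap, no frame analysis), whereas the "HDR optimizer" option would recompute something like it per frame.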
The third option is a dynamic tone mapping adjustment IMHO, where it’s obviously unclear what kind of algo is applied. The optimizer is quite similar to what JVC has now implemented in their projectors with Frame Adapt HDR. From my understanding, I therefore can’t confirm, @grahamh.
OK thanks. So dynamic tone mapping is a black box with no knobs on it to hint at what they are doing.
From what I can see, yes. But maybe there’s more information out there… Haven’t looked so far. Maybe a user knows more?
Could you limit it to just the analysis stage by converting HDR10 to HDR10+ on the fly? Scan each frame and then set the dynamic equivalent of MaxCLL on a frame-by-frame basis, then let an HDR10+ TV figure out how to tone-map the result.
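The analysis stage of that idea — per-frame equivalents of MaxCLL/MaxFALL — is cheap in principle: a reduction over each decoded frame, perhaps with scene-cut detection so the downstream curve only jumps on a cut. A naive sketch, with the caveat that the field names and the cut heuristic are my simplifications; real HDR10+ dynamic metadata (SMPTE ST 2094-40) carries per-window luminance distributions and curve anchors, not just two numbers:

```python
import numpy as np

def hdr10plus_like_stats(frames, cut_threshold=0.5):
    """Naive per-frame 'dynamic MaxCLL/MaxFALL': brightest pixel and
    mean luminance of each frame (nits), flagging a scene cut when the
    average jumps by more than cut_threshold relative to the previous
    frame, so a tone map could change abruptly only at cuts."""
    stats, prev_avg = [], None
    for lum in frames:
        peak, avg = float(lum.max()), float(lum.mean())
        is_cut = (prev_avg is not None
                  and abs(avg - prev_avg) > cut_threshold * max(prev_avg, 1.0))
        stats.append({"max_nits": peak, "avg_nits": avg, "scene_cut": is_cut})
        prev_avg = avg
    return stats
```

As the replies below point out, the hard part isn’t computing these statistics — it’s that a single fixed algorithm for turning them into a good curve is exactly what graded dynamic metadata exists to avoid.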
What would the dynamic equivalent be?
How would we know it’s correct?
Surely the whole point of HDR10+ is that the video maker will have written the metadata so as to render in the best way on displays with a smaller gamut than the mastering monitor.
If it were as simple as generating dynamic metadata from the frame contents via a single algo, dynamic metadata would be unnecessary.
Normally, yes. But some Blu-ray players (e.g. from Oppo and Sony) can convert HDR10 to Dolby Vision on the fly. If the way the TV tone-maps DV is preferable to the way it handles HDR10, this can significantly upgrade image quality.
I didn’t phrase that well. I know it’s a much bigger technical challenge; I only meant that from the POV of “this is solely the realm of display device functionality”, that’s not actually the case. I was hoping this could be possible in a future Vero device upgrade. I’d happily pay a premium for a “Vero Pro” with high-quality upscaling or dynamic HDR tone mapping, and judging by AV forums I’m not alone.