I’m not familiar enough with HDR10Plus to know what it’s called, but I assume there must be an equivalent to MaxCLL that applies to a specific scene and changes from scene to scene.
You would calculate its value by reading the luminance of each pixel in the frame, but you wouldn’t then have to change any pixel values before output; you’d simply update the dynamic metadata and pass the pixels through untouched.
This might well not work - I don’t know whether HDR10Plus allows the metadata to change frame by frame, and it might still require more computational power than is available. But it would be more efficient than player-driven DTM.
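As a rough illustration of the calculation being proposed (this is a hypothetical sketch, not anything from an actual player - the function names are mine, and it assumes 10-bit PQ-coded code values with the brightest channel already picked out per pixel):

```python
# Sketch: deriving a per-frame / per-scene peak-luminance value
# (a MaxCLL-style number) from PQ-coded pixel data, without
# touching the pixels themselves.

# SMPTE ST 2084 (PQ) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(e):
    """ST 2084 EOTF: normalized PQ code value [0..1] -> cd/m^2."""
    p = e ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def frame_peak_nits(frame_codes, bit_depth=10):
    """Peak luminance of one frame; frame_codes is an iterable of
    per-pixel maxRGB code values (the brightest channel per pixel)."""
    scale = (1 << bit_depth) - 1
    return max(pq_to_nits(c / scale) for c in frame_codes)

def scene_peak_nits(frames):
    """Scene-level peak: the max over all frames in the scene,
    analogous to how MaxCLL is the max over a whole title."""
    return max(frame_peak_nits(f) for f in frames)
```

The player would then write that scene value into the dynamic metadata and output the original pixels unchanged, which is where the hoped-for efficiency over full per-pixel tone mapping comes from.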
I don’t think it would work well. And if this approach of generating dynamic metadata on the fly has merit, a big TV company will almost certainly do it better.
I don’t think we want to invent metadata; we should stay as true to the source as possible, perhaps with an option to override MaxFALL/MaxCLL.
Actually, is that true? It’s true for Dolby Vision; but I recall a statement from LG saying that the reason they won’t support HDR10Plus on their TVs is that (in their opinion) it does exactly what their own HDR10 dynamic tone-mapping function already does.
Do we believe that, though? Or is it because the technology is spearheaded by Samsung, and LG want to capture a different market?
DV truly does take ownership of the metadata and processes it in what should at least be a uniform way on every set. HDR10 makes no such guarantee and devolves that processing to the display manufacturer.
We haven’t really seen any benefits from HDR10+ yet. Maybe our displays aren’t good enough, maybe they don’t do much with the data, or maybe it’s just unnecessary.
From the POV of what I was asking for in this feature request, I don’t think HDR10 to HDR10+ conversion would have much utility, as I’d imagine that most sets that support HDR10+ have decent tone mapping of their own. If you’re not going to offer DTM for every HDR-capable set, the effort it would take to implement this feature would probably only benefit a fraction of the users that uniform dynamic tone mapping would benefit. Not to say it shouldn’t be done, but HDR10+ conversion wouldn’t replace the utility of DTM in my case.
No argument there. I suggested it simply because it might require a little less computational power to implement.