Exactly. 99.9% of 4K titles are HDR and it was quite clear that this is precisely what we were speaking about. DD’s abject failure to understand this and then to implicitly accuse another forum member of “passive aggressive BS” was completely uncalled for. I am extremely disappointed to see such an unprofessional attitude on display in this forum from someone who is supposed to be a moderator.
Thanks for your feedback. I’m not sure yet how to translate the depth information into “screen coordinates” for the left and right eye that generate the 3D depth effect. If you think the subtitles are closer to the viewer than necessary, I can change that. My initial version resulted in subtitles sitting right on your nose - practically impossible to read.
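To illustrate what I mean by “screen coordinates”: roughly, the subtitle has to be shifted horizontally in opposite directions in the two eye views, and the open question is how many pixels one depth unit should correspond to. A rough sketch of the principle (not the actual code; the names and the scale factor are placeholders):

```cpp
#include <cmath>

// Sketch only: a positive depth should push the subtitle towards the viewer.
// Shifting the left-eye image to the right and the right-eye image to the left
// by the same amount creates that "in front of the screen" effect.
struct EyeOffsets
{
  int left;  // horizontal shift in pixels for the left-eye image (+ = right)
  int right; // horizontal shift in pixels for the right-eye image (+ = right)
};

EyeOffsets DepthToScreenOffsets(int depth, double pixelsPerDepthUnit)
{
  // Split the total disparity evenly between the two eyes.
  const int half = static_cast<int>(std::lround(depth * pixelsPerDepthUnit / 2.0));
  return EyeOffsets{half, -half};
}
```

The whole uncertainty sits in pixelsPerDepthUnit - that is exactly the scale factor we still need to get right.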
Regarding Kodi’s central setting: I see your point. In my opinion this central setting could be used to correct the depth, e.g. if it’s too close or too far away just correct it there. But if you think that wouldn’t be necessary I can change that, too. I don’t have that many 3D titles here to check that out.
Sorry, no. I’ll have a look at it shortly.
Ah! I hadn’t realised that; I had assumed there was a set formula that was part of the spec. That complicates things, and I can see that you might want to give the user the ability to tweak the overall depth if there’s some uncertainty in the formula to begin with. I’m still not sure that you’d want to use the same offset value that controls the depth of SRT subtitles, though. If you’re going back and forth between films whose PGS subtitles carry depth information and films that use downloaded SRT subtitles, it would be nice not to have to keep changing the setting.
Incidentally, on the question of subtitles in 3D m2ts files, do I understand correctly that you still have access to the depth offset information in the video stream, i.e. you know how far forward or backward from the default plane each title is supposed to be, but that you don’t have the default plane that this is supposed to be an offset from? If so, then in theory it would be nice if you could specify a default plane on a title-by-title basis… but in practice I think that’s too much added complexity for such an obscure format.
I’ll check out a few more films and see what the depth is like with the current formula.
Spec? Which spec? I haven’t found one. But yes, it’s all about the correct formula. I know it’s not comfortable to always have to tweak the Kodi depth setting. If we come up with a formula that almost always returns the correct conversion, then I think we can safely ignore the Kodi depth setting for the depth calculation.
Regarding m2ts: yes, the depth info for all planes is kept in the video stream, so it should be present in the m2ts file as long as it wasn’t filtered out. What I’m thinking of doing is to look at all planes at the start of the video stream/resume point, find the first plane with a depth value different from 0 (which is the default), and then just use this plane for the rest of the movie.
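As a rough sketch of that selection (assuming the depth values have already been parsed out of the video stream into a per-plane, per-frame array; the names are made up):

```cpp
#include <algorithm>
#include <vector>

// depth[plane][frame]: depth values from the video stream, 0 = default/no offset.
// Scan the first few frames after the start or resume point and take the first
// plane that has a non-zero value; use that plane for the whole movie.
int SelectSubtitlePlane(const std::vector<std::vector<int>>& depth, size_t framesToScan)
{
  for (size_t plane = 0; plane < depth.size(); ++plane)
  {
    const size_t frames = std::min(framesToScan, depth[plane].size());
    for (size_t frame = 0; frame < frames; ++frame)
    {
      if (depth[plane][frame] != 0)
        return static_cast<int>(plane);
    }
  }
  return 0; // nothing found - fall back to the first/default plane
}
```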
I watched a few other things. In general I’d say the subs are definitely too close at the moment. The depth is about right for subs near the screen plane, but the error grows larger and larger the closer the subs come towards the viewer.
I don’t know what the formula is for the distance of the subtitle in front of the screen, but if we suppose that it’s m*p + c, where p is some sort of plane index and m and c are constants, I’d say it’s m that’s too large rather than c. Or, to put it another way, the planes are too thick.
This makes me think I don’t understand the depth formula! If the plane value in the video header is (say) 10, does that mean that all subtitles are in plane 10 unless the video specifies a non-zero plane value, or does it mean that each subtitle is at 10 +/- an offset value specified by the video?
A plane is just a set of depth values. A subtitle track is assigned to a plane. So e.g. if the English subtitle stream is assigned to plane #3 then the depth values of that plane will be used throughout the whole movie. It’s also possible that a couple of subtitle tracks share the same plane.
Regarding the formula: currently it’s depth[plane][frame] * 2 + c, where c is the Kodi generic depth value and depth[plane][frame] is the depth value of a specific video frame of the assigned plane.
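Put as a sketch (the names are just illustrative, not the actual code), the data model and the current conversion look roughly like this:

```cpp
#include <map>
#include <vector>

// Each plane holds one depth value per video frame; each subtitle track is
// assigned to exactly one plane, and several tracks may share a plane.
using PlaneDepths = std::vector<int>;   // depth value per frame
std::vector<PlaneDepths> depth;         // depth[plane][frame]
std::map<int, int> planeOfTrack;        // subtitle track -> assigned plane

// Current formula: per-frame depth of the assigned plane, times 2,
// plus the generic Kodi depth setting c.
int CurrentSubtitleDepth(int track, int frame, int c)
{
  const int plane = planeOfTrack.at(track);
  return depth[plane][frame] * 2 + c;
}
```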
Er… so, at any given moment, the video stream might have depth values for more than one plane; the depth associated with (say) plane 3 varies from moment to moment; and any given subtitle will always use the same plane value throughout the film, but different subtitle tracks may be associated with different planes (but aren’t always). Is that right?
If I’m understanding that correctly (which is a very big if!) then I’d say your *2 multiplier is significantly too high.
yes
yes or better: it varies from frame to frame
yes
yes
According to your remarks above I’d say yes, you’re right.
Okay then, it seems like the logical thing to do with m2ts files would be, as you say, to use the first plane that has a non-zero depth value; but, if there are two planes with non-zero depths, to use the plane that has the furthest forward depth.
But what if the plane with the furthest forward depth value at the beginning of the stream has the furthest back values afterwards?
If you want to monitor it frame by frame and switch planes every time a different plane has a higher depth than the currently selected one, be my guest.
A slightly more serious suggestion: would you like to do a temporary debug build where the depth formula is
depth[plane][frame] * (1 + (c / 10))?
Or maybe even:
(depth[plane][frame] * (1 + (X / 10))) + c
where X is some other value I can tweak via the UI (for example the stereoscopic depth of the Kodi UI)?
That way I can experiment with different multipliers and tell you which one looks right. (Or which one matches what I get playing the original disc).
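Spelled out as code, the two candidates would be roughly this (just a sketch; c and X stand for whatever values the UI exposes):

```cpp
// Candidate 1: turn the subtitle depth setting c into a multiplier.
int DepthCandidate1(int planeDepth, int c)
{
  return static_cast<int>(planeDepth * (1.0 + c / 10.0));
}

// Candidate 2: use a separate value X (e.g. the Kodi UI stereoscopic depth) as
// the multiplier and keep c as a plain static offset, as it is now.
int DepthCandidate2(int planeDepth, int X, int c)
{
  return static_cast<int>(planeDepth * (1.0 + X / 10.0)) + c;
}
```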
Sure, if you’re willing to help here, that would be great! To make it as easy as possible, I suggest using the Kodi stereoscopic depth as ‘c’ in the above-mentioned formula. I can give you that debug build tomorrow.
Yes, of course! It might take me a day or two to get back to you - it would probably be tactful for me to spend some time with my partner over the long weekend.
Sure. That’s what I was suggesting with my second formula - use the Kodi UI depth setting as the multiplier, and the subtitle depth setting as a static offset (as now). That gives me plenty of control over what it’s doing.
Being as specific as possible is important. When most people refer to 4K, what they’re really referring to is the combination of 4K resolution, an HDR EOTF, HDR metadata, and a BT.2020 colour space. But videos can exist with any of these individual characteristics on their own. For example, you can get 1080p HLG, which flags an EOTF and is a form of HDR, but has no metadata.
The best way of clarifying things is usually with MediaInfo.
We generally know what people mean though.
I wouldn’t worry too much about it. It takes all sorts to make a world. This seems like a minor misunderstanding and I think it will be water under the bridge shortly.
Let’s continue our focus on testing and improving OSMC.
Cheers
Sam
Great! Then I’ll implement the formula depth[plane][frame] * (1 + c / 10), where c is the Kodi stereoscopic depth which you can control in the UI. I’ll also add some additional log messages which should help you to find out what’s happening while you’re testing.
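For clarity, the conversion in the debug build boils down to something like this (sketch only - exactly what the extra log messages will contain is down to the actual build):

```cpp
// Debug-build conversion: c is the Kodi stereoscopic depth set in the UI,
// planeDepth is the per-frame value of the assigned plane.
int DebugSubtitleDepth(int planeDepth, int c)
{
  const int result = static_cast<int>(planeDepth * (1.0 + c / 10.0));
  // The additional debug logging would go roughly here, e.g. the raw
  // planeDepth, the value of c and the computed result for the current frame.
  return result;
}
```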
I’ll send you a PM with instructions on where to get the debug build and how to install/use it.
Thanks for your help!