That’s right. Since you’ve asked, I’d like to give you an update now. When I was working on the “flip eye” support for HSBS/HTAB videos a couple of months ago, I also had a look at Full-SBS/TAB support. What I found was a nightmare. The 3D code in the kernel seems to work by chance more than by design. The code is in very bad shape: duplicate code everywhere, 3D-related code that influences 2D playback, dead code, and code that simply doesn’t do what it should.
While I was working on the above-mentioned flip-eye support I was also cleaning up and refactoring parts of that code to prepare it for Full-SBS/TAB support. So I’ve already implemented parts of it, but tbh, the more I think about it, the more I believe that throwing away all that 3D code in the kernel and rewriting it from scratch would be the better solution.
To make a long story short: Full-SBS/TAB support is still on my todo list. As soon as Kodi 20 is out I’ll look at the 2D/3D subtitle-related issues (you remember?). After that I can start working on it (if nothing more important shows up).
Yes, you’re right - at first sight. But there’s a difference: HSBS means that both parts of a frame, the left-eye and the right-eye view, need to be stretched to reach the correct dimensions of e.g. 1920x1080. In the FSBS case, no stretching is needed because the views already have the correct dimensions. Unfortunately, the current 3D implementation stretches everything, even when it gets a 4K frame holding the two views, so you end up with a 3840x2160 view as a result.
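To make that a bit more concrete, here’s a rough sketch (plain C, hypothetical names, not the actual kernel code) of the stretch factor each packing mode would need:

```c
/* Hypothetical sketch, not the actual kernel code: the stretch factor
 * each packed view needs to reach its proper display size. */
enum packing { PACK_HALF_SBS, PACK_HALF_TAB, PACK_FULL_SBS, PACK_FULL_TAB };

struct stretch { int x; int y; };   /* horizontal / vertical scale factor */

static struct stretch view_stretch(enum packing p)
{
    switch (p) {
    case PACK_HALF_SBS:
        return (struct stretch){ 2, 1 };   /* e.g. 960x1080 -> 1920x1080 */
    case PACK_HALF_TAB:
        return (struct stretch){ 1, 2 };   /* e.g. 1920x540 -> 1920x1080 */
    case PACK_FULL_SBS:
    case PACK_FULL_TAB:
    default:
        return (struct stretch){ 1, 1 };   /* views already full size    */
    }
}
```

The problem described above is essentially the full-packed cases being treated like the half-packed ones, so a full-packed frame gets stretched even though it already has the right dimensions.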
You might ask why that stretching (or the lack of it) matters at all: shouldn’t the device (Vero) just forward the frame as it is to the TV, so that any stretching is handled by the TV and not by the device?
That’s true in general, but if you want to support e.g. eye flipping, you need to cut the frame into pieces, swap the left and right views, and combine them back into a frame that is finally sent to the TV. The code that does this assumes HSBS/HTAB input, and all the calculations needed to cut the frame and recombine it afterwards rely on that. So for a 4K frame the wrong dimensions are used and you get garbage in the end.
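As a rough illustration of that cut/swap/recombine step (a simplified sketch with made-up names and a plain packed pixel buffer; the real code operates on quite different structures), the important point is that the per-eye geometry has to match the actual packing of the frame:

```c
/* Simplified eye-flip sketch (hypothetical, not the kernel code): swap the
 * left and right views of a side-by-side packed frame. 'pixels' is assumed
 * to be a tightly packed buffer with 4 bytes per pixel. The point is that
 * eye_w must describe the actual packing, otherwise the cut points are wrong. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static void flip_eyes_sbs(uint8_t *pixels, int frame_w, int frame_h, int eye_w)
{
    const size_t bpp = 4;                          /* bytes per pixel     */
    const size_t stride = (size_t)frame_w * bpp;   /* bytes per frame row */
    const size_t eye_bytes = (size_t)eye_w * bpp;  /* bytes per view row  */
    uint8_t *tmp = malloc(eye_bytes);

    if (!tmp)
        return;

    for (int y = 0; y < frame_h; y++) {
        uint8_t *row = pixels + (size_t)y * stride;
        memcpy(tmp, row, eye_bytes);               /* save left view      */
        memcpy(row, row + eye_bytes, eye_bytes);   /* right -> left       */
        memcpy(row + eye_bytes, tmp, eye_bytes);   /* saved left -> right */
    }
    free(tmp);
}
```

In the real pipeline those cut/recombine calculations sit next to the scaling step, and since they currently assume half-packed input, a full-packed 4K frame ends up being processed with the wrong numbers.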
Additionally, the current code relies on the dimensions of the input frame and uses them as the resolution of each view. So e.g. a 1920x1080 input means that each view is assumed to have that resolution. For 4K input the code assumes that each view has 4K resolution, but that’s not the case for Full-SBS/TAB. That’s another PITA that needs to be addressed.
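In other words, the per-view resolution should be derived from the packing mode rather than copied from the input frame unconditionally; something along these lines (again hypothetical and simplified, reusing the packing enum from the sketch above):

```c
/* Hypothetical sketch: derive the per-view resolution from the packing
 * mode instead of reusing the input frame dimensions unconditionally. */
struct view_size { int w; int h; };

static struct view_size per_view_resolution(enum packing p, int in_w, int in_h)
{
    struct view_size v = { in_w, in_h };   /* what the current code assumes */

    switch (p) {
    case PACK_FULL_SBS:
        v.w = in_w / 2;    /* e.g. 3840x1080 -> 1920x1080 per view */
        break;
    case PACK_FULL_TAB:
        v.h = in_h / 2;    /* e.g. 1920x2160 -> 1920x1080 per view */
        break;
    case PACK_HALF_SBS:
    case PACK_HALF_TAB:
        break;             /* input size == per-view size (after stretching) */
    }
    return v;
}
```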
So to make another long story short: I’m on it, but it will take some time …