This is how I understand it, and why I believe real-time editing of digital video streams isn’t as common, since they are often encoded/compressed.
Adding anything to a digital video stream in real time often needs some sort of hardware support: if you have hardware decoding writing into a software-controlled buffer, where you could add something like random grain before displaying that buffer, you could add a bit of lag. (At 60fps there’s only about 16.7ms per frame to do all of that.)
I assume it must be possible to alpha-blend an overlay onto the video? That’s basically what you’re doing when you render PGS subtitles.
Suppose we have a sequence of pre-generated 1080p images, each 32-bit (RGBA), with all of the RGB values of the pixels set to black, and all of the alpha values set to somewhere between completely transparent and slightly opaque, with the alpha values distributed in a pseudo-random pattern.
Alpha-blend each image in turn over the top of each actual video frame (cycling once every few seconds so you don’t run out of images) and you’d end up with a pseudo-random pattern of slightly darker speckles overlaid on the video, with a different speckle pattern in each frame. I think that might look a bit like film grain if the overlaid images are generated properly.
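Roughly what I mean, as a sketch (numpy/Pillow; the density and alpha numbers are pure guesses to be tuned by eye):

```python
import numpy as np
from PIL import Image

# Sketch of one pre-generated grain overlay: RGB stays black everywhere,
# and a sparse pseudo-random alpha channel makes random pixels slightly
# darken whatever they're blended over. density/max_alpha are guesses.
def make_grain_image(width=1920, height=1080, density=0.05, max_alpha=40, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros((height, width, 4), dtype=np.uint8)   # RGBA, all black
    speckle = rng.random((height, width)) < density      # which pixels get grain
    img[..., 3] = speckle * rng.integers(1, max_alpha, (height, width), dtype=np.uint8)
    return Image.fromarray(img)                          # inferred as RGBA

# Pre-generate the whole cycle once, e.g. 120 images with different seeds:
grain_cycle = [make_grain_image(seed=i) for i in range(120)]
```

Compositing would then be Image.alpha_composite(frame, grain_cycle[i]) per frame (both images need to be RGBA) - though in practice you’d want that blend done on the GPU rather than in Python.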
PGS? Hmm. I was also thinking of subtitle replacement, but would the “uniqueness of grains” still come across with 60-ish pre-rendered alpha-channel pics, and how often could you swap subs (pre-rendered grain pics) without hitting the CPU limit / making the experience noticeably worse? How would you handle multiple resolutions in a “general add-on” solution? And is grain always under-exposure, like black dots, or is it both under- and over-exposure, i.e. both black and white dots? I never really thought about it, but having just “fallouts” feels less grainy. I mean “ant wars”, the analogue lost-signal screen, is both white and black, so I’d think a grainy image should have both too?
If you’re watching (say) a DVD film upscaled to 1080p, then applying a 1080p grain pattern over the upscaled image would probably still look all right. So in most cases you’d only need 1080p and 4K grain images. (Disable the add-on if the output resolution is anything else!)
I was assuming you would have a different grain image for each frame, with about 120 of them pre-calculated and stored in the add-on; then cycle through the whole sequence once every five seconds for 24fps, once every two seconds for 60fps.
Generating the grain images in real time would probably be a nightmare, but so long as they are pre-rendered and all you have to do is load them, that might be easier.
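The cycling itself is trivial once the images exist - a sketch using the 120-image figure from above:

```python
# Map a frame number onto the pre-rendered cycle: with 120 images this wraps
# every 120/24 = 5 s at 24fps and every 120/60 = 2 s at 60fps.
def grain_index(frame_number: int, num_images: int = 120) -> int:
    return frame_number % num_images
```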
Obviously it may not actually be possible to alpha-blend the grain pattern onto a frame quickly enough to sustain the frame rate. In fact, come to think of it, since the Vero 4K can’t actually display 4K/60fps stuff with subtitles without skipping frames, maybe this isn’t such a good idea!
True, perhaps not the best idea for a 4K video on the current platform. But heck, it’s an idea for a beefier system. I don’t know if there is even a grain add-on for Kodi at all? We could abuse the subtitle renderer in Kodi and, just for a start, change the pre-rendered image every 5 frames, which I believe could be enough to put a real load on some systems.
Now, subs aren’t a specialty of mine: a PGS is a pre-rendered stream that the player overlays onto the picture? Not like an .SRT, which has timestamps and characters that get rendered on the fly.
PGS stored as “bitmaps in time sequence”? Because I’m thinking: with the add-on you create a .SUP with random placements of a quarter-resolution pic, 1-4 pics per 5 frames, randomize the number of quarters on screen per timeframe, and then create a grain track for that video file on the fly, if one hasn’t already been created in the “sub folder”. Seems there is some palette info too, so you might be able to do black and white dropouts.
Edit: So by pre-rendering “the grain” in a .sup file, we don’t have to make the system do any real-time work except the overlay. Now, is the system fast enough to generate it in a few seconds while an “intro video” is played?
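For anyone who wants to poke at generating one: the SUP format is reasonably well documented in third-party write-ups - every segment has a 13-byte header (“PG” magic, 90kHz PTS/DTS, a type byte, a payload size), a display set chains PCS/WDS/PDS/ODS plus an END segment, and the bitmap itself is run-length encoded. A minimal, untested sketch of just the header part (the function name is mine):

```python
import struct

# PGS segment types, per the commonly documented SUP layout.
SEG_PDS, SEG_ODS, SEG_PCS, SEG_WDS, SEG_END = 0x14, 0x15, 0x16, 0x17, 0x80

def segment(seg_type: int, pts_seconds: float, payload: bytes) -> bytes:
    """Wrap a payload in a SUP segment header (big-endian, 13 bytes)."""
    pts = int(pts_seconds * 90000)   # PGS timestamps tick at 90 kHz
    dts = 0                          # DTS is typically left at zero in SUP files
    return struct.pack(">2sIIBH", b"PG", pts, dts, seg_type, len(payload)) + payload
```

The palette segment carries a per-entry alpha alongside the YCrCb values, which is what would let you do both dark and bright dots.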
I assume it’s a sequence of images and timestamps, so each successive image is displayed at the right time and for the right duration.
What I’m not 100% sure about is if it’s actually a full alpha-blend, or if it just uses a mask - the latter meaning every pixel is either transparent or opaque and there’s nothing in between.
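The distinction, in code form (hypothetical float helpers, alpha normalized to 0-1):

```python
import numpy as np

def full_blend(video, overlay, alpha):
    # Full alpha-blend: alpha can be anything in [0, 1], so speckles
    # can be faint - this is what the grain idea needs.
    return alpha * overlay + (1.0 - alpha) * video

def mask_blend(video, overlay, mask):
    # Mask: each pixel is either fully overlay or fully video,
    # nothing in between - grain would look like hard black dots.
    return np.where(mask, overlay, video)
```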
If you’re going with a subtitles file, there’s no reason why it couldn’t be pre-generated and stored on a permanent basis - you wouldn’t need to do it at the beginning of the film. I don’t see a need for different grain patterns for different films.
But I suspect the file size would be uncomfortably huge if you’ve got one new image every few frames and there’s a saved track for the entire film. Back of the envelope: a two-hour film is ~172,800 frames at 24fps, so a new image every 5 frames means ~34,500 stored images.
But also, to do this I think you’d have to disable all other subtitle tracks during playback, since (as far as I’m aware) you can only have one subtitle track active at a time. And disabling all subtitles, even forced ones, would be quite a big deal - can you imagine trying to (say) watch Avatar without any translation of the Na’vi dialogue?
Parting question - any ideas for how to process the video in an MKV container to achieve this whilst preserving 10-bit HDR / 4K - for free (LOL)? Can be Mac or PC, I have both, though my preference is a Mac workflow.
I’m sure ffmpeg has filters that can do this. I’m assuming you want grain like you see in some films, such as 300.
You should be able to pass ffmpeg options in most encoding tools, like Handbrake under the additional options section. It’s probably going to take you several encodes to get what you want, so I’d start with small clips - something like the sketch below.
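As a hedged starting point (the strength value and file names are placeholders, not a tuned recipe): ffmpeg’s noise filter with the temporal + uniform flags is the usual suggestion for fake grain, and yuv420p10le keeps the encode 10-bit - though whether the HDR mastering metadata survives may depend on your ffmpeg/x265 build, so check the output with ffprobe. Wrapped in Python only to keep the example self-contained:

```python
import subprocess

# Re-encode a short test clip with synthetic grain. Try alls values of ~8-20;
# allf=t+u means temporal (changes every frame) + uniform noise.
subprocess.run([
    "ffmpeg", "-i", "clip_in.mkv",                 # placeholder file names
    "-vf", "noise=alls=12:allf=t+u",
    "-c:v", "libx265", "-pix_fmt", "yuv420p10le",  # 10-bit HEVC output
    "-c:a", "copy",                                # leave audio untouched
    "clip_out.mkv",
], check=True)
```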
So it’s easier for my family to access without installing VLC, etc., I wrote a bunch of web apps to serve up media around the house from my web/media server. I took that JavaScript code above and copied it to my media server in my apache www directory. My media files are sym-linked under my web directory, and I can play them using a php script I wrote that calls that grainy JavaScript filter and places it on top of the entire web page, with the video playing underneath. The results are pretty good and definitely look better than the waxy standard-def video, especially from a slight distance. The downside is this only works from a web browser, and you have to be fairly knowledgeable about working with a web server and writing some php/html to make it work. Also, clicking the full-screen button pulls the video to the foreground above the filter, but to get around that limitation I use F11 in Chrome, and the video will play full-screen with the filter still above it.
Someone here might find that info useful, but honestly, if you’re watching on a PC, it’s probably easier to just use VLC and turn on the grain effect in the settings. I just did the web stuff for the heck of it.
Still keeping my fingers crossed someone can write a grain effect addon for Kodi one day!
DNR tends to suppress grain that is already there. We were talking about generating grain that isn’t there. Unless you can make the DNR negative and use it to introduce noise?