The process of compositing, or placing actors in front of a background that’s not actually there, is as old as filmmaking itself — and it’s always been a pain. Netflix has a new technique that relies on machine learning to do some of the hard work, but it requires lighting actors in a garish magenta.
For decades the simplest method of compositing was chroma keying, in which actors stand against a brightly colored background (originally blue, later green) that can easily be identified and replaced with anything from a weather map to a battle with Thanos. The foreground is said to be “matted” and the background is a transparent “alpha” channel manipulated along with the red, green, and blue channels.
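For the curious, the core of a basic chroma key fits in a few lines: treat any pixel that is much greener than it is red or blue as background and make it transparent. Here's a toy sketch (it skips the spill suppression and edge refinement real keyers do):

```python
import numpy as np

def naive_chroma_key(rgb, threshold=0.3):
    """Toy chroma key: strongly green pixels are treated as background.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns an alpha matte of shape (H, W): 1.0 = foreground, 0.0 = background.
    Illustrative only, not a production keyer.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # A pixel counts as "background" when green dominates both red and blue.
    greenness = g - np.maximum(r, b)
    alpha = np.clip(1.0 - greenness / threshold, 0.0, 1.0)
    return alpha

# composite = alpha[..., None] * foreground + (1 - alpha[..., None]) * new_background
```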
Chroma keying is easy and cheap, but it has a few downsides, among them problems with transparent objects, fine details like hair, and of course anything else with a similar color to the background. It’s usually good enough, though, that attempts to replace it with more sophisticated and expensive methods (like a light field camera) have languished.
Netflix researchers are taking a shot at it, though, with a combination of old and new that could make for simple, immaculate compositing — at the cost of a hellish on-set lighting setup.
As described in a recently published paper, their “Magenta Green Screen” produces impressive results by, essentially, putting the actors in a lighting sandwich. Behind them, bright green (actively lit, not a backdrop); in front, a mix of red and blue, making for dramatically contrasting colors.
The resulting on-set look likely makes even the most seasoned post-production artist balk. Ordinarily you want to light your actors brightly with a fairly natural light, so although they might require a little punching up here and there, their in-camera appearance is relatively normal. But lighting them exclusively with red and blue light completely distorts that look, since of course normal light doesn’t have a huge chunk of its spectrum cut out.
But the technique is also clever: by making the foreground only red/blue and the background only green, it simplifies the process of separating the two. A regular camera that would normally record red, green, and blue effectively captures red, blue, and alpha instead, since the green channel picks up only the backing and so doubles as the matte. This makes the resulting mattes extremely accurate, free of the artifacts that come from having to separate a full-spectrum subject from a limited-spectrum key background.
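In code, that trick looks roughly like this (assuming a normalized image and ignoring the color calibration real footage would need):

```python
import numpy as np

def magenta_green_matte(rgb):
    """Rough sketch of why the magenta/green setup makes matting trivial.

    With the foreground lit only in red + blue and the backing lit only in
    green, the camera's green channel records (roughly) how much of each
    pixel is background. Assumes normalized values and no spectral
    crosstalk between channels, which real footage would need to correct.
    """
    green = rgb[..., 1]                      # background contribution
    alpha = np.clip(1.0 - green, 0.0, 1.0)   # 1.0 = fully foreground
    foreground_rb = rgb[..., [0, 2]]         # red/blue foreground, green missing
    return alpha, foreground_rb
```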
Of course they seem to have just substituted one difficulty for another: the process of compositing is now easy, but restoring the green channel to the magenta-lit subjects is hard.
It must be done systematically and adaptively, since subjects and compositions differ, and a “naive” linear approach to injecting green results in a washed-out, yellowish look. How can it be automated? AI to the rescue!
The team trained a machine learning model on data of their own: essentially “rehearsal” takes of similar scenes, but lit normally. The convolutional neural network is given patches of these full-spectrum images to compare against the magenta-lit ones, and learns to restore the missing green channel quickly and far more intelligently than a simple algorithm can.
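The paper’s actual network and training recipe are more involved, but the general shape of the idea (a small convolutional model that learns to predict the missing green channel from red and blue inputs, supervised by normally lit rehearsal footage) might look something like this rough sketch; the architecture and names here are illustrative, not Netflix’s:

```python
import torch
import torch.nn as nn

class GreenRestorer(nn.Module):
    """Toy stand-in for a green-channel restoration network.

    Input:  the red and blue channels of a magenta-lit frame (N, 2, H, W).
    Output: a predicted green channel (N, 1, H, W).
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, red_blue):
        return self.net(red_blue)

# Training sketch: patches of normally lit "rehearsal" footage supply the
# ground-truth green channel for the corresponding red/blue patches.
model = GreenRestorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

def train_step(red_blue_patch, true_green_patch):
    opt.zero_grad()
    pred_green = model(red_blue_patch)
    loss = loss_fn(pred_green, true_green_patch)
    loss.backward()
    opt.step()
    return loss.item()
```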
So the color can be restored surprisingly well in post (it’s “virtually indistinguishable” from an in-camera ground truth) — but there’s still the problem of the actors and set having to be lit in this horrible way. Many actors already complain of how unnatural it is to work in front of a green screen — imagine doing it while lit in a harsh, inhuman light.
The paper addresses this, however, with the possibility of “time-multiplexing” the lighting, essentially switching the magenta/green lighting on and off multiple times per second. That would be distracting (even dangerous) at 24 flashes per second, the frame rate at which most films and TV are shot, but if they switch the lights up much faster, 144 times per second, the illumination appears “nearly constant.”
This, however, requires complex synchronization with the camera, which must capture light only during the brief moments the scene is lit magenta. And the frames lost to the alternating lighting have to be accounted for so that motion still reads correctly…
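For a sense of the arithmetic: 144 is exactly six times 24, so each delivered frame spans six lighting states, and the camera has only a few milliseconds of magenta to work with. A back-of-the-envelope sketch (illustrative, not the paper’s exact timing scheme):

```python
# Rough timing for time-multiplexed lighting.
FILM_FPS = 24        # typical film/TV frame rate
LIGHT_HZ = 144       # lighting switch rate cited above

states_per_frame = LIGHT_HZ // FILM_FPS   # 6 lighting states per film frame
frame_period_ms = 1000 / FILM_FPS         # ~41.7 ms per delivered frame
state_period_ms = 1000 / LIGHT_HZ         # ~6.9 ms per lighting state

# The shutter would have to be gated to integrate light only during the
# magenta states within each frame period, which is why the camera
# synchronization described above is nontrivial.
print(states_per_frame, round(frame_period_ms, 1), round(state_period_ms, 1))
```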
As you can tell, this is still very experimental. But it’s also an interesting way of taking on an age-old problem in media production with a fresh, high-tech approach. This wouldn’t have been possible five years ago, and while it may or may not get adopted on set, it’s clearly worth trying out.