In 3D animated films, Physically-Based Rendering (PBR) lets filmmakers design and modify expressive, artistic lighting environments throughout production. In live-action filmmaking, by contrast, contemporary visual effects (VFX) techniques allow entities to be added to and removed from filmed performances, but offer only limited opportunities to substantially change the lighting environment in post-production. In this paper we describe a method for relighting a video of an actor’s performance under a novel 3D lighting environment by combining AI-based image manipulation with traditional 3D visual effects techniques. Our approach segments the foreground and background onto two image planes, each positioned and scaled to fill a virtual camera’s entire field of view as it moves through a 3D lighting environment, with distance-to-camera inferred using monocular metric depth models. Consistent lighting is achieved by inferring per-frame material properties (e.g., normal maps) for each plane, enabling physically-based rendering in 3D modeling software (Blender). We demonstrate that performances originally captured under diffuse lighting can later be relit in artistic ways, offering filmmakers a new option for low-cost virtual production.
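The billboard geometry the abstract alludes to, positioning an image plane at an inferred depth and scaling it to exactly fill the camera's field of view, follows from basic pinhole-camera trigonometry. Below is a minimal sketch of that computation; the function name and parameters are illustrative, not from the paper.

```python
import math

def plane_size_to_fill_fov(distance, fov_x_deg, fov_y_deg):
    """Width and height an image plane must have at `distance` from a
    pinhole camera so that it exactly fills the camera's field of view.
    (Illustrative helper, not the paper's implementation.)"""
    width = 2.0 * distance * math.tan(math.radians(fov_x_deg) / 2.0)
    height = 2.0 * distance * math.tan(math.radians(fov_y_deg) / 2.0)
    return width, height

# e.g., a background plane placed 10 m away, seen by a 90° x 60° camera:
w, h = plane_size_to_fill_fov(10.0, 90.0, 60.0)
```

In practice the foreground plane sits at the depth the monocular metric depth model predicts for the actor, and the background plane at a farther depth, so both fill the frame while parallax remains consistent as the virtual camera moves.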