Relightable Gaussian Splatting for Virtual Production Using Image-Based Illumination

Showcase Datasets Code

Video Overview & Abstract




In-camera visual effects in virtual production (VP) use LED walls to provide background imagery and image-based lighting. While this enables on-set compositing, it tightly couples lighting, background, and scene appearance, limiting flexibility for downstream editing. To address this, we propose a VP-specific framework for 3D reconstruction and relighting using Gaussian Splatting. Existing research on inverse rendering relies on physically-based rendering frameworks that jointly estimate 3D geometry and lighting via environment maps. However, these maps are typically low-resolution and assume far-field lighting; in VP, where image-based lighting is near-field and high-resolution, this leads to inaccurate results and a non-trivial editing process. Instead, our framework leverages the known background imagery to condition the relighting process. This avoids relying on environment maps and reduces compositing to a simple background-image editing task. To drive our framework, we introduce a capture process and dataset of real VP scenes recorded under varying background content and illumination. This footage is used to decompose a 3D scene into fixed-appearance and variable-lighting components. The variable-lighting process simulates light transport by parameterizing each primitive with a learnable UV coordinate, intensity value, and resolution modifier. Using mipmaps, these parameters directly sample the background texture in image space, implicitly capturing reflections and refractions without relying on physically-based rendering. Combined with the fixed appearance, this allows us to render relit scenes using the vanilla Gaussian Splatting rasterizer. Compared to the baselines, our implementation achieves high-quality 3D reconstruction and controllable relighting. It is efficient (less than 3 GB RAM, less than 5 GB VRAM, less than 2 hours of training, ~35 FPS) and supports rendering useful AOVs, including depth, lighting intensity, variable lighting color, and unlit renders. A subjective study also demonstrates the practical benefits of using our framework for photorealistic relighting in post-production.
We propose a VP-specific framework that avoids physically-based inverse rendering. We introduce a multi-view, multi-illumination dataset that captures VP scenes under varying LED-wall content, and leverage the known background imagery to decompose Gaussian Splatting appearance into fixed and variable lighting components. Variable lighting is modeled by assigning each Gaussian a UV coordinate that samples the background texture, modulated by a learnable per-Gaussian intensity parameter. Combined with a base color representing the fixed appearance, this enables relighting through direct manipulation of the background image. Our method achieves high-quality 3D reconstruction and controllable relighting without requiring geometry, normals, or environment maps. It is efficient (less than 3 GB RAM, less than 5 GB VRAM, less than 2 hours of training, ~35 FPS) and supports rendering useful AOVs, including depth, lighting intensity, variable lighting color, and unlit renders.
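To make the variable-lighting step concrete, below is a minimal sketch (not our released code) of how a per-Gaussian UV coordinate, scalar light intensity, and mip-level "resolution modifier" could sample the known background texture through a mipmap pyramid and be combined with a fixed appearance color before vanilla Gaussian Splatting rasterization. The function names, the trilinear interpolation between mip levels, and the additive combination with the fixed color are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def build_mipmaps(background, num_levels=5):
    """background: (3, H, W) tensor in [0, 1]. Returns progressively downsampled levels."""
    mips = [background]
    for _ in range(num_levels - 1):
        mips.append(F.avg_pool2d(mips[-1].unsqueeze(0), kernel_size=2).squeeze(0))
    return mips

def sample_background(mips, uv, level):
    """uv: (N, 2) in [0, 1]; level: (N,) continuous mip level. Returns (N, 3) colors."""
    grid = (uv * 2.0 - 1.0).view(1, -1, 1, 2)             # grid_sample expects [-1, 1]
    lo = level.floor().long().clamp(0, len(mips) - 1)
    hi = (lo + 1).clamp(max=len(mips) - 1)
    frac = (level - lo.float()).clamp(0.0, 1.0).unsqueeze(-1)
    out = torch.zeros(uv.shape[0], 3, device=uv.device)
    for lvl in range(len(mips)):                           # gather samples level by level
        take_lo, take_hi = lo == lvl, hi == lvl
        if not bool(take_lo.any() or take_hi.any()):
            continue
        s = F.grid_sample(mips[lvl].unsqueeze(0), grid, align_corners=True)  # (1, 3, N, 1)
        s = s.squeeze(0).squeeze(-1).t()                   # (N, 3)
        out = out + torch.where(take_lo.unsqueeze(-1), (1.0 - frac) * s, torch.zeros_like(s))
        out = out + torch.where(take_hi.unsqueeze(-1), frac * s, torch.zeros_like(s))
    return out

def relit_colors(fixed_rgb, light_intensity, uv, level, mips):
    """Fixed appearance plus intensity-scaled background sample (additive combination assumed)."""
    return fixed_rgb + light_intensity.unsqueeze(-1) * sample_background(mips, uv, level)
```

The resulting per-Gaussian colors can then be passed to the unmodified 3DGS rasterizer, which is why no physically-based light transport is needed at render time.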


Main Showcase

Showcasing our novel view and lighting results on a miniature virtual production stage. Please pause the video if needed.

Dataset 1:
Timestamp [00-01s] shows the novel view and lighting results
Timestamp [01-39s] shows the dynamic (video) lighting result



Dataset 3:
Timestamp [00-10s] shows the novel view and lighting results
Timestamp [10-40s] shows a star exploding in space
Timestamp [40-60s] shows lightning
Timestamp [60-99s] shows bubbles moving around and changing color



Novel View Synthesis Capabilities

The following demonstrates the novel view synthesis and lighting capabilities of our reconstruction and relighting framework.
Timestamp [00-12s] Dataset 1
Timestamp [12-24s] Dataset 2
Timestamp [24-40s] Dataset 3



Editing the real LED Wall's Exposure Settings

As mentioned in the main paper, our light intensity parameter models the magnitude of outgoing light captured by the camera, which also captures the LED wall's brightness and color settings. Because we decompose the scene into fixed and variable lighting components, we can directly modify the variable light intensity parameter to reduce only the LED wall's brightness while preserving the color and intensity of the fixed lighting setup. This is done by multiplying the light intensity value by a scalar between 0 and 1 prior to rendering, as sketched below.
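A minimal sketch of this exposure edit, assuming the per-Gaussian intensities are stored as a single tensor (parameter and function names are illustrative):

```python
import torch

def dim_led_wall(light_intensity: torch.Tensor, exposure_scale: float) -> torch.Tensor:
    """light_intensity: (N,) per-Gaussian variable-light intensities; exposure_scale in [0, 1]."""
    assert 0.0 <= exposure_scale <= 1.0
    # Scale only the variable (LED-wall) lighting; the fixed appearance is untouched.
    return light_intensity * exposure_scale

# Example: halve the apparent LED-wall brightness before rendering.
# colors = relit_colors(fixed_rgb, dim_led_wall(light_intensity, 0.5), uv, level, mips)
```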



Live background & lighting design for VP

By simulating the virtual production set, users can design, edit, and modify the background textures in real time, without even visiting the virtual production set in person. This example was streamed from Photoshop.
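As an illustration only (the streaming transport, `render_fn`, and `show` below are assumptions, not part of our release), a live-preview loop simply re-reads the latest background texture each frame and re-renders with the scene parameters left unchanged:

```python
import time
import imageio.v3 as iio
import torch

def live_preview(render_fn, gaussians, texture_path, fps=30.0, show=print):
    """Re-read the streamed background texture each frame and re-render the relit scene."""
    while True:
        img = iio.imread(texture_path)[..., :3]                         # latest artist edit (RGB)
        tex = torch.from_numpy(img).float().permute(2, 0, 1) / 255.0    # (3, H, W) in [0, 1]
        frame = render_fn(gaussians, background=tex)                    # relight with the new texture
        show(frame)                                                     # e.g. push to a viewer window
        time.sleep(1.0 / fps)
```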



Editable IBL Geometry (post-training) & Video IBL Synthesis Examples (end of video)


Novel View & Lighting Synthesis Results

Dataset 3: Comparison to TensoIR and Proxy Baselines
[Frame 01-10] Scene/Subset 1
[Frame 11-20] Scene/Subset 2
[Frame 21-30] Scene/Subset 3



Dataset 2: Comparison to TensoIR and Proxy Baselines
[Frame 01-10] Scene/Subset 1
[Frame 11-20] Scene/Subset 2
[Frame 21-30] Scene/Subset 3



Dataset 1: Comparison to TensoIR and Ablation when the Light Intensity is parametrized as 3-channel (RGB) instead of 1-channel (Greyscale)
[Frame 01-10] Scene/Subset 1
[Frame 11-20] Scene/Subset 2
[Frame 21-30] Scene/Subset 3



Testing Local Lighting Response

We visualize the local lighting response using a plain blue background with a white square moving across the screen. This reveals limitations regarding the consistency of the local Gaussian lighting response.
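For reference, a minimal sketch of how such a test stimulus could be generated (resolution, frame count, and square size are illustrative assumptions):

```python
import numpy as np

def local_lighting_test_frames(num_frames=120, height=1080, width=1920, square=200):
    """Plain blue background with a white square sweeping horizontally across the image."""
    frames = []
    for t in range(num_frames):
        frame = np.zeros((height, width, 3), dtype=np.uint8)
        frame[..., 2] = 255                                   # blue channel (RGB order) everywhere
        x = int((width - square) * t / max(num_frames - 1, 1))
        y = (height - square) // 2
        frame[y:y + square, x:x + square] = 255               # white square moving left to right
        frames.append(frame)
    return frames
```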



Subjective Study Workflow: A New Paradigm for VFX

The subjective study compares our approach to (A) purely manual footage relighting, and (B) manual relighting with the help of AOVs generated using our approach. This presents a new paradigm for VFX, so below we provide educational videos that break down the workflow for tasks (A) and (B).

Note that the length of the videos does not correspond to the total time; this was fixed to one hour for each task and artist.

(A) Purely Manual Relighting


(B) Manual Relighting with the help of AOVs. This includes the relit image rendered from our method and the light intensity, scale, UV, residual color, and canonical color maps.


BibTeX

@article{...}