Robust Novel View Synthesis with Color Transform Module

Dept. of Electrical and Computer Engineering, Seoul National University

With the color transform module, we can accurately reconstruct the geometry and radiance of low-textured regions.

Abstract

Advances in the Neural Radiance Field (NeRF) and its variants have demonstrated remarkable capabilities in generating photo-realistic novel views from a small set of input images. While recent works suggest various techniques and model architectures that enhance speed or reconstruction quality, little attention has been paid to exploring the RGB color space of input images. In this paper, we propose a universal color transform module that can maximally harness the captured evidence for the neural network at hand. The color transform module uses an encoder-decoder framework that maps the RGB color space into a new latent space, enhancing the expressiveness of the input domain. We attach the encoder and the decoder at the input and output of a NeRF model of choice, respectively, and jointly optimize them to maintain the cycle consistency of the proposed transform, in addition to minimizing the reconstruction errors in the feature domain. Our comprehensive experiments demonstrate that the learned color space significantly improves reconstruction quality compared to the conventional RGB representation. Its benefits are particularly pronounced in challenging scenarios such as low-light environments and scenes with low-textured regions. The proposed color transform pushes the limits of the input domain and offers a promising avenue for advancing the reconstruction capabilities of various neural representations.

Method Overview

Our color transform module comprises an encoder f(c) and a decoder g(l). (a) With f(c), we transform the colors of input images into latent values and train the radiance fields using reconstruction losses in both the transformed and color domains. (b) To ensure meaningful values in the latent domain, we incorporate cycle consistency loss as a regularization technique.
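
Below is a minimal sketch of how such a module could look in PyTorch, assuming small MLPs for f and g and squared-error losses. The layer widths, latent dimension, and equal loss weighting are illustrative assumptions, not the paper's exact configuration.

import torch.nn as nn


def mlp(in_dim, out_dim, hidden=64, depth=3):
    """Small MLP used for both the encoder f and the decoder g."""
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)


class ColorTransformModule(nn.Module):
    def __init__(self, latent_dim=3):
        super().__init__()
        self.f = mlp(3, latent_dim)  # encoder f(c): RGB -> latent
        self.g = mlp(latent_dim, 3)  # decoder g(l): latent -> RGB

    def losses(self, rendered_latent, gt_rgb):
        target_latent = self.f(gt_rgb)
        # (a) reconstruction loss in the transformed (latent) domain
        loss_latent = (rendered_latent - target_latent).pow(2).mean()
        # (a) reconstruction loss in the color domain after decoding
        loss_rgb = (self.g(rendered_latent) - gt_rgb).pow(2).mean()
        # (b) cycle consistency: g(f(c)) should recover the input color c
        loss_cycle = (self.g(target_latent) - gt_rgb).pow(2).mean()
        return loss_latent + loss_rgb + loss_cycle

In this sketch, the radiance field is trained to output latent values instead of RGB, and g decodes them back to color at render time.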

Qualitative Results

Qualitative results of optimizing NeRF in challenging environments. The first row showcases the rendered images of each scene, with the lower triangle enhanced for better visibility in dark regions. The second row presents the corresponding depth images. The effectiveness of our CTM is most prominent in dark scenes: with the limited color range of the input images, vanilla NeRF fails to converge at all. Our module not only stabilizes convergence but also automatically finds an adequate mapping that increases performance. We also observed that the feature grid of DVGO can sometimes fail to converge, despite its typically fast optimization.

Comparison with Other Color Spaces

We compare our scene-specific color transformation against existing handcrafted color maps, replacing our method with transformations based on the HLS, HSV, Lab, and Luv color spaces. Additionally, we apply a warping technique that maps the cylindrical Hue channel into Cartesian coordinates (denoted as cart). Our method yields the best performance in this challenging scenario.
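
The cart warping can be sketched as follows, assuming hue and saturation are normalized to [0, 1] as in HSV. The function name and channel layout are illustrative assumptions.

import torch


def hue_to_cartesian(hsv):
    """Map HSV values of shape (..., 3) to a Cartesian encoding (x, y, v)."""
    h, s, v = hsv.unbind(dim=-1)
    angle = 2.0 * torch.pi * h   # hue as an angle on the unit circle
    x = s * torch.cos(angle)     # radius given by saturation
    y = s * torch.sin(angle)
    return torch.stack([x, y, v], dim=-1)

Placing hue on a circle makes hues near 0 and near 1 neighbors in the transformed space, removing the wrap-around discontinuity of the hue angle.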

BibTeX

@inproceedings{kim2023robust,
  title={Robust Novel View Synthesis with Color Transform Module},
  author={Kim, S. M. and Choi, C. and Heo, H. and Kim, Y. M.},
  booktitle={Computer Graphics Forum},
  volume={42},
  number={7},
  pages={e14931},
  year={2023},
  publisher={Wiley Online Library}
}
Copyright 2024, 3D Vision Laboratory, Dept. of Electrical and Computer Engineering, Seoul National University.
Contact: Room 916, Building 301, 1 Gwanak-ro, Gwanak-gu, Seoul, Republic of Korea