Kun Zhou · Wenbo Li · Nianjuan Jiang · Xiaoguang Han · Jiangbo Lu#
Paper (IEEE Early Access) · Paper (arXiv) · Code · NeRFLiX (CVPR 2023)
New: We show that NeRFLiX++ can also effectively enhance the visual quality of city-scale scenes without requiring any additional finetuning.
Neural radiance fields (NeRF) have shown great success in novel view synthesis. However, recovering high-quality details from
real-world scenes is still challenging for existing NeRF-based approaches, due to potentially imperfect calibration information and scene
representation inaccuracy. Even with high-quality training frames, the synthetic novel views produced by NeRF models still suffer from notable
rendering artifacts, such as noise and blur. To address this, we propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a
degradation-driven inter-viewpoint mixer. Specifically, we design a NeRF-style degradation modeling approach and construct large-scale training
data, enabling deep neural networks to effectively remove NeRF-native rendering artifacts. Moreover, beyond the
degradation removal, we propose an inter-viewpoint aggregation framework that fuses highly related high-quality training images, pushing the
performance of cutting-edge NeRF models to entirely new levels and producing highly photo-realistic synthetic views. Based on this paradigm,
we further present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer, achieving superior
performance with significantly improved computational efficiency. Notably, NeRFLiX++ is capable of restoring photo-realistic
ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views. Extensive experiments demonstrate the excellent restoration
ability of NeRFLiX++ on various novel view synthesis benchmarks.
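To make the degradation-driven paradigm above more concrete, here is a minimal, illustrative sketch of how one might simulate NeRF-style artifacts on clean training views to build (degraded, clean) pairs for training a restorer. This is not the paper's actual simulator (NeRFLiX++ uses a more elaborate two-stage, region-adaptive pipeline); the function name, parameters, and the specific blur/noise/resolution operations are assumptions chosen purely for illustration.

```python
import numpy as np

def simulate_nerf_degradation(clean_view, blur_sigma=1.5, noise_std=0.02,
                              down_factor=2, rng=None):
    """Toy stand-in for a NeRF-style degradation simulator (illustrative only).

    Takes a clean training view (H, W, 3) in [0, 1] and returns a degraded
    version that loosely mimics NeRF rendering artifacts: blur, resolution
    loss, and rendering noise.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    img = clean_view.astype(np.float32)

    # 1) Blur: crude separable smoothing with a box kernel along each image axis.
    k = max(1, int(round(blur_sigma * 2)))
    kernel = np.ones(2 * k + 1, dtype=np.float32) / (2 * k + 1)
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, img)

    # 2) Resolution loss: naive down/upsampling via strided indexing + repeat.
    h, w = img.shape[:2]
    low = img[::down_factor, ::down_factor]
    img = np.repeat(np.repeat(low, down_factor, axis=0),
                    down_factor, axis=1)[:h, :w]

    # 3) Rendering noise: additive Gaussian noise, clipped back to [0, 1].
    img = np.clip(img + rng.normal(0.0, noise_std, img.shape), 0.0, 1.0)
    return img

# Usage sketch: turn ordinary training views into (degraded, clean) pairs,
# then train any image/video restoration network on these pairs.
clean = np.random.default_rng(1).random((128, 128, 3)).astype(np.float32)
degraded = simulate_nerf_degradation(clean)
```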
We show how NeRFLiX++ can be used to enhance the rendered novel views of various NeRF models on different in-the-wild scenes.
To play with the demo, click the button in the middle of each image and slide it to the left. For results on other datasets, please refer to our paper for more details.
The source code of this webpage is adapted from Xin Yu; thanks for the awesome work. We also thank Imgsli for providing such a powerful visualization tool.
Last updated: June 2023 · Contact: zhoukun303808@gmail.com