Kun Zhou* · Wenbo Li* · Yi Wang · Tao Hu · Nianjuan Jiang · Xiaoguang Han · Jiangbo Lu#
Paper · Supplementary Material · Code
New: NeRFLiX++, an improved version that is stronger, faster, and supports 4K NeRFs.
We have also uploaded this video file (lossless) to Google Drive.
Neural radiance fields (NeRF) show great success in novel view synthesis. However, in real-world scenes, recovering high-quality details from the source images
is still challenging for existing NeRF-based approaches, due to potentially imperfect calibration and inaccuracies in the scene representation.
Even with high-quality training frames, the synthetic novel views produced by NeRF models still suffer from notable rendering artifacts, such as noise, blur, etc.
To improve the synthesis quality of NeRF-based approaches, we propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven
inter-viewpoint mixer. Specifically, we design a NeRF-style degradation modeling approach and construct large-scale training data, enabling existing deep neural
networks to effectively remove NeRF-native rendering artifacts. Moreover, beyond degradation removal, we propose an inter-viewpoint aggregation framework
that fuses highly related high-quality training images, pushing the performance of cutting-edge NeRF models to entirely new levels and producing highly
photo-realistic synthetic views.
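To illustrate the degradation-modeling idea described above, the sketch below degrades a clean training image with blur and noise to mimic NeRF-style rendering artifacts, producing a paired (degraded, clean) sample for training a restorer. This is a minimal, hypothetical simplification; the function name and parameters are our own for illustration and do not reproduce the paper's actual degradation pipeline.

```python
import numpy as np

def simulate_nerf_degradation(clean, blur_sigma=1.5, noise_std=5.0, seed=None):
    """Apply Gaussian blur + additive noise to a clean HxWx3 uint8 image,
    loosely imitating NeRF rendering artifacts (illustrative only)."""
    rng = np.random.default_rng(seed)
    img = clean.astype(np.float32)

    # 1) Separable Gaussian blur to mimic rendering softness.
    radius = int(3 * blur_sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float32)
    kernel = np.exp(-(x ** 2) / (2 * blur_sigma ** 2))
    kernel /= kernel.sum()
    for axis in (0, 1):  # blur rows, then columns
        img = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, img)

    # 2) Additive Gaussian noise to mimic ray-sampling noise.
    img += rng.normal(0.0, noise_std, img.shape)

    return np.clip(img, 0, 255).astype(np.uint8)

# Build one paired training sample from a clean image.
clean = (np.ones((32, 32, 3)) * 128).astype(np.uint8)
degraded = simulate_nerf_degradation(clean, seed=0)
pair = (degraded, clean)  # (input, target) for a restoration network
```

A real pipeline would draw the blur and noise parameters at random per sample so the restorer sees a wide range of artifact severities.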
We show how NeRFLiX can be used to enhance the rendered novel views of various NeRF models on different in-the-wild scenes.
To play with the demo, click the button in the middle of each image and slide it to the left. For results on other datasets, please refer to our paper
or try our code directly.
The source code of this webpage is adapted from Xin Yu. Thanks for their awesome work.
Last updated: March 2023. Contact: zhoukun303808@gmail.com