🔍 NEF: Neural Error Fields for Follow-up Training with Fewer Rays

Int. Conf. on Computer Vision Theory and Applications (VISAPP) 2026 (To Appear)

Video · GitHub

1University of Trento, 2Keio University, 3University of Stuttgart, 4Graz University of Technology

*Equal Contribution

Teaser figure (luchetti_visapp26_teaser.png):
NeRF renderings with different total numbers of ray samples (a, c--e) and NEF (b), focusing on post-hoc ray pruning. While NeRF renders the learned scene colors (a), NEF, trained on the same dataset, renders the expected errors. Just as NeRF synthesizes view-dependent colors, NEF represents view-dependent photometric errors and occlusions (insets from a different angle). This property allows us to identify the pixels, i.e., rays, that contribute most to follow-up training, so the original NeRF model can be updated with a smaller number of samples (c vs. e). A prior method (Goli et al., 2024) using the same number of samples fails to capture such pixels.

Abstract A Neural Radiance Field (NeRF) represents a scene by learning view-dependent properties from a specific set of images through neural network training. When the initial image set is insufficient, an additional photographing session and further training are required to improve the final view synthesis. For this purpose, we introduce a new variant of NeRF training analysis, termed the Neural Error Field (NEF). NEF visualizes and identifies view-dependent errors, which we use to reduce the number of ray samples needed in the follow-up training. NEF requires no modifications to the NeRF core or its training process. We evaluate and verify the accuracy of NEF on several public datasets, covering real and synthetic images as well as bounded and unbounded scenes.
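The mechanism the abstract describes lends itself to a short sketch: fit a small field that maps a ray to its expected photometric error, then keep only the highest-error rays for the follow-up NeRF training. The PyTorch code below is a minimal, hypothetical illustration of that idea; the names (ErrorField, fit_error_field, select_rays) and the 6-D ray encoding are assumptions for illustration, not the paper's actual implementation.

import torch
import torch.nn as nn

class ErrorField(nn.Module):
    """Tiny MLP: (ray origin, ray direction) -> predicted photometric error."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # errors are non-negative
        )

    def forward(self, rays):  # rays: (N, 6) = origin concatenated with direction
        return self.net(rays).squeeze(-1)

def fit_error_field(rays, residuals, steps=200, lr=1e-3):
    """Supervise the field with per-ray residuals |rendered - ground truth|."""
    field = ErrorField()
    opt = torch.optim.Adam(field.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(field(rays), residuals)
        loss.backward()
        opt.step()
    return field

def select_rays(field, candidate_rays, budget):
    """Keep only the `budget` rays the field predicts to be most erroneous."""
    with torch.no_grad():
        scores = field(candidate_rays)
    return candidate_rays[scores.topk(budget).indices]

if __name__ == "__main__":
    rays = torch.randn(4096, 6)        # stand-in ray batch (not real data)
    residuals = torch.rand(4096)       # stand-in per-ray photometric errors
    field = fit_error_field(rays, residuals)
    subset = select_rays(field, rays, budget=512)  # rays kept for follow-up training
    print(subset.shape)                # torch.Size([512, 6])

In this toy setup the error field plays the role the paper assigns to NEF: it is trained alongside (but independently of) the radiance field, so the NeRF core and its training loop remain untouched.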

BibTeX
@inproceedings{luchetti2026nef,
    title={NEF: Neural Error Fields for Follow-up Training with Fewer Rays},
    author={Luchetti, Alessandro and Ito, Kenta and Schmalstieg, Dieter and Kalkofen, Denis and Mori, Shohei},
    booktitle={Int. Conf. on Computer Vision Theory and Applications (VISAPP)},
    year={2026}
}

Acknowledgement This work was supported by the Alexander von Humboldt Foundation, funded by the German Federal Ministry of Research, Technology and Space, and by JST BOOST, Japan (Grant Number JPMJBS2409).