Abstract
Recent advancements in Neural Radiance Fields (NeRF) have enhanced their capabilities for novel view synthesis, yet their reliance on dense multi-view training images poses a practical challenge, often leading to artifacts and a loss of fine object detail. To address this, we propose ARC-NeRF, an effective regularization-based approach built on a novel Area Ray Casting strategy. Whereas previous ray augmentation methods cover only a single unseen view per extra ray, our proposed Area Ray covers a broader range of unseen views with just a single ray and enables adaptive high-frequency regularization based on target-pixel photo-consistency. Moreover, we propose luminance consistency regularization, which enforces consistent relative luminance between the original ray and the Area Ray, yielding more accurate object textures. Relative luminance, a free-lunch extra supervision signal easily derived from RGB images, can be effectively utilized in few-shot scenarios where available training data is limited. Our ARC-NeRF outperforms its baseline and achieves competitive results on multiple benchmarks, rendering fine details sharply.
Area Ray Casting

Luminance Consistency Regularization

For simplicity, we use a relative luminance value normalized to \( [0, 1] \), and derive the GT relative luminance \( y_\text{GT} \) of a target pixel as follows: $$y_\text{GT} = \sum_{\bar{c} \in \{\bar{r}, \bar{g}, \bar{b}\}} \lambda_{\bar{c}}\, \bar{c},$$ where \( \bar{c} = c_\text{GT}^{2.2} \) denotes a linear RGB component converted from its gamma-compressed counterpart by applying a simple power curve.
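The derivation above can be sketched in a few lines of NumPy. The paper only denotes the per-channel weights as \( \lambda_{\bar{c}} \); the Rec. 709 coefficients used below are a common choice and should be read as an assumption, not the paper's exact values.

```python
import numpy as np

# Assumed luminance weights (Rec. 709); the paper leaves lambda_c unspecified.
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])  # for (r, g, b)

def relative_luminance(rgb_gt: np.ndarray) -> np.ndarray:
    """Derive y_GT in [0, 1] from a gamma-compressed RGB pixel in [0, 1].

    Each channel is first linearized with the simple power curve
    c_bar = c_GT ** 2.2, then the weighted channels are summed.
    """
    linear = np.clip(rgb_gt, 0.0, 1.0) ** 2.2  # gamma decoding
    return linear @ LUMA_WEIGHTS               # sum over c of lambda_c * c_bar

# Example: a mid-gray pixel maps to roughly 0.22 after linearization.
y = relative_luminance(np.array([0.5, 0.5, 0.5]))
```

Because the weights sum to one, a pure white pixel yields a relative luminance of exactly 1 and a black pixel yields 0, so the output stays in \([0, 1]\) as required.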
Experiments
Frequency regularization effect of Area Ray

Effectiveness of Area Ray as a bundle of rays

Comparison with Other Baselines
3-view: notable improvement in the detail of the tail.
6-view: apple surface textures are reconstructed more stably across changing views.
9-view: brick textures are also reproduced more consistently.
Citation
Acknowledgements
This work was supported by NRF grant (2021R1A2C3006659) and IITP grant (RS-2021-II211343), both funded by MSIT of the Korean Government. The work was also supported by Samsung Electronics (IO201223-08260-01).