Dense Depth from Event Focal Stack
Kenta Horikawa1, Mariko Isogawa1, Hideo Saito1, and Shohei Mori2,1
1Keio University, 2University of Stuttgart
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025
Abstract We propose a method for dense depth estimation from an event stream generated when sweeping the focal plane of the driving lens attached to an event camera. In this method, a depth map is inferred from an "event focal stack" composed of the event stream using a convolutional neural network trained with synthesized event focal stacks. The synthesized event stream is created from a focal stack generated by Blender for any arbitrary 3D scene. This allows for training on scenes with diverse structures. Additionally, we explored methods to eliminate the domain gap between real event streams and synthetic event streams. Our method demonstrates superior performance over a depth-from-defocus method in the image domain on synthetic and real datasets.
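No official code is reproduced on this page. As a rough illustration of the pipeline described in the abstract, the Python sketch below converts a rendered focal stack (e.g., Blender frames from a focal-plane sweep) into a synthetic event stream by thresholding log-intensity changes, in the spirit of ESIM-style event simulation, and then bins the events into per-sweep-segment slices. The function names, contrast thresholds, and signed-count representation are illustrative assumptions, not the authors' actual event synthesis or network input format.

import numpy as np

def focal_stack_to_events(frames, c_pos=0.2, c_neg=0.2, eps=1e-6):
    """Turn a focal stack (list of HxW grayscale frames in [0, 1], ordered
    along the focal sweep) into synthetic events (t, y, x, polarity).
    Events fire where the log intensity moves past a contrast threshold,
    a simplified, assumed stand-in for a full event simulator."""
    log_ref = np.log(frames[0] + eps)  # per-pixel reference log intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_cur = np.log(frame + eps)
        diff = log_cur - log_ref
        pos = diff >= c_pos    # brightness rose past the threshold
        neg = diff <= -c_neg   # brightness fell past the threshold
        for mask, pol in ((pos, +1), (neg, -1)):
            ys, xs = np.nonzero(mask)
            events.extend((t, y, x, pol) for y, x in zip(ys, xs))
        # Reset the reference where an event fired (a simplification: a real
        # simulator would emit multiple events per large intensity jump).
        fired = pos | neg
        log_ref[fired] = log_cur[fired]
    return events

def events_to_event_focal_stack(events, n_frames, height, width, n_bins=8):
    """Accumulate signed event counts into an (n_bins, H, W) tensor, one
    slice per segment of the focal sweep -- one plausible "event focal
    stack" tensor a CNN could consume."""
    stack = np.zeros((n_bins, height, width), dtype=np.float32)
    for t, y, x, p in events:
        b = min(int(t * n_bins / n_frames), n_bins - 1)
        stack[b, y, x] += p
    return stack

# Example with random frames standing in for Blender renders:
# frames = [np.random.rand(64, 64) for _ in range(32)]
# events = focal_stack_to_events(frames)
# stack = events_to_event_focal_stack(events, n_frames=32, height=64, width=64)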
@inproceedings{horikawa_wacv25,
  author    = {Horikawa, Kenta and Isogawa, Mariko and Saito, Hideo and Mori, Shohei},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  title     = {Dense Depth from Event Focal Stack},
  year      = {2025}
}
Acknowledgement This work was partially supported by JST PRESTO JPMJPR22C1, Keio University Academic Development Funds, the Austrian Science Fund FWF (grant no. P33634), and JSPS KAKENHI JP23H03422.