Implicit scene representations have attracted considerable attention in recent computer vision and graphics research. Most prior methods focus on reconstructing a 3D scene representation from a set of images.
In this work, we demonstrate the possibility of recovering a neural radiance field (NeRF) from a single blurry image and its corresponding event stream.
To eliminate the motion blur, we introduce the event stream to regularize the learning process of NeRF by accumulating the events into an image.
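As a schematic illustration of this accumulation (the contrast threshold $\Theta$ and latent brightness $L$ below are our notational assumptions, not the paper's fixed notation), the polarity-weighted sum of events at a pixel $\mathbf{u}$ over an interval approximates the log-brightness change:
$$
\sum_{t_k \le t_j < t_{k+1}} p_j\,\Theta \;\approx\; \log L(\mathbf{u}, t_{k+1}) - \log L(\mathbf{u}, t_k),
$$
where $p_j \in \{-1, +1\}$ denotes the polarity of the $j$-th event triggered at pixel $\mathbf{u}$. This is the standard event generation model and indicates how accumulated events constrain the brightness change predicted by the NeRF.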
We model the camera motion with a cubic B-spline in SE(3) space. Both the blurry image and the brightness change within a time interval can then be synthesized from the NeRF, given the 6-DoF poses interpolated from the cubic B-spline.
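A minimal sketch of the blur formation used for supervision, assuming the blurry image is modeled as the average of latent sharp images rendered along the interpolated trajectory (the notation is ours):
$$
\mathbf{B}(\mathbf{u}) \;\approx\; \frac{1}{n} \sum_{i=1}^{n} \mathbf{I}\big(\mathbf{u};\, \mathbf{T}(t_i)\big), \qquad \mathbf{T}(t_i) \in \mathrm{SE}(3),
$$
where $\mathbf{T}(t_i)$ is a pose interpolated from the cubic B-spline control poses within the exposure time, and $\mathbf{I}(\cdot\,;\mathbf{T})$ is the sharp image rendered from the NeRF at pose $\mathbf{T}$.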
Our method jointly learns the implicit scene representation and the camera motion by minimizing the difference between the synthesized data and the real measurements, without requiring any prior knowledge of the camera poses.
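Conceptually, this joint optimization can be written as minimizing a photometric term on the blurry image together with an event term on the accumulated brightness changes; the weight $\lambda$ and the symbols below are illustrative assumptions rather than the paper's exact formulation:
$$
\min_{\theta,\,\{\mathbf{T}_c\}} \;\; \big\| \mathbf{B} - \hat{\mathbf{B}}(\theta, \{\mathbf{T}_c\}) \big\|_2^2 \;+\; \lambda\, \big\| \mathbf{E} - \hat{\mathbf{E}}(\theta, \{\mathbf{T}_c\}) \big\|_2^2,
$$
where $\theta$ denotes the NeRF parameters, $\{\mathbf{T}_c\}$ the B-spline control poses, and $\hat{\mathbf{B}}$, $\hat{\mathbf{E}}$ the blurry image and accumulated event measurements synthesized from the learned representation.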
We evaluate the proposed method on both synthetic and real datasets. The experimental results demonstrate that our method renders view-consistent latent sharp images from the learned NeRF and brings a blurry image back to life in high quality.
Keywords: Neural Radiance Fields, Event Stream, Pose Estimation, Deblurring, Novel View Synthesis, 3D from a Single Image