BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream

ECCV 2024

Wenpu Li1,5*    Pian Wan1,2*    Peng Wang1,3*    Jinghang Li4    Yi Zhou4    Peidong Liu1†
* denotes equal contribution. † denotes corresponding author.
1Westlake University   2EPFL   3Zhejiang University   4Hunan University   5Guangdong University of Technology

We explore the possibility of recovering the neural radiance fields and the camera motion trajectory from a single blurry image and its event stream!

Abstract

Implicit scene representations have attracted significant attention in recent computer vision and graphics research. Most prior methods focus on reconstructing the 3D scene representation from a set of images.

In this work, we demonstrate the possibility of recovering the neural radiance fields (NeRF) from a single blurry image and its corresponding event stream. To eliminate motion blur, we introduce the event stream to regularize the learning of NeRF by accumulating the events into an image of brightness change. We model the camera motion with a cubic B-Spline in SE(3) space. Both the blurry image and the brightness change within a time interval can then be synthesized from the NeRF, given the 6-DoF poses interpolated from the cubic B-Spline. Our method jointly learns the implicit scene representation and the camera motion by minimizing the differences between the synthesized data and the real measurements, without any prior knowledge of camera poses.
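
To make the trajectory model concrete, here is a minimal sketch of cumulative cubic B-Spline pose interpolation. It is not the authors' implementation: for brevity it splits the pose into SO(3) x R^3 (the paper interpolates in full SE(3), where rotation and translation are coupled), and all names (cumulative_basis, interpolate_pose, ctrl_rots, ctrl_trans) are ours.

# Minimal sketch: cumulative cubic B-Spline pose interpolation.
# Assumption: pose split into SO(3) x R^3; the paper uses full SE(3).
import numpy as np
from scipy.spatial.transform import Rotation as R

def cumulative_basis(u):
    """Cumulative cubic B-Spline basis (B~1, B~2, B~3) at u in [0, 1)."""
    return np.array([
        (5 + 3 * u - 3 * u**2 + u**3) / 6.0,
        (1 + 3 * u + 3 * u**2 - 2 * u**3) / 6.0,
        u**3 / 6.0,
    ])

def interpolate_pose(ctrl_rots, ctrl_trans, u):
    """Pose at normalized time u from four control poses (R_i, t_i)."""
    B = cumulative_basis(u)
    rot, trans = ctrl_rots[0], ctrl_trans[0].copy()
    for j in range(3):
        # Scale the relative increment between consecutive control
        # poses by the cumulative basis in the Lie algebra.
        d_rot = (ctrl_rots[j].inv() * ctrl_rots[j + 1]).as_rotvec()
        rot = rot * R.from_rotvec(B[j] * d_rot)
        trans = trans + B[j] * (ctrl_trans[j + 1] - ctrl_trans[j])
    return rot, trans

# Example: four control poses spanning the exposure time.
ctrl_rots = [R.from_rotvec([0.0, 0.0, 0.1 * i]) for i in range(4)]
ctrl_trans = [np.array([0.05 * i, 0.0, 0.0]) for i in range(4)]
rot_u, trans_u = interpolate_pose(ctrl_rots, ctrl_trans, u=0.5)

The cumulative formulation yields a C2-continuous trajectory, so both the exposure-time blur and the event brightness changes can be synthesized from the same four control poses.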

We evaluate the proposed method on both synthetic and real datasets. The experimental results demonstrate that we are able to render view-consistent latent sharp images from the learned NeRF and bring a blurry image to life in high quality.

Keywords: Neural Radiance Fields, Event Stream, Pose Estimation, Deblurring, Novel View Synthesis, 3D from a Single Image

Pipeline

Given a single blurry image and its corresponding event stream, our method jointly recovers the underlying 3D scene representation and the camera motion trajectory. In particular, we represent the 3D scene with neural radiance fields and the camera motion trajectory with a cubic B-Spline in SE(3) space. Both the blurry image and the accumulated events within a time interval can thus be synthesized from the 3D scene representation, given the camera poses. The camera trajectory and the NeRF are then jointly optimized by minimizing the difference between the synthesized data and the real measurements.
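
Below is a minimal sketch of the two loss terms, under stated assumptions: render stands for a NeRF volume-rendering call returning a grayscale image (a hypothetical name, not the authors' API), poses_exposure are poses sampled from the spline across the exposure time, and C is the event-camera contrast threshold. The exact weighting and event model details are simplified relative to the paper.

# Minimal sketch of the joint objective; render(pose) is a
# hypothetical NeRF volume-rendering call.
import torch
import torch.nn.functional as F

def photometric_loss(render, poses_exposure, blurry_image):
    # A blurry image is modeled as the average of the latent sharp
    # images rendered along the trajectory during the exposure time.
    synthesized_blur = torch.stack([render(p) for p in poses_exposure]).mean(dim=0)
    return F.mse_loss(synthesized_blur, blurry_image)

def event_loss(render, pose_start, pose_end, accumulated_events, C=0.2):
    # Events measure log-brightness change: the rendered log-intensity
    # difference between the two interpolated poses should match the
    # accumulated event polarities scaled by the contrast threshold C.
    eps = 1e-5
    delta_log = torch.log(render(pose_end) + eps) - torch.log(render(pose_start) + eps)
    return F.mse_loss(delta_log, C * accumulated_events)

# Both terms backpropagate into the NeRF weights and the spline
# control poses, so no pose prior is needed:
# loss = photometric_loss(...) + lambda_event * event_loss(...)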

Results

The experimental results demonstrate that our method learns accurate radiance fields from a single blurry image and its event stream, enabling the rendering of view-consistent sharp images from the learned NeRF. Furthermore, since our method jointly recovers the scene representation and the camera motion trajectory, it can decode a series of sharp images from a single blurry image, bringing the blurry image to life in high quality.
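
Continuing the hypothetical helpers sketched above, decoding such a series of sharp frames amounts to densely resampling the learned spline; the timestamps and frame count here are illustrative.

# Render latent sharp frames at densely sampled spline times;
# interpolate_pose and render are the hypothetical helpers above,
# with render consuming the (rotation, translation) pose.
import numpy as np

frames = [render(interpolate_pose(ctrl_rots, ctrl_trans, u))
          for u in np.linspace(0.0, 1.0, num=30, endpoint=False)]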

Synthetic datasets

[Galleries: blurry inputs and our rendered results on five synthetic scenes.]

Real-world datasets

[Galleries: blurry inputs and our rendered results on five real-world scenes.]

Comparison

[Qualitative comparison figures.]

Citation

@inproceedings{li2024benerf,
    author    = {Wenpu Li and Pian Wan and Peng Wang and Jinghang Li and Yi Zhou and Peidong Liu},
    title     = {BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year      = {2024}
}