In nuce: Researchers at Google Research present a new method for creating high-resolution HDR renderings of scenes whose source photos are noisy and poorly lit. The technique at its core is called NeRF.
What is NeRF? The abbreviation stands for Neural Radiance Field and refers to an approach that creates high-resolution, spatial (3D) renderings of a scene from a set of ordinary LDR (Low Dynamic Range) photos.
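For the technically curious, here is a minimal sketch of the core idea behind NeRF: a function maps 3D points to density and color, and a pixel is rendered by marching a ray through that field and alpha-compositing the samples. In the real method this function is a neural network trained on the input photos; the `radiance_field` below is just a hand-written toy stand-in (a glowing fog ball), and all names and constants here are illustrative, not from the paper.

```python
import numpy as np

# Toy stand-in for the NeRF network: maps 3D points to a density and an
# RGB color. In the actual method this is a trained MLP that also takes
# the viewing direction; here a fog ball at the origin plays the scene.
def radiance_field(points):
    dist = np.linalg.norm(points, axis=-1, keepdims=True)
    density = 5.0 * np.exp(-dist**2)                    # denser near the center
    color = np.repeat(1.0 / (1.0 + dist), 3, axis=-1)   # brighter near the center
    return density, color

# Classic volume rendering: sample the field along a camera ray and
# alpha-composite the samples into a single pixel color.
def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, color = radiance_field(points)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density[:, 0] * delta)        # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # light surviving so far
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)

pixel = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # composited RGB value for this one ray
```

Rendering every pixel of a virtual camera this way produces a novel view of the scene; training adjusts the field until its renders match the input photos.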
What can NeRF do? The novelty of the approach is that it can also reconstruct scenes captured in very dark lighting conditions. To do so, NeRF can draw on thousands of photos as input, combining their information to suppress noise.
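Why does combining many photos help in the dark? Because a single reconstructed scene must explain every noisy input image, the reconstruction effectively averages the noise away. This back-of-the-envelope sketch shows only that statistical intuition, not the paper's actual training procedure; the signal and noise values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 0.02   # a very dark pixel, in linear units
read_noise = 0.05    # sensor noise larger than the signal itself

for n_views in (1, 25, 100, 400):
    # Each input photo observes the same pixel with independent noise;
    # averaging n observations shrinks the noise by roughly sqrt(n).
    obs = true_signal + read_noise * rng.standard_normal((10000, n_views))
    estimate = obs.mean(axis=1)
    print(f"{n_views:4d} views -> std of estimate: {estimate.std():.4f}")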
Who is behind it? NeRF was developed by a group at Google Research consisting of Ben Mildenhall, Peter Hedman, and Pratul Srinivasan (all three Research Scientists), Ricardo Martin Brualla (VR/AR Engineer), and Jon Barron (Senior Staff Research Scientist).
The team says its technology lets you reconstruct a scene directly from linear raw images. Because the renders stay in raw space, the full dynamic range of the scene is preserved. And by rendering such raw images, the team goes on to explain, new kinds of high dynamic range view synthesis tasks become possible.
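What does keeping the output in linear raw space buy you in practice? Exposure and tonemapping become choices you make after rendering rather than before capture. The small sketch below illustrates that idea; the gamma curve and exposure values are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def tonemap(linear_rgb, exposure, gamma=2.2):
    # Re-expose in linear space first, then apply a display gamma and clip.
    scaled = linear_rgb * exposure
    return np.clip(scaled, 0.0, 1.0) ** (1.0 / gamma)

# A linear HDR render spanning a huge brightness range: deep shadow,
# midtone, and a highlight far above display white.
hdr_pixels = np.array([0.001, 0.05, 12.0])

for exposure in (0.08, 1.0, 16.0):
    print(f"exposure {exposure:5.2f}:", tonemap(hdr_pixels, exposure).round(3))
```

Because the render retains the full range, a low exposure recovers the highlight and a high exposure lifts the shadows from the same output, something an 8-bit LDR render has already clipped away.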
Click further: If your thirst for knowledge extends beyond the work of Google Research, you could enrol at the University of California, Berkeley to study computer science, where three of the researchers earned their academic degrees. If that is too time-consuming, you can click through to the fascinating research paper or watch the video below for an explanation of NeRF. You are also welcome to look up the clever minds Ben Mildenhall, Peter Hedman, Pratul Srinivasan, Ricardo Martin Brualla and Jon Barron online.
So: We hope you enjoy expanding your knowledge horizons!
NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images