Implementing NeRFs
Neural Radiance Fields (NeRFs) are widely used in 3D reconstruction, and they remain a state-of-the-art approach for multi-view synthesis. For this project, my group partners and I implemented, trained, and tested a vanilla NeRF model from scratch, explaining each step and aiding comprehension with visuals!
Multi-view Synthesis of 3D objects
The concept of a NeRF is fairly straightforward: given a collection of images of an object taken from known camera orientations, can we infer what the object looks like from angles that we haven't seen?
NeRFs answer this question with traditional computer graphics techniques: a ray is cast through each pixel of each image, and a model learns the color and opacity (density) at each point along that ray. By aligning multiple views, this becomes tractable in a learning framework using nothing more than a simple model, like an MLP!
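To make the ray-casting step concrete, here is a minimal sketch of how one ray per pixel can be generated from a camera pose. It assumes a simple pinhole camera with a hypothetical focal length `focal` (in pixels) and a 4x4 camera-to-world matrix `c2w`; the actual implementation details may differ from what our project used.

```python
import numpy as np

def get_rays(H, W, focal, c2w):
    """Cast one ray per pixel: each ray starts at the camera center and
    points through its pixel on the image plane (pinhole camera model)."""
    i, j = np.meshgrid(np.arange(W), np.arange(H), indexing="xy")
    # Per-pixel directions in camera space; the camera looks down -z.
    dirs = np.stack([(i - W / 2) / focal,
                     -(j - H / 2) / focal,
                     -np.ones((H, W))], axis=-1)
    # Rotate directions into world space and normalize them.
    rays_d = dirs @ c2w[:3, :3].T
    rays_d = rays_d / np.linalg.norm(rays_d, axis=-1, keepdims=True)
    # Every ray from this image shares the same origin: the camera center.
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)
    return rays_o, rays_d
```

Querying the learned MLP at points along these rays is then just a matter of evaluating `origin + t * direction` for a set of depths `t`.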
The representation of a rotating camera and its impact on the sampling points along the ray.
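The sampling along each ray, and the compositing of the learned colors and densities into a pixel color, can be sketched as follows. This is an illustrative version of the stratified sampling and discrete volume rendering described in the NeRF paper; in practice the densities and colors would come from the MLP rather than being passed in directly.

```python
import numpy as np

def stratified_samples(near, far, n_samples, rng=None):
    """Split [near, far] into n_samples bins and draw one uniform
    sample per bin, so the whole ray segment is covered every iteration."""
    rng = np.random.default_rng(rng)
    edges = np.linspace(near, far, n_samples + 1)
    lower, upper = edges[:-1], edges[1:]
    return lower + (upper - lower) * rng.random(n_samples)

def composite(colors, sigmas, t_vals):
    """Alpha-composite per-sample colors (n, 3) and densities (n,) along
    one ray into a single pixel color, via discrete volume rendering."""
    deltas = np.diff(t_vals, append=1e10)      # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)    # per-sample opacity
    # Transmittance: how much light survives up to each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)
```

The stratified draw is what makes the rotating-camera figure interesting: as the camera moves, the same 3D region is hit by rays at different depths, and the random per-bin jitter ensures the MLP is eventually queried at a dense set of continuous positions rather than a fixed grid.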