Talking Papers Podcast

BACON - David Lindell

Itzik Ben-Shabat Season 1 Episode 14

In this episode of the Talking Papers Podcast, I hosted David B. Lindell to chat about his paper "BACON: Band-Limited Coordinate Networks for Multiscale Scene Representation", published in CVPR 2022.

In this paper, the authors tackle training coordinate networks by introducing a new neural network architecture with an analytical Fourier spectrum. This enables multiscale signal representation and yields an interpretable architecture with explicitly controllable bandwidth.

David recently completed his Postdoc at Stanford and will join the University of Toronto as an Assistant Professor. During our chat, I got to know a stellar academic with a unique view of the field and where it is going. We even got to meet in person at CVPR. I am looking forward to seeing what he comes up with next. It was a pleasure having him on the podcast. 

AUTHORS
David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein

ABSTRACT
Coordinate-based networks have emerged as a powerful tool for 3D representation and scene reconstruction. These networks are trained to map continuous input coordinates to the value of a signal at each point. Still, current architectures are black boxes: their spectral characteristics cannot be easily analyzed, and their behavior at unsupervised points is difficult to predict. Moreover, these networks are typically trained to represent a signal at a single scale, so naive downsampling or upsampling results in artifacts. We introduce band-limited coordinate networks (BACON), a network architecture with an analytical Fourier spectrum. BACON has constrained behavior at unsupervised points, can be designed based on the spectral characteristics of the represented signal, and can represent signals at multiple scales without per-scale supervision. We demonstrate BACON for multiscale neural representation of images, radiance fields, and 3D scenes using signed distance functions and show that it outperforms conventional single-scale coordinate networks in terms of interpretability and quality.
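For the curious, the core mechanism described above, multiplying sine filters of bounded frequency into each layer so that every intermediate output has a bounded spectrum, can be sketched roughly as follows. This is an untrained toy in NumPy, not the paper's implementation: the layer count, hidden width, frequency ranges, and initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bacon_sketch(x, n_layers=4, hidden=32, max_freq=8.0):
    """Toy sketch of a BACON-style band-limited network on 1D inputs.

    Each layer multiplies a sine filter into the hidden state, so the
    bandwidth of layer i is bounded by the sum of the filter frequencies
    up to i. Reading out after each layer gives a multiscale stack of
    increasingly high-bandwidth outputs. All sizes here are illustrative.
    """
    per_layer_freq = max_freq / n_layers  # cap each filter's frequency

    # first layer: a pure sine filter applied to the input coordinate
    omega = rng.uniform(-per_layer_freq, per_layer_freq, hidden)
    phi = rng.uniform(-np.pi, np.pi, hidden)
    z = np.sin(omega[None, :] * x[:, None] + phi)  # (N, hidden)

    outputs = []
    for _ in range(1, n_layers):
        W = rng.normal(0, 1 / np.sqrt(hidden), (hidden, hidden))
        b = rng.normal(0, 0.1, hidden)
        omega = rng.uniform(-per_layer_freq, per_layer_freq, hidden)
        phi = rng.uniform(-np.pi, np.pi, hidden)
        # multiplicative filter: sine filter times a linear map of z
        z = np.sin(omega[None, :] * x[:, None] + phi) * (z @ W + b)
        # linear readout: a band-limited output at this scale
        W_out = rng.normal(0, 1 / np.sqrt(hidden), (hidden, 1))
        outputs.append(z @ W_out)
    return outputs

x = np.linspace(-1.0, 1.0, 64)
outs = bacon_sketch(x)  # one output per scale, coarse to fine
```

Because a product of sinusoids is a sum of sinusoids at summed frequencies, each readout's spectrum is analytically bounded, which is what makes the architecture interpretable and its behavior at unsupervised points constrained.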

RELATED PAPERS
📚SIREN

📚Multiplicative Filter Networks (MFN)

📚Mip-NeRF

📚Follow-up work: Residual MFN


LINKS AND RESOURCES
💻Project website

📚 Paper

💻Code

🎥Video


To stay up to date with David's latest research, follow him on:
👨🏻‍🎓Personal Page

🐦Twitter

👨🏻‍🎓Google Scholar

👨🏻‍🎓LinkedIn


Recorded on June 15th, 2022.

CONTACT

If you would like to be a guest, sponsor or just share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com

SUBSCRIBE

🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com

📧Subscribe to our mailing list: http://eepurl.com/hRznqb

🐦Follow us on Twitter: https://twitter.com/talking_papers

🎥YouTube Channel: https://bit.ly/3eQOgwP
