Talking Papers Podcast

Manuel Dahnert - Panoptic 3D Scene Reconstruction

March 07, 2022 · Itzik Ben-Shabat · Season 1, Episode 8
Show Notes

In this episode of the Talking Papers Podcast, I hosted Manuel Dahnert to chat about his paper “Panoptic 3D Scene Reconstruction From a Single RGB Image”, published at NeurIPS 2021. In this paper, they unify the tasks of reconstruction, semantic segmentation, and instance segmentation in 3D from a single RGB image, proposing a holistic approach that lifts 2D features into a 3D grid. Manuel is a good friend and colleague. We first met during my research visit at TUM while I was doing my PhD, and we spent some long evenings together at the office. We have both come a long way since then, and I am really looking forward to seeing what he cooks up next. I have a feeling this won't be his last visit to the podcast.

PAPER TITLE 
"Panoptic 3D Scene Reconstruction From a Single RGB Image" : https://bit.ly/3phnLGp

AUTHORS

Manuel Dahnert, Ji Hou, Matthias Niessner, Angela Dai

ABSTRACT

Richly segmented 3D scene reconstructions are an integral basis for many high-level scene understanding tasks, such as for robotics, motion planning, or augmented reality. Existing works in 3D perception from a single RGB image tend to focus on geometric reconstruction only, or geometric reconstruction with semantic segmentation or instance segmentation. Inspired by 2D panoptic segmentation, we propose to unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the task of panoptic 3D scene reconstruction -- from a single RGB image, predicting the complete geometric reconstruction of the scene in the camera frustum of the image, along with semantic and instance segmentations. We propose a new approach for holistic 3D scene understanding from a single RGB image which learns to lift and propagate 2D features from an input image to a 3D volumetric scene representation. Our panoptic 3D reconstruction metric evaluates both geometric reconstruction quality as well as panoptic segmentation. Our experiments demonstrate that our approach for panoptic 3D scene reconstruction outperforms alternative approaches for this task.
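
A MINIMAL SKETCH OF THE LIFTING STEP

The "lifting" of 2D features into a 3D grid can be pictured as back-projection: every voxel center in the camera frustum is projected into the image plane, and the image feature at that pixel is copied into the voxel. The PyTorch sketch below illustrates this general idea only; the function name, the nearest-neighbour sampling, and the assumption that voxel centers are given in camera coordinates are illustrative choices of mine, not the authors' implementation (see the CODE link below for that).

    import torch

    def lift_features_to_grid(feat2d, K, grid_points):
        # feat2d:      (C, H, W) 2D feature map from an image backbone
        # K:           (3, 3) camera intrinsics
        # grid_points: (N, 3) voxel centers in camera coordinates, z > 0
        # returns:     (N, C) per-voxel features (zeros for voxels that
        #              project outside the image)
        C, H, W = feat2d.shape

        # Perspective projection: u = fx*x/z + cx, v = fy*y/z + cy
        uvw = (K @ grid_points.T).T          # (N, 3)
        uv = uvw[:, :2] / uvw[:, 2:3]        # (N, 2) pixel coordinates
        u = uv[:, 0].round().long()
        v = uv[:, 1].round().long()

        # Keep only voxels whose projection lands inside the image
        valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)

        out = torch.zeros(grid_points.shape[0], C)
        out[valid] = feat2d[:, v[valid], u[valid]].T  # nearest-neighbour lookup
        return out

    # Toy usage: lift a 64-channel feature map into 1000 random frustum points
    feat2d = torch.randn(64, 120, 160)
    K = torch.tensor([[100.0, 0.0, 80.0],
                      [0.0, 100.0, 60.0],
                      [0.0, 0.0, 1.0]])
    points = torch.rand(1000, 3) * torch.tensor([2.0, 2.0, 4.0]) \
             + torch.tensor([-1.0, -1.0, 0.5])
    voxel_feats = lift_features_to_grid(feat2d, K, points)  # (1000, 64)

In the paper, the lifted volume is then processed by a 3D network that jointly predicts geometry, semantics, and instances; the sketch above covers only the projection itself.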

RELATED PAPERS

📚 Panoptic Segmentation: https://bit.ly/3vd1FZd

📚 MeshCNN: https://bit.ly/3M2lWH6

📚 Total3DUnderstanding: https://bit.ly/36yH9bf


LINKS AND RESOURCES

💻 Project Page: https://bit.ly/3JT2Dy1

💻 CODE: https://github.com/xheon/panoptic-reconstruction

🤐 Paper's peer review: https://bit.ly/3Cij44t


To stay up to date with Manuel's latest research, check out his personal page and follow him on: 

👨‍🎓Google Scholar: https://scholar.google.com/citations?user=eNypkO0AAAAJ
🐦Twitter: https://twitter.com/manuel_dahnert


CONTACT

If you would like to be a guest or sponsor, or just want to share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com


SUBSCRIBE AND FOLLOW

🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com

📧Subscribe to our mailing list: http://eepurl.com/hRznqb

🐦Follow us on Twitter: https://twitter.com/talking_papers

🎥YouTube Channel: https://bit.ly/3eQOgwP

