In July 2021, Meta launched the Benchmarking City-Scale 3D Map Making with Mapillary Metropolis request for proposals (RFP). Today, we’re announcing the winners of these research awards.
Earlier this year, we introduced Mapillary Metropolis, a novel city-scale data set designed to establish a new, more complex benchmarking paradigm for training and testing computer vision algorithms in the context of semantic 3D map making.
For this RFP, we sought research proposals that leverage Mapillary Metropolis to advance core computer vision algorithms, using one or, preferably, multiple data modalities from the data set to improve semantic 3D map building. We were particularly interested in the following areas:
- City-scale 3D modeling from heterogeneous data sources
- ML for object recognition, tracking, and dense labeling
- Image-based matching, relocalization, and retrieval
The RFP attracted 29 proposals from 27 universities and institutions around the world. Thank you to everyone who took the time to submit a proposal, and congratulations to the winners.
Research award winners
Principal investigators are listed first unless otherwise noted.
Factorized, object-centric implicit representations for city-scale scenes
Jiajun Wu, Hong-Xing (Koven) Yu (Stanford University)
Multi-modal 6DOF visual relocalization in Mapillary Metropolis
Torsten Sattler, Zuzana Kukelova (Czech Technical University in Prague)
Neural feature fields for photorealistic scene synthesis
Andreas Geiger (University of Tübingen, Germany)