
Automated 3D modeling of building interiors is useful in applications such as virtual reality and entertainment. In this talk, we develop an architecture and associated algorithms for the fast, automatic generation of photo-realistic 3D models of building interiors. The central challenge of such a problem is to localize the acquisition device while it is in motion, rather than collecting the data in a stop-and-go fashion. In the past, such acquisition devices have been mounted on wheeled robots or human-operated pushcarts, which limits their use to planar environments. Our goal is to address the more difficult problem of localization and 3D modeling in more complex, non-planar environments such as staircases or caves. Thus, we propose a human-operated backpack system equipped with a suite of sensors, including laser scanners, cameras, and inertial measurement units (IMUs), which are used both to localize the backpack and to build the 3D geometry and texture of the scene. We show examples of the resulting 3D models for multiple floors of the electrical engineering building at U.C. Berkeley.
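
To make the localization-while-in-motion idea concrete, below is a minimal sketch, not the authors' actual pipeline, of how per-step relative motion estimates (e.g., rotation from an IMU and translation from laser scan matching) can be chained into a 6-DOF trajectory. All function names and the toy inputs are illustrative assumptions.

    # Minimal sketch (illustrative, not the backpack system's algorithm):
    # compose per-step relative transforms into global 6-DOF poses.
    import numpy as np

    def make_transform(rotation, translation):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    def integrate_trajectory(relative_transforms):
        """Chain relative transforms, starting from the origin, into a pose list."""
        pose = np.eye(4)
        trajectory = [pose.copy()]
        for T_rel in relative_transforms:
            pose = pose @ T_rel    # accumulate motion one step at a time
            trajectory.append(pose.copy())
        return trajectory

    if __name__ == "__main__":
        # Two toy steps: move 0.5 m forward, then another step that moves
        # 0.5 m forward and ends with the heading rotated 90 degrees about z
        # (e.g., rounding a corner on a staircase landing).
        theta = np.pi / 2
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
        steps = [make_transform(np.eye(3), [0.5, 0.0, 0.0]),
                 make_transform(Rz,        [0.5, 0.0, 0.0])]
        for i, pose in enumerate(integrate_trajectory(steps)):
            print(f"pose {i}: position = {pose[:3, 3]}")

In practice, drift accumulates when relative estimates are simply chained like this, which is why systems of this kind typically refine the trajectory with additional constraints before building geometry and texture.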