Leveraging deep learning to save you time on property photography
One of the most exciting things about working at Matterport is combining our large set of 2D and 3D data with powerful technologies like computer vision and deep learning. Our job is to use these tools to create something that is not only technically impressive, but also offers real value to our users. Today, we’re unveiling the public beta of our newest breakthrough: Instant Galleries.
We started with one simple goal: Empower customers to create and download high quality property or real estate photography in a fraction of the time.
We know high-quality photos are critical whenever a living space is sold, leased, or rented. Photos may lack the context and interactivity of a Matterport Space, but they’re familiar and required for every MLS. They’re easy to share. Every single website listing needs them. They’re useful in emails, social media posts, ads, posters, direct mailings, brochures, and more.
Traditionally, using Matterport for 2D photos meant manual time spent: opening Workshop, navigating to a scan position, composing each shot, capturing and labeling, then downloading the set. For a typical real estate listing, this could take 20 minutes or more. A recent Matterport survey showed that 54% of users took over 30 minutes per model in post-production with Matterport Workshop.
Getting usable 2D photos used to take quite a while. Today, that process gets much faster.
After months of research and testing, Matterport is excited to announce the public beta of Instant Galleries. With it, Matterport 3D models now include a gallery of 2D photos. Automatically -- without you lifting a finger.
Don’t believe us? Here are real, unedited photos, created automatically from a normal Pro2 model scan:
The number of photos varies with the size of the model, with larger models getting more photos. We also name each photo -- Kitchen, Bedroom -- with support for one-click renaming if you’d like to change it.
Everything else is the same. The photos we create act just like photos you might create yourself: the same resolutions, the same easy bulk download, and the same ability to delete unneeded photos. We even set the best snapshot as the Start Position.
We’re excited about Instant Galleries, but there’s still room to improve. That’s why we’ve included the ability to give feedback on the images, to let us know if we’ve picked a winner -- or if the snapshot could have been better. We’ll use this information to improve Instant Galleries.
What does this mean?
Matterport wants to make your lives easier. We believe the best way to do that is to leverage our machine learning technology to build tools that save you time and money.
Recently we launched the Fast Capture firmware update, dramatically speeding up the capture process and halving the time spent on site. Instant Galleries further reduces the time it takes to get your 2D photos and build your Highlight Reels. Get everything you need, and get on with business.
We hope you enjoy Instant Galleries and find it useful. For best results, make sure to scan using our 2D photography guidelines!
Contact us if you have any questions or feedback.
Curious about the details?
If you’re wondering how it all works, we can shed some light.
First, for each scan position, we use semantic understanding to determine the type of room the scan is in. This is easier for some rooms than for others: if the room has a bed, it’s probably a bedroom. But if there’s an open floor plan, it can be difficult to judge where the kitchen, dining room, and living room begin and end. Some rooms could also be multiple types: an empty spare room could be a bedroom, a home office, a home gym, or many other things. In these cases, we’ll label the room “Unfurnished.”
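To make the idea concrete, here is a minimal sketch of rule-based room labeling. It assumes an object detector has already run; the cue objects, labels, and rules are illustrative assumptions, not Matterport's actual classifier.

```python
# Hypothetical sketch: map detected objects to a room label.
# The object detector itself (e.g. a CNN) is assumed to exist;
# these rules are illustrative, not Matterport's production logic.

# Ordered rules: the first rule whose cue object is present wins.
ROOM_RULES = [
    ("bed", "Bedroom"),
    ("stove", "Kitchen"),
    ("toilet", "Bathroom"),
    ("dining table", "Dining Room"),
    ("sofa", "Living Room"),
]

def label_room(detected_objects):
    """Pick a room label from a set of detected objects.

    Falls back to "Unfurnished" when no distinctive object is found,
    mirroring the ambiguous-room case described above.
    """
    objects = set(detected_objects)
    for cue, label in ROOM_RULES:
        if cue in objects:
            return label
    return "Unfurnished"

print(label_room({"bed", "lamp"}))  # Bedroom
print(label_room(set()))            # Unfurnished
```

A real system would weigh detection confidences and room geometry rather than a simple first-match rule, but the fallback behavior is the same: no distinctive furniture means no confident room type.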
Once we have an idea of the number and types of rooms, our next goal is to start finding locations and angles for the photos. This is challenging, because a rule that works well in one room may fail in another. So, we use some broader principles: if the range of colors is very small, then that is likely a poor photo. If our depth sensors tell us a large plane is close in front of us, then we’re probably facing a wall or corner, which we don’t want. Instead, we try to pick points of view with the biggest viewing volume (length, width, and height), since that should give the user the longest lines of sight.
But that’s still tricky: if there’s a big window, then the largest viewing volume might be facing out the window -- and ignoring the beautiful living room immediately adjacent. By using these techniques and others, we’re able to score each of the possible photos, and then create only the very best ones for each room.
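The scoring described above can be sketched as a few filters plus a volume-based score. The thresholds, weights, and field names here are assumptions for illustration only, not Matterport's tuned values.

```python
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    color_range: float      # spread of colors in the view, 0..1 (assumed scale)
    nearest_plane_m: float  # distance to nearest large plane, from depth data
    view_volume_m3: float   # length * width * height of the visible volume

def score_view(c: Candidate) -> float:
    """Score a candidate viewpoint using the heuristics above.

    Illustrative thresholds: a tiny color range suggests a blank surface,
    and a large plane within half a meter suggests we're facing a wall.
    """
    if c.color_range < 0.05:
        return 0.0
    if c.nearest_plane_m < 0.5:
        return 0.0
    # Bigger viewing volume -> longer lines of sight -> higher score.
    return math.log1p(c.view_volume_m3)

candidates = [
    Candidate(color_range=0.4, nearest_plane_m=2.0, view_volume_m3=60.0),
    Candidate(color_range=0.4, nearest_plane_m=0.3, view_volume_m3=80.0),  # wall
    Candidate(color_range=0.02, nearest_plane_m=3.0, view_volume_m3=90.0), # blank
]
best = max(candidates, key=score_view)  # the open view wins despite smaller volume
```

The window problem mentioned above would add further terms (for example, penalizing views where most of the volume lies outside the room), which is why a single rule is never enough on its own.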
Finally, we take our selection of photos, name them, and put them in order -- with communal rooms like the dining room and kitchen appearing before bedrooms and bathrooms -- and use the very best for the Start position.
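The final ordering step can be sketched as a sort with communal rooms ranked first. The specific ranking and the `(label, score)` pair representation are assumptions for illustration.

```python
# Illustrative room ordering: communal rooms before private ones.
ROOM_ORDER = {
    "Living Room": 0,
    "Kitchen": 1,
    "Dining Room": 2,
    "Bedroom": 3,
    "Bathroom": 4,
    "Unfurnished": 5,
}

def order_gallery(photos):
    """Sort (room_label, score) pairs: communal rooms first, then by score."""
    return sorted(photos, key=lambda p: (ROOM_ORDER.get(p[0], 99), -p[1]))

gallery = order_gallery([
    ("Bedroom", 0.9),
    ("Kitchen", 0.8),
    ("Living Room", 0.7),
])
# The highest-scoring photo overall becomes the Start position.
start_photo = max(gallery, key=lambda p: p[1])
```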
Scott Adams is a product manager and Gunnar Hovden is a software architect at Matterport