
3D scanning from photographs

November 13, 2011

I’ve been interested in representing spaces in electronic terms for almost as long as I’ve been using computers. Some technologies, like VRML or ActiveWorlds or Second Life, tried to provide an interactive 3D experience via the web. Outside the internet there is also a bewildering array of scanning platforms that allow users to capture 3D geometry and export CAD-friendly files. These can cost significant amounts of money, although in the last few years some intriguing low-cost systems have appeared.

It was in this context, then, that I discovered a new service from Autodesk several months ago: Photofly – which has just (November 10th, 2011) been re-branded with the unwieldy name “123D Catch.” This cloud-based service can create textured 3D models from digital photographs. No specialized hardware needed.

As an example of what this platform can do, I took six pictures of a boulder I came across recently on a trip. After uploading these pictures to the Photofly servers I got back a pretty decent 3D model of the rock and its immediate surroundings. The Photofly client on my desktop was then able to use the original photographs as keyframes to generate a video of the rock, re-creating the path I walked to take the photographs.

As you can see, the rock itself was modeled fairly well. There are gaps in the surrounding landscape, but those can probably be filled fairly easily – particularly by platforms like DAVID’s ShapeFusion.

Now, this platform is not without its limitations. Most notably, because your images go to the Autodesk cloud for processing, you should be careful about what you upload. Not that Autodesk is untrustworthy, but there are bound to be issues whenever you give content to someone else to process. The system’s terms of service warn you not to upload anything proprietary.

On a more technical note, the system does not work well with reflective or translucent objects, or with fine details. It also does not understand the concept of voids, so if at all possible you should keep the sky out of your pictures – that big mass of blue pixels will be processed like everything else, and the system will try to turn it into solid geometry. It would be great to be able to selectively mask photographs to avoid problems like these. Because of the math involved, the system is also less tolerant of subjects that move – even inadvertently.
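To make that masking idea concrete, here is a rough sketch of what pre-masking a photo by hand might look like before upload. To be clear, this is my own guess at a workaround, not a Photofly/123D Catch feature: the function name, filenames, and brightness threshold are invented for illustration, and all it does is black out bright, blue-dominant pixels using the Python Imaging Library.

    # Hypothetical pre-masking sketch: not a Photofly/123D Catch feature.
    # Blacks out bright, blue-dominant "sky" pixels so less featureless area
    # reaches the matcher. Requires PIL/Pillow; the threshold is only a guess.
    from PIL import Image

    def mask_sky(in_path, out_path, blue_threshold=190):
        img = Image.open(in_path).convert("RGB")
        pixels = img.load()
        width, height = img.size
        for y in range(height):
            for x in range(width):
                r, g, b = pixels[x, y]
                # Treat bright pixels where blue dominates as sky
                if b > blue_threshold and b > r and b > g:
                    pixels[x, y] = (0, 0, 0)
        img.save(out_path)

    # Example call, with a made-up filename:
    # mask_sky("boulder_03.jpg", "boulder_03_masked.jpg")

Whether the service would actually ignore blacked-out regions is an open question – the point is just that a simple per-pixel test could do the selecting.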

By way of a comparison, I submitted the same six photographs of the boulder to Microsoft’s Photosynth platform. The results can be seen below:

http://photosynth.net/embed.aspx?cid=bb955b85-235c-4b1d-ac70-6e684127a704&delayLoad=true&slideShowPlaying=false

I would argue that the Photofly results are far more satisfying than Photosynth’s, although the two systems have different aims, so a literal comparison misses the point somewhat.

Then there are times when a scan that seems straightforward somehow goes awry:

This was my second attempt at creating a 3D model of my head, based on 22 photographs. The first was an abject failure, with only five pictures being usable. This time around I was able to incorporate almost twice as many pictures, but only after manually identifying a dozen different points across 8-10 of them. Even so, most of the photoset was unusable.

Looking at the specific areas that were difficult to stitch, one problem was the glare from my bald spot (thanks for rubbing that in 🙂). Another was my hair. Neither is surprising, given the technology involved. My left ear and nose are also distorted, which is surprising given the number of photographs that included those features and their relative level of detail.

I hope to retry this experiment with pictures that have greater overlap, and perhaps a different shirt to make identifying common points easier. My eventual goal is to generate something that can be fabricated, either via an unfolding process through Pepakura or some sort of rapid prototyping. It would also be interesting to play with rendering styles and generate some sort of grotesque avatar for use on social networking sites – taking advantage of the Uncanny Valley. In a way, these sorts of “failures” could actually help with that.

I’ve produced a half dozen other scans over the last few months, but these two, I think, show the potential and the pitfalls of this technology. I can’t wait to see how both Photofly and Photosynth continue to develop over the next few months.

3 Comments
  1. November 13, 2011 2:18 pm

    Thanks for your review. If you put a piece of lint on one of your shoulders, it will create a unique point in the pictures that 123D Catch Beta can use to line up the pictures. You also have to remain perfectly still – that’s the hardest part.

    • Matt Bernhardt
      November 13, 2011 3:30 pm

      I hadn’t thought about using lint – good suggestion. I was just going to wear a shirt with more of a pattern or larger shapes, and maybe have the pictures taken from farther away. In the example I posted, I ended up using the graphic on the front of the t-shirt to help align a few of the photographs.

Trackbacks

  1. Further Adventures with 123D Catch (formerly Photofly) « Fabricated Experience
