
Further Adventures with 123D Catch (formerly Photofly)

December 17, 2011

I wrote last month about some work I’d done with Autodesk’s photo-stitching platform, 123D Catch. Since then, I’ve continued to experiment with it – with mixed results. This post attempts to summarize the lessons I’ve learned.

More Photographs May Not Mean A Better Model

Since July, when I first started playing with this service, I’ve created around 20 series of photographs for stitching. Subjects have ranged in size from the small (figurines) to the medium (myself, twice) to the huge (multi-story buildings), with a variety of planar, rough, and bulbous geometries. After all this work, the best result I’ve gotten is still the boulder in Utah that was assembled from a mere six images.

This is not to say that, for some models, more images won’t produce better results. My most recent project has been to capture a memorial to Jesse Owens that sits outside Ohio Stadium. My first attempt involved a single pass around the memorial, shooting 24 images in a general circle:

The video above took some tweaking of the alignment, but is generally okay. It wouldn’t be acceptable for a site model or fabrication, but it does capture the memorial at its most diagrammatic. Several days later, I came back to the site and shot many more photographs. I ended up with 67 images focused on only one quarter of the overall memorial – a portion that did not seem to stitch well (the second quadrant in the video above, visible clearly from 0:04 to 0:07). This expanded set of images has produced some good results, particularly in rendering the brick ground plane. By varying the distance from which I shot the memorial (at times very close, at times up to 30 yards away) I have been able, I think, to get a better outcome than by simply walking in a circle.
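
Out of curiosity, I put together a quick back-of-the-envelope sketch in Python comparing the angular step between neighboring shots across my various full-circle passes. The ten-degree “comfort” threshold is purely my own guess at what keeps adjacent photos similar enough to match – not anything Autodesk publishes:

```python
# Back-of-the-envelope: angular step between neighboring shots in a
# single full-circle pass around a subject. The 10-degree comfort
# threshold below is my own assumption, not an Autodesk figure.

def orbit_step_degrees(num_photos):
    """Angular increment between consecutive shots in a 360-degree orbit."""
    return 360.0 / num_photos

# Photo counts from the full-circle shoots described in this post
for n in (6, 22, 24, 81):
    step = orbit_step_degrees(n)
    verdict = "comfortable" if step <= 10.0 else "may strain matching"
    print("%2d photos -> %5.1f degrees between shots (%s)" % (n, step, verdict))
```

Tellingly, the six-image boulder shoot blows well past that threshold and is still my best model to date – which says something about how rough any such rule of thumb really is.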

More challenges remain, however. The resulting geometry is still a long way from being usable in a studio context, even for the simplest massing model. Looking at the jagged output, I’m actually reminded most of some of the projects that Nick Gelpi undertook during his LeFevre fellowship here at Ohio State. Nick was particularly interested in the deformations introduced by scanning physical objects and prototyping a model from digital representations – so I can imagine that he would find these twisted representations of the built environment to be fascinating.

Do Not Move. At All.

Challenged by my earlier failure to get a compelling digital model of myself, I asked my friend Lorrie to photograph me again. This time, she took 81 pictures instead of 22. The distances between each image were smaller, leading to greater overlap between each picture and its neighbors.

The good news is that this time I got a nearly-complete model of my torso. The bad news is that I look like I’ve been splinched – to borrow a term from Harry Potter for teleportation gone awry. I was not aware of changing my body position – but it took nearly 10 minutes for Lorrie to photograph me from every direction. It is entirely possible (even probable, given the outcome) that my body shifted during the shoot. I’ve attempted to correct this issue by starting with a smaller set of images and then adding the rest after an initial pass – but so far that has not resolved the issue. I have also tried assigning reference points around my head, but have not made much headway.

Several people have commented, here or on YouTube, that it might be necessary to place reference markers on me to facilitate the stitching. That may be my next step, although I did try to improve the shoot by wearing a button-down shirt – which gives the stitching algorithm more natural detail to work with. Another option might be to scale down each image, on the theory that a 3-4 megapixel image might contain a bit too much noise, particularly if the camera moved at all.
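
If I do go the downscaling route, something like the following sketch would handle it in batch. It’s a minimal example using the Pillow imaging library; the shoot/ folder name, the 2-megapixel target, and the JPEG quality setting are all my own assumptions, not anything 123D Catch documents:

```python
# Minimal sketch: shrink every JPEG in shoot/ to roughly 2 megapixels
# before upload. Folder names, target size, and quality are assumptions.
from pathlib import Path

from PIL import Image  # pip install Pillow

MAX_PIXELS = 2000000  # aim for roughly 2 megapixels per image

src_dir, dst_dir = Path("shoot"), Path("shoot_small")
dst_dir.mkdir(exist_ok=True)

for photo in src_dir.glob("*.jpg"):
    with Image.open(photo) as im:
        scale = (MAX_PIXELS / float(im.width * im.height)) ** 0.5
        if scale < 1.0:  # only ever shrink, never enlarge
            new_size = (int(im.width * scale), int(im.height * scale))
            im = im.resize(new_size, Image.LANCZOS)
        im.save(dst_dir / photo.name, quality=90)
```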

Stay tuned, either here or on my YouTube channel, for more updates as I continue to play with this technology.


3 Comments
  1. bgconstrukt
    December 18, 2011 6:03 pm

    For shooting yourself: you have to do it quickly. If it takes more than one minute to cover the full 360 degrees, it’s likely you have moved. I’ve had success here with my quick SLR, moving around people. You may need to manually stitch a few from behind, depending on your increments and whatever else in the scene can help feature-wise.

  2. qubic
    April 10, 2012 10:43 am

    This all looks really impressive! But does anyone know what kind of matching algorithm they are using? I’ve been trying to find some info online, but no luck so far!

    • Matt Bernhardt
      April 16, 2012 11:41 pm

      I haven’t looked, but no – I don’t know what algorithm the software uses. Frankly, I’m not sure what the options are.
