Instructables

Six Camera Face Scan Rig (using 123DCatch / Photofly for processing)

I was recently asked to build "something fun" for a science museum event.

Since I was already waist-deep in the world of photogrammetry at work (the reconstruction of 3D data from photographs), I decided to build a camera rig for capturing the faces of the visitors (mostly kids). Now if you have kids, or have ever seen a kid on TV, you know that they very rarely sit still for more than 0.25 seconds unless they are sleeping or in an Elmo-induced hypnotic trance... though even that lasts for only about 0.58 seconds. How, then, can kids be photographed from multiple angles simultaneously?

Hmm... we could get a super high speed camera (let's say 10,000 frames per second), put the child on a stool in front of the camera and spin him really fast! Surely we'll get at least a few useable angles with high similarity and no vomit. The parents have an issue with that? Whatever! Little Johnny will almost certainly experience much worse, likely self-inflicted scenarios.

Fine, next idea: Purchase 6 cheap-wad digital cameras and wire them up to one trigger button. That's it! Simple and obvious!

This photo gallery (with annotations) illustrates the 3-week long process of sourcing, cutting, soldering, rigging, wiring and data collecting. In the end, you'll see that the ability to freeze little people in time from multiple angles pays off!

Read on, friends!
threiner 2 years ago
Really nice! A bit more construction detail would be cool.
Gilius 2 years ago
What software did you use for the 3d reconstruction?
Tall-drinks (author) Gilius 2 years ago
I direct you, good sir, to the title. ;)

I use Autodesk's 123DCatch for processing. It's fast, clean and so far free! :D
Better results than competitors in my testing (like Agisoft).

I'd definitely recommend it, even before I found out that Instructables was purchased by Autodesk! ;)
Oh, I missed that. I am very impressed, because all my results with 123D have been a lot worse.
Tall-drinks (author) Gilius 2 years ago
Ha! Just messin' with ya!
If you follow the guidelines for Catch (lots of YouTube videos from Autodesk), you should get good results.

The most important things that I found were:
1. Take clear and well-spaced photos. This means little image grain, no motion blur, sharp focus and plenty of overlap between shots (15-30 degree increments). Try not to rotate the camera too much; keep all shots either in landscape or portrait.

2. Watch the exposure! This is huge, especially with skin. You need to set the exposure on your camera so that there are no blown-out highlights and no black shadows. The software has nothing to "latch onto" in these types of areas, and you'll get crazy lumps in the mesh. On a mobile phone, I just tap the brightest spot on the subject and it auto-exposes and focuses on that, which usually works. Best results with a good camera or DSLR, though.

3. There should be no movement in your scene from shot to shot. No moving leaves, shadows, facial expressions, cars, hair, etc. That's why this 6 camera system works great.
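The spacing rule in point 1 works out to simple arc geometry: spread the cameras over a frontal arc so that neighbors sit 15-30 degrees apart. A minimal Python sketch (the 100-degree frontal arc and 1 m radius are illustrative assumptions, not measurements from this rig):

```python
import math

def rig_positions(n_cameras=6, arc_deg=100.0, radius_m=1.0):
    """Spread cameras evenly along a horizontal arc facing the subject,
    so that neighboring views fall inside the 15-30 degree overlap
    window that photogrammetry tools tend to prefer."""
    step = arc_deg / (n_cameras - 1)  # angular spacing between neighbors
    positions = []
    for i in range(n_cameras):
        # angle measured from the subject's forward axis, centered on 0
        theta = math.radians(-arc_deg / 2 + i * step)
        x = radius_m * math.sin(theta)  # left/right of the subject
        z = radius_m * math.cos(theta)  # distance in front of the subject
        positions.append((round(x, 3), round(z, 3)))
    return step, positions

step, cams = rig_positions()
print(step)  # 20.0 -> six cameras over a 100-degree arc land in the window
```

With six cameras on a 100-degree arc, neighbors end up 20 degrees apart, comfortably inside the guideline.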

Good luck! Feel free to ask any questions you have.
Hi,
I tried this trick with a simple stereo (2 camera) rig. I even bought a couple of video cameras that have IR remote control. In my experience with both sets of cameras, still and video, there's a problem of "over design" in their electronics, i.e. they are too internally complex to guarantee that shutters triggered simultaneously will actually take simultaneous pictures. In other words, something in the software/hardware is building in inconsistent latency, which results in shutter skew. I took some photos of people and at least 30% of them show movement between frames, even though they were triggered simultaneously!
Since I noticed those cheapy GE cameras, I've been fantasizing about getting ~6-10 cameras to create a light-field array (a real one, not the Lytro "focus later" gimmick), which can also serve the functions of taking more than 2 frames for lenticular 3D array and super hi-def panoramas.
But spending ~$500 and only realizing later that the latency issue makes my camera array only good for still images, on zero-wind days, is going to be heartbreaking.
So my obvious question is: are the GE cameras displaying that annoying "feature" when you use them in an array? I was hoping they are simple enough (or at least sold with a single software version).
Thanks for the instructable.
elabz kakungulu 2 years ago
I think the biggest issue with timing here is not the complexity of the cameras or different software versions but the fact that you are using IR control.

IR protocols are designed to add redundancy because IR propagation is not very reliable and you're not always pointing straight at the receiver. They re-send the same command multiple times, and I think what happens is that the different cameras get a correct reading at different times: for example, because the IR receiver in one camera is slightly more or less sensitive, or it's running on a slightly lower voltage, or the IR window simply has a thumbprint on it.

Since we are talking about really slow-speed IR protocols, the difference between the moments when each camera finally gets enough good readings to register a reliable command can be really big, certainly big in a photography shutter-speed sense.

Hardwire them; I think you'll find that the slight timing differences due to the cameras' internal complexity are probably measured in microseconds and would not register in the photographs. I think IR is what's holding you back.

Cheers!
Tall-drinks (author) elabz 2 years ago
I'd agree with the IR comment. I began building an IR-blaster-rigged approach, but, like elabz notes, the received signal was too inconsistent.

I was going to trigger the shutters with a mechanical lever or servo depressing each button, but ran into problems there too: these cheap cameras focus themselves when the shutter is depressed halfway, and so they would independently try to acquire the best focus for their own view per shot. This added a large 1+ second lag before the photo was snapped, and also presented the risk that some cameras might focus on a wall instead of the subject.

The solution I landed on was to manually half-depress each shutter button at the beginning of a session (just once per power-on cycle, not bad at all), to focus all of the cameras on a test subject. Then, when the shutter button contacts were closed, via the soldered contact points I chose through trial and error, the cameras all just captured and skipped the focus step. BTW, there is no manual focus mode on cheap cameras. Boo.

Lastly, I neglected to include my camera settings step in this Instructable, partially by accident, but I think they wouldn't matter unless someone sourced the exact same model. My fault though.

I set all of the cameras to the exact same settings before shooting, leaving nothing to default or automatic mode. This included the ISO, the exposure offset, the flash mode (off), turning off face detection (as this adds to the delay per camera), setting the focus type (center/9-point) the same, and any other settings or modes which could potentially affect the ability to fire instantaneously.

I ended up with all 6 firing within a 10th of a second or so.
Admittedly not good enough to capture action shots, or low light shots, but definitely worked for every subject who followed the rules of "sit still, look here, and press button when you're ready"!

Good luck in your quest.
Unfortunately, the best answer means buying good DSLRs with cabled shutter releases and manual focus modes and settings.
That was a great tip. Thanks! Last June, I took two of my cheap cameras as a stereo rig to a 14'er peak here in Colorado. I kept in mind your half-depressed pre-focus tip and got excellent stereo pairs. I even captured animals in motion, in perfectly matching left/right. Thanks again for the tip. I wish I knew it when I first took a 2 camera rig with me to Africa.
If this rig is going into a science museum... 3D souvenir!!! And it's of themselves!!! Doesn't get more personal than that.

Just something to think about...
Tall-drinks (author) dconnolley 2 years ago
My friend...
The words "3d" and "printer" come up in every other sentence I squeeze out of my lungs. I'm on it! :)

Nobody else is ever on board with me though. C'est la vie and stuff!!

Sniff.
100 msec is a long time, but maybe it's good enough. The system I'm *imagining* will "void the warranty" for sure, because I'm not going to leave the cameras in their original plastics, but strip them and embed them in a new housing, so I can drag the array with me on road trips. In light of that, I plan on soldering together both the full and half triggers, and maybe other controls, so I can control all the cameras together in the field. The AA battery inputs will be fed from a common power-tool battery with a DC/DC converter. Again, the original battery compartments are redundant. The next improvement would be to add a controller board with programmable controls and a USB interface to stream the data from the cameras to a central SSD.
It's all good "on paper"; but just a fantasy so far...
I'll start "small" by buying two cameras and making a stereo pair using your method. $100 is a reasonable tuition.
Thanks elabz and Tall-drinks for the useful input.

morval666 2 years ago
Great article. A question, though. You only used 6 cameras and they were front-facing, so you only captured the front of the face? And were 6 cameras enough to get that nice mesh you had above? Doesn't 123D Catch want 30 or more pictures?
I want to do this to take pictures of my dogs mainly and then 3d model them. You can't tell the dog to sit still so you need to take all the pics at once. I want to capture the whole head (body would be even better, but need more cameras).
If you use Canon P&S cameras and hack the firmware with CHDK there are some nice DIY and premade switches you can get.
http://www.gentles.ltd.uk/gentstereo/sdm.htm
onlyjus 2 years ago
Nice project! I am thinking about copying your idea, although I am thinking about using cheap webcams instead. I have experience using OpenCV to read webcams (right now I am playing with two). I am thinking about using 9 webcams and OpenCV to query single frames from all the cameras. I can't stream all the webcam feeds or I will hit the USB bandwidth limit. Let me know what you think.
Tall-drinks (author) onlyjus 2 years ago
I do believe that would work fine, friend! Your results likely won't be too high-end due to the limited clarity and excessive noise (noise kills the results) in most webcams, but if you light it really well (not overblown) and get your settings right, you should get a decent result. I actually got a fair mesh using an iPhone front-facing 640x480 camera. No great detail, but the basic shape was there.

I did experiment with webcams, kids' cheap point-and-shoots, Kinects and such early on. Try to grab as much metadata as OpenCV will allow, too. If you can keep the metadata stored in the jpgs, it'll be a great help to the 123DCatch system in terms of calibrating the lens properties and giving you a more true result.

Check this out too:
http://digiteyezer.com/joomla/index.php?option=com_content&view=article&id=77&Itemid=445

They are using 9 webcams, I believe, in a very pleasant and automated package.

Good luck!
So I started playing with this concept. I bought 9 cheapo Logitech webcams off of eBay. I wrote a Python script, using OpenCV, to initialize and grab pictures from an arbitrary number of webcams.

However, on my laptop, I am exceeding the USB controller's bandwidth and can only access three webcams. booo.

I am trying to find ways around this. Anyone have any ideas?
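One common workaround for the bandwidth cap is to never have more than one stream open: claim a camera, grab a single frame, and release it before touching the next one. A minimal sketch follows; the capture factory is passed in as a parameter, and with real hardware it would be OpenCV's cv2.VideoCapture (an assumption, not something tested on a 9-camera bus here). Note the trade-off: the frames come out sequentially, not simultaneously, so this only suits subjects that hold still.

```python
def grab_one_each(indices, open_cap):
    """Grab a single frame from each camera index in turn.

    open_cap is a capture factory such as OpenCV's cv2.VideoCapture.
    Opening, reading, and releasing one device at a time means only one
    stream is ever active on the USB bus, so the controller's bandwidth
    cap is never hit -- at the cost of the frames not being simultaneous.
    """
    frames = []
    for i in indices:
        cap = open_cap(i)        # claim this webcam only
        ok, frame = cap.read()   # pull exactly one frame
        frames.append(frame if ok else None)
        cap.release()            # free the bus before the next camera
    return frames

# With real hardware this would be roughly:
#   import cv2
#   shots = grab_one_each(range(9), cv2.VideoCapture)
```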
iddit 2 years ago
What a cool project! Have you considered using hacked Canon DCs? The hacked firmware allows manual controls and custom scripts!
spystealth1 2 years ago
Great sense of humor, oh and your instructable was nice too. Just kidding. :) This is awesome! I want to get into 3D designing and modeling. 123D Catch seems a bit finicky though. I guess if I had better lighting and a capturing jig it would be easier. Is there anything else you did with the 3D model besides print it (use it in a game or something digital like that)?
Tall-drinks (author) spystealth1 2 years ago
Yes, I added a webcam and 3d face tracking software and had the digital model stare back at people (with someone else's face), and it turned, moved and rotated exactly like the person looking at it.

The experience was horrifying.
Very cool. What software did you use?
Tall-drinks (author) spystealth1 2 years ago
FaceAPI, Maya and a tiny amount of custom code.
ClCrow 2 years ago
Amazing job! Have you tried setting up an ad-hoc connection on your laptop to connect your SD cards with? Or you could just bring your own router as part of your rig.
Tall-drinks (author) ClCrow 2 years ago
I did try the ad-hoc approach, but there was a limit of 5 connected clients in the configuration I tried, so my 6 Eye-Fi cards could not connect to the ad-hoc network simultaneously. I just carried a small router around with the rig. It worked very well and is dedicated, so it is fast, versus a public wi-fi point, which is typically slow like digital molasses.
Yeah, I should have realized that if you got this far with the technology, you'd have figured all that out. While I'm at it, does anyone know if it's possible to do this or something similar with live streaming video?
Tall-drinks (author) ClCrow 2 years ago
Possible?
Yes.
High quality?
Not yet.

I don't always reconstruct meshes from photographs, but when I do, I use 123DCatch (or Agisoft, decent but not free).

I've run 1080p video stream data through Catch in the form of snapshots from a very slow rotation. The result is passable for a base model shape, but nearly zero detail makes it through. You really do need a good 3 MP image set or higher to get into details (which happens to be where Catch draws the line in free mode).

There are a couple of sites out there, based in Europe, which take in video and output a mesh, but they are simply automating that snapshot process on their servers. The results are not great. Now get some RED video footage in there and maybe we have something!

There are really exciting things going on in structure from motion development, which is how 3d features can be derived from motion video. Check it out on wiki. Pretty cool stuff. Look up SLAM, PTAM, structure from motion, bundle adjustment, Bea Arthur, sandwiches, yada yada.
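The snapshot-from-video approach described above boils down to sampling frames at a fixed angular spacing. A minimal sketch (the 30 fps frame rate and 60-second revolution are illustrative assumptions, not numbers from the original):

```python
def snapshot_indices(total_frames, fps, seconds_per_rev, deg_per_shot=20.0):
    """Pick frame numbers from a slow-turntable video so that consecutive
    snapshots are roughly deg_per_shot apart, matching the 15-30 degree
    overlap guideline. Assumes one full 360-degree rotation takes
    seconds_per_rev seconds at a constant speed."""
    deg_per_frame = 360.0 / (fps * seconds_per_rev)  # subject rotation per frame
    stride = max(1, round(deg_per_shot / deg_per_frame))
    return list(range(0, total_frames, stride))

# e.g. a 60 s revolution filmed at 30 fps turns 0.2 degrees per frame,
# so every 100th frame gives ~20-degree spacing: 18 shots per revolution
print(len(snapshot_indices(1800, 30, 60)))  # 18
```

Each selected frame is then saved as a still (e.g. via OpenCV) and fed to Catch like an ordinary photo set.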
Once again I am out-geeked. By "this" I was just talking about the video from one digital camera through Wi-Fi.
dsergeyev 2 years ago
Hi, everybody. I haven't seen a device like this before. Good luck with your project. Keep it up.
Tall-drinks (author) dsergeyev 2 years ago
Yes.
Ausm 2 years ago
This is actually pretty neat, especially since making side-by-side stereoscopic images is a hobby of mine. And I never even knew about 123D Catch. Pretty neat stuff there.
sitearm 2 years ago
@Tall-drinks; Hi! Love your humor and your obvious love for children. What a great idea to let THEM press the button. I've tweeted this. Cheers! : ) Site
Tall-drinks (author) sitearm 2 years ago
Kids make any project more fun I think. They aren't afraid to show their joy when something entertains them! And thanks!
Foxtrot70 2 years ago
Great project! Once the pics are taken, can you transfer this to a MakerBot and then print a 3D bust?
Tall-drinks (author) Foxtrot70 2 years ago
Thanks.
Yes, I actually sent one to Shapeways and added that to the end of this Instructable. The detail is definitely there for printing, but I chose a very small scale (0.5 inches wide), so I wasn't happy with the quality of the print. I'm going to try another one 2X larger in metal next.
Dude that's so cool. :]