3D Video Rig (Canon Optura 300s with 2" optical separation)
The Short Version
I have a pair of cameras mounted side-by-side and connected so they can be triggered simultaneously. This results in two nearly identical photos or videos, taken from a few inches apart. I merge the two photos or videos using a special color formula, so that when viewed through the glasses, the left eye can only see the photo from the left camera while the right eye can only see the photo from the right camera. This provides your eyes with the illusion of actually "being there". Your brain does the rest!
The Long Version
Basically it works like this. You have two eyes, right? The left eye sees the room from one perspective, and the right eye sees it from a slightly different perspective. If you've ever been lying in bed and you open one eye and then the other, the pillow moves, right? (It's the age-old question... why does the pillow move?)
And here's something else... have you ever tried to learn how to cross your eyes? You can do it by focusing on an object like a finger, and moving it closer and closer to your face until your eyes get so crossed it starts to hurt! I'll explain why the eye crossing thing is important in a minute...
So how do we perceive depth normally in the real world? There are two big things at work:
1st THING: Whenever both of our eyes are open, we are actually seeing two different images. (One from the left eye, and one from the right eye.) Amazingly, our brain combines them into a 3-dimensional scene based on the countless subtle differences between the two images.
2nd THING: Because the two images are slightly different, when you focus on any object, your eyes have to "cross" to line the images up so the object is clear. (Imagine sliding two identical transparencies together until they line up and become a single image. If the transparencies are drawings of a scene from slightly different perspectives, not all the objects will line up at the same time. You have to choose which object to make clear by sliding the transparencies back and forth.)
Focus on something far away like the wall and hold a pen out in front of you. Move the pen upwards until you can see it, but stay focused on the wall. You should see two pens. Now, pay attention to what happens when you shift your focus onto the pen. Try shifting focus a few times between the wall and the pen.
The closer the object, the more your eyes have to cross to keep it lined up and in focus. Or to think of it in reverse: if you are focusing on an object, the more your eyes are crossed, the closer that object must be. That's how your brain knows things are close to you. That's why you can easily capture a lightning bug in your hand.
If you think about it, a firefly is just a bright dot in the air much like a star in the sky, but you know you can reach out and cup the firefly in your hand. It's because your eyes have to tilt inwards (cross) to focus on the firefly. (Otherwise you'd see two fireflies.) Your brain knows exactly how far your eyes are crossed at all times and it uses that information to do some simple geometry and place that firefly precisely in 3d space. Then when you look back up at the sky your eyes have to un-cross to focus on the stars. So your brain knows they are far away.
Surveyors use the same technique to measure distances such as the width of a canyon when they can't get to the other side, but they use two telescopes spread apart and focus them on the same landmark across the canyon. And then they have to measure the angles of the telescopes and do some math to figure it all out. Our brains do it constantly in real time with no math, even on fast moving objects. Try that with two telescopes and a calculator.
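The surveyor's math can be sketched in a few lines. This is a hypothetical Python snippet, not part of the original rig: it assumes the simplest symmetric case, where both telescopes (or eyes) turn inward by the same angle toward the target.

```python
import math

def convergence_distance(baseline, inward_angle_deg):
    """Distance to the point where the two lines of sight converge,
    assuming each eye/telescope turns inward by the same angle.

    baseline:         separation between the two viewpoints
    inward_angle_deg: how far each viewpoint rotates toward the target
    """
    # Half the baseline and the distance to the target form a right
    # triangle whose angle at the viewpoint is the inward ("crossing")
    # angle, so distance = (baseline / 2) / tan(angle).
    return (baseline / 2.0) / math.tan(math.radians(inward_angle_deg))

# Eyes 2.5 inches apart, each turned inward 3 degrees:
# the object they converge on is about two feet away.
print(round(convergence_distance(2.5, 3.0), 1))  # 23.9 (inches)
```

Note how the distance blows up as the angle shrinks: nearly parallel eyes mean a far-away object, which is exactly the cue the brain reads.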
But so far this all requires two eyes, or two telescopes, or two perspectives. What if you only had one? People who are blind in one eye have a lot of trouble with stuff like tennis and baseball, but they can still tell you which tree is close and which is far, and they can still drive a car (although it takes some getting used to, of course). And if you look at a normal 2-dimensional photo you can still tell what is close and what is far even though it is a single perspective from one "eye" (one camera lens). How can this be?
Fortunately, our brain is smart enough to look for other clues. For example, when one object blocks your view of another, the blocked object must be farther away. Or if you are looking across a field, the horizon is in the middle of your field of view. As things get closer and closer they usually get lower and lower in your field of view, until you have to actually look down to see them. So if you are looking at a normal photo, your brain can say... "the rock is lower in the picture than the antelope, therefore the rock must be closer".
Another clue is the level of detail that you can perceive on an object. An orange that is close by has bumps... one that is far away just looks like a smooth orange ball. Artists will purposely use less detail on things that are far away in a painting to give it a realistic sense of depth. A person or a bird in the distance might be composed of a few quick brush strokes.
Another big clue comes from previous knowledge and experience. We know how big a person should be, so when we see a really small one our brain places it far away.
And there are more, but the point I'm making is that all these clues are combined in your brain and depth is automatically assigned to all the objects you can see. It works so well that we can catch a ball or clap a mosquito with deadly accuracy without even thinking about it. So all we have to do to create a 3d illusion on a flat piece of paper is recreate most of these clues. The brain will take care of the rest.
As you know, in normal 2d photography you use one camera. You get most of the depth cues I mentioned, but it doesn't really look 3d because it misses the most important part. You absolutely need to have two different perspectives if you want to see real 3d. So in 3d photography you use two cameras (side by side) and you mount them on a special bar approximately 2.5 inches apart (the same distance as your eyes). You snap the photo with both cameras at the same time. That gives you the two perspectives your brain needs to see true depth.
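To show why those two perspectives carry depth information, here is a small sketch of the standard pinhole-camera stereo relation. The formula is textbook stereo geometry, not something from the rig itself, and the focal length value below is made up for illustration:

```python
def pixel_disparity(focal_length_px, baseline, depth):
    """Horizontal shift of an object between the left and right photos.

    Standard pinhole stereo relation: disparity = f * B / Z, where
    f is the focal length in pixels, B is the camera separation, and
    Z is the object's distance (B and Z in the same units).
    """
    return focal_length_px * baseline / depth

# Cameras 2.5 inches apart, hypothetical 1000-pixel focal length:
print(pixel_disparity(1000, 2.5, 24))   # object 2 feet away shifts ~104 px
print(pixel_disparity(1000, 2.5, 240))  # object 20 feet away shifts ~10 px
```

Closer objects shift more between the two photos. It is the same cue as eye crossing, just recorded on film.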
All that's left is to find a way to present the right image to the right eye, and the left image to the left eye (at the same time, of course). That's what the colored glasses are about. The red filter only lets the color red through. Try looking around the room through the red lens. Everything will be red. Likewise, the blue filter only lets blue light through.
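The lens behavior can be mimicked on a single pixel. A quick sketch, with pixels as hypothetical (R, G, B) tuples in the 0-255 range; real filters leak a little light, but this is the ideal case:

```python
def through_red_lens(pixel):
    """An ideal red filter: only the red component of the light
    passes; green and blue are blocked completely."""
    r, g, b = pixel
    return (r, 0, 0)

def through_blue_lens(pixel):
    """An ideal blue filter: only the blue component passes."""
    r, g, b = pixel
    return (0, 0, b)

# A white pixel looks pure red through the red lens:
print(through_red_lens((255, 255, 255)))  # (255, 0, 0)
```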
So all I do is treat the images in such a way that the left eye's image only contains red pixels, and the right eye's image only contains blue pixels. Then the two images are layered on top of each other in Photoshop and printed out. And when you view the final image through the glasses, each eye gets its own perspective of the scene and your brain thinks it is looking at the real thing.
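The channel merge itself is simple enough to sketch in plain Python. Photoshop does the real work on full images; here the "images" are hypothetical lists of (R, G, B) tuples, assumed to be the same size:

```python
def anaglyph(left_pixels, right_pixels):
    """Merge matching pixels from the left and right photos into a
    red/blue anaglyph: the left photo contributes only its red
    channel (what survives the red lens), and the right photo only
    its blue channel (what survives the blue lens)."""
    return [(left[0], 0, right[2])
            for left, right in zip(left_pixels, right_pixels)]

# One-pixel "images" from the two cameras:
left = [(200, 120, 40)]
right = [(90, 60, 180)]
print(anaglyph(left, right))  # [(200, 0, 180)]
```

A common variant (red/cyan) also passes the right photo's green channel, which preserves more color; the pure red/blue version matches the glasses described here.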