Introduction: DIY Stereoscopic Camera for Oculus Rift VR

About: #Innovation #Technology R&D lab of the world's fastest-growing #media agency. #Creative makers working from a secret location in SG. #leanintochange

By Alexander Jaspers, Strategist at Metalworks by Maxus

The following documentation outlines an attempt at creating a stereoscopic camera for the Oculus Rift. The goal of this project is to create a rudimentary view-through for the Oculus. During the project we encountered some problems, which are mentioned after the instructions.

The first step was research on how to make a stereoscopic camera. Input from two similar cameras has to be captured, each representing the left and right eye respectively. Software then processes the two images and outputs them for use on the Oculus Rift. The software used, called Ocucam, was created by user SyzygyRhythm on the Oculus Developer Forum; more about this later.
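
The core of that pipeline can be sketched in a few lines. This is a minimal stdlib-only illustration, not Ocucam's actual code: it treats each frame as a list of pixel rows and composes the two eyes into one side-by-side frame, which is the general shape a stereoscopic viewer expects.

```python
# Minimal sketch (not the actual Ocucam implementation): compose two
# same-sized frames, given as row-lists of pixel values, into one
# side-by-side stereo frame (left eye on the left, right eye on the right).

def side_by_side(left, right):
    """Concatenate each row of the left frame with the matching right row."""
    if len(left) != len(right):
        raise ValueError("frames must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left, right)]

left = [[0, 1], [2, 3]]    # 2x2 dummy frame for the left eye
right = [[4, 5], [6, 7]]   # 2x2 dummy frame for the right eye
print(side_by_side(left, right))  # [[0, 1, 4, 5], [2, 3, 6, 7]]
```

In a real capture loop the same composition would run per frame on the two webcam streams before handing the result to the Rift.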

Step 1: The Cameras

Ideally, we would have cameras with a very wide field of view (FOV), close to the Oculus's 120-degree FOV. For this experiment we used simple Microsoft webcams (with a FOV of approx. 50 degrees), the model being the Microsoft LifeCam VX-3000. If required, the FOV can be widened with an additional lens, as proposed by user PomeroyB on the Oculus Developer Forum.
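
To get a feel for how much widening such an add-on lens has to do, a simple pinhole-camera model helps: horizontal FOV is 2·atan(sensor width / 2·focal length), so widening the FOV means shortening the effective focal length. The numbers below are illustrative assumptions, not measurements of the LifeCam:

```python
import math

def fov_deg(sensor_width, focal_length):
    """Horizontal field of view for a simple pinhole model, in degrees."""
    return math.degrees(2 * math.atan(sensor_width / (2 * focal_length)))

# Illustrative only: going from a 50-degree camera to a 90-degree view.
# The required ratio of effective focal lengths follows from the model:
ratio = math.tan(math.radians(50 / 2)) / math.tan(math.radians(90 / 2))
print(f"add-on lens must cut effective focal length to roughly {ratio:.0%}")
```

In other words, an add-on lens would need to cut the effective focal length roughly in half to reach a 90-degree view from a 50-degree camera.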

In this example I used both cameras in landscape mode; other sources, however, suggest using portrait mode for better results.

Step 2: Setting Up the Cameras

Just like human eyes, the cameras need to be perfectly aligned so the two images fuse into one picture once the user puts on the Oculus VR headset. Otherwise the visual will be eerie and headache-inducing. The cameras represent the left and right eye respectively.

I removed the factory-attached mounts from the cameras so they could be mounted on a sheet of corrugated plastic. To allow for adjustment and alignment, an L-bracket was used, which let each camera slide left and right. The sheet of corrugated plastic was then attached to a standard tripod by means of a ¼-20 UNC thread.
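
Vertical alignment is the part that hurts most when it is off. One way to sanity-check it without relying purely on eyeballing through the headset is to compare per-row brightness profiles of the two frames at small vertical shifts; the shift with the lowest mismatch tells you how many rows one camera sits above the other. This is a rough stdlib-only sketch under that assumption, not part of the original setup:

```python
def row_profile(frame):
    """Sum each row's pixel values -- a cheap 1-D brightness profile."""
    return [sum(row) for row in frame]

def best_vertical_shift(left, right, max_shift=2):
    """Find the row shift of `right` that best matches `left`.

    Zero suggests the cameras are vertically aligned; anything else is
    roughly how many rows one camera sits above or below the other.
    """
    lp, rp = row_profile(left), row_profile(right)

    def cost(shift):
        pairs = [(lp[i], rp[i + shift]) for i in range(len(lp))
                 if 0 <= i + shift < len(rp)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)

    return min(range(-max_shift, max_shift + 1), key=cost)

# Dummy pair where the right frame is the left one shifted down one row:
left = [[0] * 4, [9] * 4, [9] * 4, [0] * 4]
right = [[0] * 4, [0] * 4, [9] * 4, [9] * 4]
print(best_vertical_shift(left, right))  # 1
```

A real implementation would average this over several frames of a high-contrast scene before trusting the result.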

Step 3: The Software

The software setup was pretty straightforward: after installing the SlimDX framework, I installed Ocucam, the software mentioned in the introduction, which was generously provided on the Oculus Developer Forum.

As the webcams had the same name, it was a bit of guesswork which camera was left and which was right, but as soon as this was set up correctly, I could proceed to adjustment.
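
When two identical devices report the same name, the only stable handle is the enumeration index, so a swap flag turns the trial-and-error assignment into a one-line fix rather than re-plugging USB cables. A hypothetical sketch of that idea (the function and flag names are my own, not Ocucam's):

```python
# Hypothetical helper: assign the first two enumerated camera indices
# to the left and right eye, with a flag to swap them if the guess
# turns out to be wrong.

def assign_eyes(device_indices, swapped=False):
    """Return (left_index, right_index) from the first two devices."""
    left, right = device_indices[:2]
    return (right, left) if swapped else (left, right)

print(assign_eyes([0, 1]))                # (0, 1)
print(assign_eyes([0, 1], swapped=True))  # (1, 0)
```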

Step 4: Adjusting the Cameras

This proved to be the hardest part, as the webcams were not tightly secured on the plastic sheet. I therefore recommend using a sturdier material in future iterations to allow for more precise adjustment. The cameras need to be adjusted whilst wearing the Oculus Rift DK2 headset, which proved to be a rather complicated process, as both the angle and the distance between the cameras (i.e. the distance between the eyes) need to be set correctly.

The software also allows for some adjustment: the Home and End keys change the inter-eye spacing, and the Page Up/Down keys adjust the field of view. The picture shows a set-up that worked for me.
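
Conceptually, those keys just nudge two numbers up and down. The sketch below models that adjustment state; the class name, step sizes, and default values (63 mm is roughly the average human inter-pupillary distance, 50 degrees matches the webcams used) are my assumptions, not Ocucam's actual implementation:

```python
# Sketch of the adjustment state exposed via keys (names and step sizes
# are assumptions): Home/End nudge the inter-eye spacing, Page Up/Down
# nudge the rendered field of view.

class StereoSettings:
    def __init__(self, eye_spacing_mm=63.0, fov_deg=50.0):
        self.eye_spacing_mm = eye_spacing_mm  # ~ average human IPD
        self.fov_deg = fov_deg

    def on_key(self, key):
        step = {"Home":     ("eye_spacing_mm", +1.0),
                "End":      ("eye_spacing_mm", -1.0),
                "PageUp":   ("fov_deg", +1.0),
                "PageDown": ("fov_deg", -1.0)}.get(key)
        if step:
            attr, delta = step
            setattr(self, attr, getattr(self, attr) + delta)

s = StereoSettings()
s.on_key("Home")
s.on_key("PageDown")
print(s.eye_spacing_mm, s.fov_deg)  # 64.0 49.0
```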

Step 5: Learnings

Even though this simple set-up worked and showed the possibilities of using view-through by means of stereoscopic video, there are still things to improve:

  • The field of view of regular cameras is too limited (at least 90 degrees is needed, as opposed to the 50 degrees in this setup)
  • The frame rate might need to be increased to reduce possible headaches
  • Calibration is difficult; a dedicated rig that allows for fine adjustment is needed
  • Two identical cameras should be used, ideally with a higher resolution than the ones used here
  • Autofocus cameras would give better image quality
  • Ideally, we would strip the cameras of their enclosures and 3D-print a dock for them
  • This might be interesting to recreate, especially with regard to the view-through experimentation: http://willsteptoe.com/post/66968953089/ar-rift-part-1

Step 6: Thanks

Big kudos to the Oculus Developer Forum members, especially user SyzygyRhythm for the software to pipe through the images and user Jargon for fixing parts of the code. I would also like to thank Mithru Vigneshwara (our resident Creative Technologist) for helping me with the setup.