First trials with new 3D-camera attached to Zeiss Cinemizer Plus

Hi there,

As announced in my last post, this one is about the first trials of my new 3D stereoscopic camera
module. But before we start, let's first take a look at the hardware.

P1040056_small.jpg

This is the final stage of assembly with two NerdCam1 modules and one base board (the NerdConv1). You can see that the boards are plugged into each other with simple 2.54mm headers/receptacles and are held in position with 12mm metallic M2.5 posts. There are 4 mounting holes available for attachment to the platform of choice. The small rubber domes above the camera lenses are the microphones for stereo audio. I should mention here (one of my latest experiences) that this way of microphone attachment is not the best one, because the microphone cables are susceptible to the clock signal emissions from the camera daughter boards. It is better to place them a little bit away from the camera sub-modules.

P1040058_small.jpg

This is the back side of the device with rugged connectors for the power supply and the video/audio output signals. The FPGA board also plugs in from this side. On the left there are 4 trimmers which will later be used to control hue, saturation and brightness equalisation of the two camera sub-modules. One nice feature of this setup is that it can be powered completely from the USB port on the FPGA board, which is very helpful when working on the software/firmware of the FPGA.

 

P1040055_small.jpg

Some technical data:

  • Width: 123mm
  • Height: 40mm
  • Depth: 55mm (including lenses)
  • Mass: 94g
  • Power supply: +5V ... +18V, DC, includes reverse polarity protection via P-channel MOSFET circuit
  • Current draw: 450mA (includes all attached sub-modules)
  • Video output: Composite video (CVBS) in permanent side-by-side 3D format, NTSC or PAL in full SD resolution (a small sketch of splitting the side-by-side frame follows this list)
  • Audio output: stereo audio (a bit weak; could be improved with an additional line amplifier in the future)
  • Number of AV-channels for wireless transmission (FPV): just one channel
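
To illustrate the permanent side-by-side output mentioned in the list above, here is a minimal sketch that splits a digitized frame into its left-eye and right-eye halves. It assumes the composite signal has already been captured to a file (the name "sbs_capture.avi" is just a placeholder) and uses OpenCV on a PC; it is not part of the camera firmware itself.

```python
# Minimal sketch: split captured side-by-side 3D frames into left/right views.
# "sbs_capture.avi" is a placeholder for an already digitized recording of the
# camera's CVBS output; this runs on a PC with OpenCV, not on the camera.
import cv2

cap = cv2.VideoCapture("sbs_capture.avi")  # hypothetical capture file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    left = frame[:, : w // 2]    # left eye occupies the left half of the frame
    right = frame[:, w // 2 :]   # right eye occupies the right half
    cv2.imshow("left eye", left)
    cv2.imshow("right eye", right)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
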

My first trials were, so to say, in wired mode - at this point I did not yet have the complete AV-transmission gear; I am still waiting for the delivery of the AV-receiver box. So I ran some rough cabling to my hard-disk TV recorder, including an NTSC-to-PAL conversion. This may seem strange, but the imager chips I use in my cameras tend to produce a nicer picture in NTSC than in PAL. The latter generates black bars around the image when viewed in the Cinemizer goggles, while the former completely fills the whole image frame. Because the Cinemizer can handle both video norms, I tend to use NTSC for my daily use of the cameras.

The video attached (http://www.youtube.com/watch?v=8voTD9zLC8s) was taken on my terrace and recorded with the said hard-disk recorder. I then had to burn this recording onto a transfer DVD, read and convert it into a DV stream on my computer, and finally make a YouTube-compatible clip out of it. So the final quality of the movie is not the best. The image is much clearer on the TV screen and of course in the Cinemizer display.

If you own a Cinemizer, I suggest viewing the movie on an iPhone/iPod using the YouTube app. If not, there is still the crossed-eye trick at hand. In my next post I hope to show some more footage in "wireless mode".

 

Cheers,

A.

 

 


You are doing some amazing work here.

I am glad you are sharing your progress with us. Have you considered posting a tip to Hackaday about your work here?

Hello Michael (/AnTenNnA),

I am trying to get in touch with you regarding your NerdCam/NerdConv boards (but am not sure how to). I am interested in building an RGB-D camera using optical imaging only (unlike the Kinect), and am willing to put some resources behind it. Such a device could be sold to hobbyists via companies such as 3DRobotics.

To start with, your creations can be connected to an (upcoming) UDOO board (with an IMU sensor attached) running an OpenCV pipeline on Linux. As a next step, the algorithm can be converted to an FPGA-based implementation (such as Dan Strother’s work).

Let's have a chat; when you get a chance, drop me a line ([email protected]). I live and work in Dallas, US, but am travelling in Austria at this time. Let me know how I can connect with you.

Best regards,

Manuj

 

Hi Manuj:

Thanks for your interest. In the meantime I have made much more progress than described here in this blog entry. The NerdCam/NerdConv combination is discontinued and has been replaced by the NerdCam3D:

http://www.rcgroups.com/forums/showthread.php?t=1802511

Unfortunately this model is not ready for market entry due to excessive electromagnetic interference. Currently I’m working on the successor model, which (hopefully) will pass the regulatory tests.

As I no longer maintain this blog, I suggest sticking with my discussion thread on RCGroups if you want to stay informed about future developments.

Michael

Michael, I read through the discussion in the RCGroups forum and can post there next. I think there is a market in the robotics community for a synchronized stereo camera without any radio transmission. I was actually not thinking about the radio component.

The need is for a cheaper USB-based stereo camera that sits on board a robot, for visual odometry and computer vision applications. For example, the input from a stereo camera can be translated into a depth map (typically called a disparity map). From one such video sequence a robot’s change in pose can be calculated. (A lot of this research is being done at ETH, by the way.) With two such cameras, accurate visual odometry can be achieved along with a reconstruction of the surrounding 3D area. I am happy to explain in more detail.
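
As a rough illustration of the disparity-map step described above, here is a minimal OpenCV sketch using the semi-global block matcher. The file names and matcher parameters are placeholders, and the stereo pair is assumed to be already rectified; it only shows the principle, not a tuned pipeline.

```python
# Rough sketch: compute a disparity map from a rectified stereo pair.
# "left.png"/"right.png" are placeholder file names; parameter values are
# illustrative only and would need tuning for a real camera.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # search range; must be a multiple of 16
    blockSize=7,
)
# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0

# With known focal length f and baseline B, depth is roughly f * B / disparity.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```
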

Manuj