A Preview on the NerdConv1

Today my wife took the kids to the cinema, and I used the resulting moment of quietness at home to assemble two new NerdCam1 modules. I want to have them ready, since the new base PCB for the NerdConv1 - my new video side-by-side converter - is about to arrive.

sP1040052.jpg

The two new cams with customized optics and spacers for firm attachment

The lens sockets are still taken from my old commercial cam modules. The lenses are available here, but can also be bought from eBay or elsewhere. Please note that these rather cheap lenses usually do not have an infrared cutoff filter coating. Hence the filter is mounted on top of the imager chip of the module.

sP1030927.jpg

Packaging of the IR-filters I have bought here

sP1030929.jpg

Close-up of the filter, attached to the imager chip with flexible glue

Here are two photos taken by my PCB manufacturer showing the NerdConv1 base PCB. It is still double-sided, with some tricks applied to get separate digital and analog ground planes. The eight big holes are for accessing the screws of the lens holders when the camera modules are in place. This is helpful for the final adjustment of the optics.

BDE4fff034a73683_05.jpg

Top layer - all SMDs are placed here as well as the XuLA-200 FPGA board

BDE4fff034a73683_05_bottom.jpg

Bottom layer - both camera modules are plugged in from this side

The idea is to have one board which carries everything needed to do the stereo signal conversion. In addition to my initial plans, I included support for stereo audio via two miniaturized electret microphones (POW-1644L-LWC50-B-R). There will be some LEDs for status indication, some push buttons for switching between user-defined modes of operation, and some potentiometers to tune brightness, hue, or saturation of the camera modules. During my trials I noticed that the brightness adjustment between both cameras is especially important: different brightness levels at the two cameras are extremely disturbing when observed with 3D video goggles. Finally, I included some headers for future expansion boards. These headers make a number of signals available, such as the power supply, I2C, the main 27 MHz clock, two GPIO pins, and the 8-bit-wide BT.656 digital output signal from the FPGA. So if anyone wants to build a video compressor for digital audio/video transmission in the future, the board will be prepared for it.
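Since the expansion headers expose the raw 8-bit BT.656 stream, a quick reminder of how that stream is framed may help anyone planning an expansion board. The following is just a Python sketch of the ITU-R BT.656 timing reference codes, not code from the FPGA design:

```python
# Sketch of the ITU-R BT.656 timing reference codes (EAV/SAV).
# Each code is the four-byte sequence FF 00 00 XY, where XY packs
# three flags plus four protection (parity) bits:
#   F = field (0 = field 1, 1 = field 2)
#   V = 1 during vertical blanking
#   H = 1 for EAV (end of active video), 0 for SAV (start)

def timing_code(f, v, h):
    """Return the four-byte BT.656 timing reference code."""
    p3 = v ^ h
    p2 = f ^ h
    p1 = f ^ v
    p0 = f ^ v ^ h
    xy = 0x80 | f << 6 | v << 5 | h << 4 | p3 << 3 | p2 << 2 | p1 << 1 | p0
    return bytes([0xFF, 0x00, 0x00, xy])

# Field 1, active region: the well-known SAV 0x80 and EAV 0x9D values.
print(timing_code(0, 0, 0).hex())  # ff000080  (SAV, field 1 active)
print(timing_code(0, 0, 1).hex())  # ff00009d  (EAV, field 1 active)
```

At the 27 MHz clock on the header, one byte arrives per clock cycle, so in a 625-line system each line carries 1728 bytes, of which 1440 are active 4:2:2 data (Cb Y Cr Y).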

3D-view.png

3D-preview of the board - not all components are visualized with real dimensions

If everything goes well then my next post will be on my first trials with the new NerdCam1/NerdConv1 device combination.

Very kewl!

Thanks for keeping us updated.

hi!

I’ve stumbled upon your project and I am really amazed, since this is very similar to something I’ve been considering.

What I would like is to read two camera modules (like the OV7670, really cheap on eBay), which it seems possible to synchronize, since you provide the input clock and get the pixel clock and such back. I would like to build a lightweight stereo system that can be mounted on a quadcopter, and to send the pair of images over WiFi to do some processing off-board (in the future it could be on-board, but not now - I don’t know much about FPGAs, which would be needed).

I see you created a circuit that reads both images and feeds them to the FPGA. Would you mind providing details (a schematic?) or the concepts behind your circuit? Would your circuit allow me to read images from both cameras and output them to an embedded PC like the Raspberry Pi?

I would really appreciate some insights, you seem to be the only person on the internet doing this =b

Thank you! 

Comments on your post

Thanks. I already noticed that there are not many other people working on this topic. Just a few comments on your post:

  1. The OV7670 might be risky. I am not sure whether the digital output of this chip is 100% BT.656 conformant. If it is not, you will hardly have a chance to create an analogue CVBS signal to hand over to your headset or whatever. For this reason I put some effort into finding a suitable imager chip; the Aptina MT9V135 absolutely fits my needs. 
  2. Please note that there are a number of imager chips out there claiming to be able to output BT.656 or CCIR 656 digital video data. But it often happens that this data is only partially BT.656 conformant, e.g. it comes out in progressive rather than interlaced mode. But interlaced is mandatory for many video encoder ICs like the one I use in my projects.
  3. My work is focused solely on the Zeiss Cinemizer Plus and subsequent models. This means the creation of a so-called side-by-side video image, where the data of both cameras is stored just like a conventional video frame. So the whole thing has to be understood as a continuous video filter: grab digital video data from both cameras, synchronize both video streams, filter each video line to form the side-by-side format, and output the result to a suitable video encoder. That’s it. All of this is a continuous 27 MHz process, which is why (cheap) microcontrollers might be overstrained. Hence the use of an FPGA to cover this kind of work. I don’t know if a Raspberry Pi is capable enough for this …
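To make point 3 a bit more concrete, here is a minimal model of the per-line side-by-side filtering. This is only a sketch in Python, not the FPGA logic: it uses luma-only samples and plain drop decimation, whereas a real implementation works on the interleaved Cb/Y/Cr/Y BT.656 stream at 27 MHz and would low-pass filter before decimating:

```python
# Simplified model of the side-by-side line filter: each camera's
# video line is horizontally decimated to half width, and the two
# halves are concatenated into one standard-width output line.

ACTIVE_PIXELS = 720  # active samples per BT.656 video line

def side_by_side_line(left_line, right_line):
    """Combine one line from each camera into a side-by-side line."""
    assert len(left_line) == len(right_line) == ACTIVE_PIXELS
    half_left = left_line[::2]    # keep every second pixel (drop decimation)
    half_right = right_line[::2]
    return half_left + half_right  # output is still ACTIVE_PIXELS wide

# Example: a bright left-camera line next to a dark right-camera line.
out = side_by_side_line([200] * ACTIVE_PIXELS, [20] * ACTIVE_PIXELS)
print(len(out), out[0], out[-1])  # 720 200 20
```

The essential point is that the filter never buffers a whole frame: one line from each camera is enough to produce one output line, which keeps the FPGA memory requirements small.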

Cheers,

A.

Thanks for your reply.

The thing is, I don’t care about analogue; I’m not using this for the same purpose as you (I don’t have a headset). The idea is to send this to another computer to run computer vision algorithms (like SURF and such). Hence, I only need something that is able to read the data and offload it to an embedded computer with WiFi capabilities (a Raspberry Pi is a possibility). I imagine your circuit interfaces with the FPGA in a way similar to how I would need to interface with an embedded processor. The difficulty in my mind is how to read something at 24 MHz from an embedded processor. I imagine that your board somehow deals with this, right? Or how is that handled?

My actual question is what to put in between the cameras and an embedded processor. 

Thanks!