It's come to my attention that much of a robot's sensor processing would be better off done with an FPGA (field-programmable gate array).
Things like sound localization or an FFT can be done directly in an FPGA without eating up the processing power of a Pi or Arduino. In fact, it would be much faster and could run in real time.
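To give a rough idea of why this maps so well to hardware, here is a minimal, hypothetical Verilog sketch of one piece of a TDOA-style sound localizer: a single cross-correlation lag, computed as a multiply-accumulate against a delayed copy of the second microphone. The module name, parameters, and widths are my own invention, not anyone's actual design; a real localizer would instantiate one of these per candidate lag and pick the lag with the largest accumulator.

```verilog
// Hypothetical sketch: one cross-correlation lag of a TDOA sound localizer.
// Streams one signed sample pair per sample_en pulse; a shift register on
// mic B provides the lag, and a multiply-accumulate integrates the product
// over a frame. Instantiate one per candidate lag; the winning lag gives
// the time difference of arrival between the two microphones.
module xcorr_lag #(
    parameter WIDTH = 16,   // sample bit width
    parameter LAG   = 8,    // delay (in samples) applied to mic B
    parameter ACCW  = 40    // accumulator bit width
)(
    input  wire                    clk,
    input  wire                    rst,        // also clears the accumulator
    input  wire                    sample_en,  // one pulse per sample pair
    input  wire signed [WIDTH-1:0] mic_a,
    input  wire signed [WIDTH-1:0] mic_b,
    output reg  signed [ACCW-1:0]  acc         // running correlation at this lag
);
    // Delay line for mic B
    reg signed [WIDTH-1:0] dline [0:LAG-1];
    integer i;

    always @(posedge clk) begin
        if (rst) begin
            for (i = 0; i < LAG; i = i + 1)
                dline[i] <= {WIDTH{1'b0}};
            acc <= 0;
        end else if (sample_en) begin
            // Shift the delay line
            dline[0] <= mic_b;
            for (i = 1; i < LAG; i = i + 1)
                dline[i] <= dline[i-1];
            // Multiply-accumulate: current mic A against mic B delayed by LAG
            acc <= acc + mic_a * dline[LAG-1];
        end
    end
endmodule
```

The point is that all the lags run in parallel, every sample, which is exactly the kind of workload that chokes a microcontroller.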
I posted a link to the Snickerdoodle earlier, and here is a stand-alone board that would *not* have the same internal bus-speed advantages:
The answer is that it is not so easy to incorporate FPGAs into a robot project. For one, there is not much shared code; then there are no fewer than four distinctly different languages to write in. Not to mention that hardware capabilities differ from part to part.
I did find an article on robot object tracking on an FPGA, and it was right here on LMR, or rather it used to be.
Still, I find it interesting, and when I am caught up with the rest of my robot project I can see a way forward on localization and object tracking.
I'll eventually dive into FPGAs, but at the moment they are still expensive compared to microcontrollers. Video is a good application for FPGAs, but what makes it hard is that you need a way to get the video into the FPGA and some way to get the results back out.
It's a distraction for me at the moment. But what I think we all would like is inexpensive point-cloud depth data from a pair of cameras, and I can foresee a way to go about that.
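For what it's worth, the usual FPGA route to stereo depth is block matching: slide a window from the left image across a disparity range in the right image, keep the disparity with the lowest sum of absolute differences (SAD), and then convert disparity d to depth via Z = f*B/d (focal length times baseline over disparity). Below is a minimal, hypothetical Verilog sketch of just the SAD kernel; the names and widths are assumptions of mine, and a real matcher would wrap many of these in a pipeline that sweeps the disparity range.

```verilog
// Hypothetical sketch: the inner kernel of block-matching stereo.
// Computes the sum of absolute differences (SAD) between a reference
// window from the left image and one candidate window from the right.
// A full matcher sweeps the candidate across the disparity range, keeps
// the disparity with the smallest SAD, and converts it to depth with
// Z = f * B / d.
module sad_window #(
    parameter PIX = 8,   // pixel bit width
    parameter N   = 9    // pixels per window
)(
    input  wire [N*PIX-1:0] win_left,   // packed left-image window
    input  wire [N*PIX-1:0] win_right,  // packed right-image candidate
    output reg  [PIX+3:0]   sad         // sized for the default N = 9
);
    integer i;
    reg [PIX-1:0] a, b;

    // Purely combinational; synthesis unrolls the loop into N
    // absolute-difference units feeding an adder tree.
    always @* begin
        sad = 0;
        for (i = 0; i < N; i = i + 1) begin
            a = win_left [i*PIX +: PIX];
            b = win_right[i*PIX +: PIX];
            sad = sad + ((a > b) ? (a - b) : (b - a));
        end
    end
endmodule
```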
I am an FPGA junkie. I got started with Verilog at work doing ASIC simulations and was addicted on day one. I usually instantiate a softcore CPU (MicroBlaze, ARM, PIC, etc.) to run the algorithms and use the FPGA fabric for dedicated hardware functions. I have bookmarked the Cx page and am very interested in trying it.
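To make the softcore-plus-fabric split concrete, here is a minimal, hypothetical Verilog sketch of the kind of memory-mapped register block I mean: the CPU pokes control and data registers over a simple bus, and the dedicated fabric logic hands its result back through a read-only register. The bus signals here are generic placeholders of my own, not MicroBlaze's actual AXI interface.

```verilog
// Hypothetical sketch of a softcore-facing peripheral: the CPU runs the
// algorithm in software and talks to dedicated fabric logic (e.g. a SAD
// or correlation core) through a few memory-mapped registers.
module mm_peripheral (
    input  wire        clk,
    input  wire        rst,
    input  wire        sel,        // peripheral selected by the bus decoder
    input  wire        wr,         // 1 = write cycle, 0 = read cycle
    input  wire [1:0]  addr,       // word address: 0 = ctrl, 1 = data, 2 = result
    input  wire [31:0] wdata,
    output reg  [31:0] rdata,
    input  wire [31:0] hw_result   // computed by the dedicated fabric logic
);
    reg [31:0] ctrl, data;

    // CPU writes land in the control and data registers
    always @(posedge clk) begin
        if (rst) begin
            ctrl <= 32'd0;
            data <= 32'd0;
        end else if (sel && wr) begin
            case (addr)
                2'd0: ctrl <= wdata;
                2'd1: data <= wdata;
                default: ;  // result register is read-only
            endcase
        end
    end

    // Registered read mux back to the CPU
    always @(posedge clk) begin
        case (addr)
            2'd0: rdata <= ctrl;
            2'd1: rdata <= data;
            2'd2: rdata <= hw_result;
            default: rdata <= 32'd0;
        endcase
    end
endmodule
```

The nice part of this split is that the software stays easy to change while the timing-critical work lives in fabric at full clock rate.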