Those lasers look cool, and usable. How do they work? Where are the laser modules attached? Are they those green packages next to the camera?
I'm drooling over that hardware. I've just bought an embedded system with an ARM9 processor (200 MHz). How much could one squeeze out of that hardware? Have you reached your system's maximum performance yet? I'm also thinking about doing some vision processing, so I would like to know how much processing it requires and what kinds of vision processing would be workable on my system.
If you only use the ARM without a DSP, the processing power may not be enough for complex image-processing algorithms, and it probably can't deliver real-time video to a receiver. But it should be enough for some not-too-complex algorithms, and later you can add a video encoder module to deliver the video/audio. So doing it with the ARM should also be fine; it's just another way that also works.
The Executor does H.264 video encoding and transfers synchronized video/audio to the server over an IP network. The server also streams audio back to the Executor. In fact, I kept the video-stream receiving path in the Executor, but I have not implemented an H.264 decoder on it yet. Since the DM6437 EVM has a VPBE (Video Processing Back End), displaying the video stream on the Executor is possible as future development, but I have not done it so far.
The Executor can do some image processing by itself, but I didn't build it that way. In my design, I want all the processing done by the cells of a distributed computation network. So the Executor only executes actions, and deciding the action is done by a human or by the computation network.
Because of the H.264/AVC license issue, I can only release a version limited to less than 12 minutes of processing. You have to restart it after that time limit expires.
These days I am quite busy with my full-time job, which doesn't leave me much time to write a detailed document.
Once I get time, I will write a document covering:
what components you need to build your own Executor
how to connect the hardware
how to use the software
The current package mainly focuses on the human control interface. For computation-network-cell control, I am still working on the structure design and coding.
As a first step, it will rely more on images, without audio. I am also thinking about fitting it with more sensors.
If anybody has good ideas or suggestions, they are welcome.
You are right, and I am also worried about that. The toes need to be very smooth.
I am thinking about a better mechanical structure, but it's expensive to make the metal accessories. I am trying to find a cheaper way to build a better structure.
Before the hardware prototype comes out, I have plenty of time to redesign the software architecture. I am thinking about making a better interface to the controller's software.