Where can I find examples of programming a robot with a camera of this type (chapter Stereo Camera System in Lesson 7) or similar?
Is there anything like that on the site?
Hi there @Aleksandari
To program a robot using information from a camera you will need to do some image processing, and for that I recommend OpenCV, which is a library of programming functions aimed at real-time computer vision. You can use it with many programming languages including Python, Java, and C++ (I recommend C++, but if you are a beginner Python would probably be easier).
The documentation is pretty easy to find and there are lots of tutorials out there, for example these:
https://docs.opencv.org/master/d9/df8/tutorial_root.html
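Just to give a rough idea of what getting started with OpenCV in Python looks like (this is only a minimal sketch, and the file name is a placeholder):

```python
import cv2

# Load an image from disk (the file name is a placeholder).
img = cv2.imread("robot_view.jpg")

# Convert to grayscale and run a simple edge detector.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Save the result next to the original image.
cv2.imwrite("robot_view_edges.jpg", edges)
```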
I hope that information helps
PS: let us know if you would be interested in us creating tutorials on image processing
I need a robot (a robot arm on a moving platform) that moves along the wall of a room. I want to mount a camera on it that records the wall during movement in real time, and to obtain precise parameters of the robot's motion from that footage and a distance sensor: x, y, z coordinates, current position, distance traveled, total distance traveled, … relative to some starting reference point.
In theory, for example, when we sit in a car and look out the window while the car is moving, our eyes see all the exterior objects "moving" in the opposite direction of the car's movement. So if we place the camera on the robot's moving platform facing the wall, the pixels in the camera image will also move in the opposite direction during the robot's motion; if the robot goes uphill, the pixels will move obliquely downhill. I think that, with a good-quality camera and the appropriate program, those pixels could be tracked to estimate the robot's orientation and position in space based on the speed and direction of the pixel movement.
Is this possible in theory, and how does it work in practice? Is it explained somewhere on the internet which camera is best for this and how to program it?
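(For illustration: the idea of pixels moving opposite to the camera is what computer vision calls optical flow. Below is a minimal Python/OpenCV sketch, not from the original posts, that estimates the average pixel motion between consecutive webcam frames; the camera index and the Farneback parameters are assumptions.)

```python
import cv2

# Open the first available camera (index 0 is an assumption).
cap = cv2.VideoCapture(0)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(100):          # process 100 frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: a motion vector for every pixel between the two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Average flow vector; the camera moved roughly in the opposite direction.
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
    print(f"average pixel motion: dx={dx:.2f}, dy={dy:.2f}")

    prev_gray = gray

cap.release()
```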
Hello @Aleksandari, that is an interesting project!
I understand what you are trying to achieve, but I think it would be quite hard to get that much information from images of a wall.
As far as I know, visual SLAM and point-cloud algorithms are based on feature detection and matching, and I don't think you'd be able to detect many features on a wall (or at least not the one I'm imagining). It might be possible if the wall has a pattern or is textured, but you could also get an unreliable orientation measurement if the paint strokes vary across the room. Another option would be facing the camera in a different direction so you can actually get some information about the surroundings.
Either way, I think you could get more precise measurements using the solutions other members of the community provided in this post, or maybe you could create a system that combines some of those with a computer vision solution. In either case, it would be interesting to see your project running, so be sure to keep us informed.
@geraldinebc15 that is what I explained in a different post. I said that he needs some kind of point of reference to track on the wall and suggested something like this:
Yes, but when we edit an image in Paint (or a similar program) and use the zoom option to magnify it enough (6x), we notice that at the "micro level" the image consists of little squares, each with its own color and light intensity, and together they make up the whole picture. When the camera moves, those squares move in the opposite direction in proportion to the camera's movement, regardless of changes in color and light, so the program should be written to track only those squares.
You cannot see this in the whole picture, but if you zoom in enough the squares become visible, and when the camera moves they behave the same as in this example (only the colors are different, not just black and white):
It is as if the wall looked like this:
(Grid lines)
Hi @Aleksandari, I'm familiar with the concept of pixels (the "cubes" you are referring to), and as I stated before, I see what you are trying to achieve, but from my experience I don't think you would be able to get much information from pictures of a wall.
Let me make myself a little clearer with an example. In those pictures you show pixels with an obvious difference between them (the color), and that difference is what allows you (and feature descriptors) to notice when something changes: if a pixel moves to the left, you can see the change. But if all those pixels were the same color, you would not be able to tell anything had changed, and that is the problem with a picture of a wall. Most of it will have the same color; yes, there will be some differences between the pixels, but not enough to detect the change (unless there is a very obvious texture, a pattern, or some kind of reference).
Here’s the same example with some pictures:
Those points are features found in your picture (and one I created for this example), and the lines show the ones that match in both pictures. Using that information you would be able to create a model of the camera movement.
Here's the problem: I did the same thing with some pictures I took of a wall, and here's the result:
In case you are curious, I did that using C++ and OpenCV, with SURF as the detector and FLANN as the matcher.
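For anyone who wants to try the same kind of test, here is a rough, self-contained sketch of the idea in Python. It swaps SURF/FLANN for ORB and a brute-force matcher, since ORB ships with the main OpenCV package, and the file names are placeholders:

```python
import cv2

# Load two views of the same scene (file names are placeholders).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors with ORB.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors with a brute-force Hamming matcher (suitable for ORB).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(kp1)} / {len(kp2)} keypoints, {len(matches)} matches")

# Draw the best 50 matches and save the result.
result = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("matches.jpg", result)
```

On a textured scene this produces many consistent matches; on a plain wall, as described above, most "matches" are spurious.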
As far as I can see, OpenCV only serves to process snapshots and photos (very good software) and is intended for Windows, macOS, Linux, Android, iPhone, … platforms. But then how can I implement all of this on a robot, that is, on a microcontroller?
There is a lot of such software for computers, but I think the robot probably needs special software compatible with Arduino microcontrollers. Does OpenCV have any add-on for that?
Hi again. As you mentioned, OpenCV works on computers or devices with an operating system, and Arduinos don't have one.
If you really want to use OpenCV with an Arduino, you could use the Arduino to capture the images/videos and send the data over Wi-Fi to a server or your main computer, and then process all the images there using OpenCV.
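As a rough illustration of the computer side of that setup (not from the original posts; the port number and the simple length-prefixed JPEG framing are assumptions), a minimal Python sketch could listen for frames sent over the network and decode them with OpenCV:

```python
import socket
import struct
import numpy as np
import cv2

HOST, PORT = "0.0.0.0", 5005   # port number is an arbitrary choice

def recv_exact(conn, n):
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("sender closed the connection")
        buf += chunk
    return buf

# Assumed protocol: each frame is a 4-byte big-endian length followed by JPEG data.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    print("camera board connected from", addr)
    while True:
        (length,) = struct.unpack(">I", recv_exact(conn, 4))
        jpeg = recv_exact(conn, length)
        # Decode the JPEG bytes into an OpenCV image and process it here.
        frame = cv2.imdecode(np.frombuffer(jpeg, np.uint8), cv2.IMREAD_COLOR)
        if frame is not None:
            print("received frame", frame.shape)
```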
Another solution would be to use a Raspberry Pi, because unlike the Arduino it is a microcomputer that can run an operating system. Here's a tutorial for installing OpenCV on a RPi in case you're interested:
But there has to be a way to program an Arduino camera robot; those microcomputers are not very cheap. How are Arduino-based robots with cameras commonly programmed? I have also seen photos of such robots (robots with a camera), so how are they programmed? There must be some alternative to OpenCV for the Arduino.
Of course there are ways to use an Arduino for image processing, but I'm pretty sure you wouldn't be able to do complex processing with it because you need a lot of storage space and memory. Ideally, you would need to hold the entire image you want to process in RAM. A 45x45 pixel image with 1 byte per pixel would take up all the RAM on an Arduino Uno (about 2 KB of SRAM), and you could fit roughly a 90x90 pixel image on a Mega (8 KB). So some limited processing is possible, but the task you want to do requires not just one image but many of them at a very high resolution, so I would consider getting a Raspberry Pi, a BeagleBoard, or the option I suggested previously:
If you want to use an Arduino, you could use it to capture the images/videos and send the data over Wi-Fi to a server, or over serial to your main computer, and then process all the images using OpenCV or another image-processing tool (you can find other options here).
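To make the memory argument concrete, a quick back-of-the-envelope calculation (the SRAM sizes are the published figures for the Uno and Mega 2560):

```python
# Grayscale image size in bytes, at 1 byte per pixel.
def image_bytes(width, height):
    return width * height

uno_sram  = 2 * 1024   # ATmega328P (Arduino Uno):  2 KB of SRAM
mega_sram = 8 * 1024   # ATmega2560 (Arduino Mega): 8 KB of SRAM

print(image_bytes(45, 45), "bytes vs", uno_sram,  "bytes on the Uno")   # 2025 vs 2048
print(image_bytes(90, 90), "bytes vs", mega_sram, "bytes on the Mega")  # 8100 vs 8192
```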
How do they connect: which pins of the Arduino microcontroller go to the Raspberry Pi, and how are all the wires connected?