Autonomous robot's navigation

Since October 2007 I have been developing a new object recognition algorithm, "Associative Video Memory" (AVM).

The AVM algorithm uses the principle of multilevel decomposition of recognition matrices; it is robust against camera noise, scales well, and is simple and quick to train.

Now I want to introduce my experiments with robot navigation based on visual landmark beacons: "Follow me" and "Walking by gates".

I implemented both algorithms in the Navigator plugin for use within the RoboRealm software.

The Navigator module has two base algorithms:

-= Follow me =-
The navigation algorithm tries to keep the robot's camera tower and body aligned with the center of the first recognized object in the tracking list; if the object is far away the robot moves closer, and if it is too close the robot backs away.
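A minimal sketch of this behavior is shown below; the thresholds and field names are illustrative assumptions, not taken from the AVM Navigator source:

```python
# Hypothetical "Follow me" control step: first center the object, then regulate distance.
def follow_me_step(obj, frame_width, min_size, max_size):
    """obj: dict with 'center_x' (pixels) and 'size' (apparent width in pixels)."""
    error_x = obj["center_x"] - frame_width / 2

    # Turn the tower/body toward the object until it is roughly centered.
    if abs(error_x) > frame_width * 0.05:
        return "turn_right" if error_x > 0 else "turn_left"

    # Regulate distance from the apparent size of the object in the image.
    if obj["size"] < min_size:      # object looks small -> it is far away, move closer
        return "move_forward"
    if obj["size"] > max_size:      # object looks large -> too close, back away
        return "move_backward"
    return "stop"

# Example: follow_me_step({"center_x": 410, "size": 55}, frame_width=640, min_size=80, max_size=200)
```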

-= Walking by gates =-
The gate data contains weights for the seven routes, which indicate how important each gateway is for each route. A "horizon" indicator was added at the bottom of the screen; it shows the direction in which the robot's motion should be corrected to keep following the route. A gate field is painted blue if the gate does not participate in the current route (weight 0), and warmer colors (up to yellow) show the gradation of the gate's "importance" in the current route.
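A rough sketch of how such gate data and the color coding could be organized; the record layout and color scale here are assumptions for illustration, not the plugin's internal format:

```python
# Assumed gate record: an AVM landmark plus one weight per route (7 routes total).
from dataclasses import dataclass, field

NUM_ROUTES = 7

@dataclass
class Gate:
    landmark_id: int
    weights: list = field(default_factory=lambda: [0.0] * NUM_ROUTES)

def gate_color(gate, route):
    """Blue for weight 0 (gate unused on this route), shading toward yellow as weight grows."""
    w = max(0.0, min(1.0, gate.weights[route]))
    if w == 0.0:
        return (0, 0, 255)                    # blue: gate does not participate in the route
    return (int(255 * w), int(255 * w), 0)    # toward yellow with increasing importance
```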

* The procedure of route training
For training a route, you have to indicate the actual route (button "Walking by way") in "Nova gate" mode and then drive the robot manually along the route (the gates will be placed automatically). At the end of the route, click the "Set checkpoint" button; the robot will then turn several times on the spot and mark its current location as a checkpoint.


So, if the robot is walking by gates and suddenly sees an object that it can recognize, it will switch to navigating by the "Follow me" algorithm.

If the robot can't recognize anything (gate or object), it will turn around on the spot to search (and may twitch from time to time in a random way).
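The mode switching described above could be sketched roughly like this; it is a simplified guess at the priorities, not the plugin's actual code:

```python
# Assumed behavior arbitration: recognized objects take priority over gates,
# and the robot searches by rotating when nothing is recognized.
import random

def choose_behavior(recognized_objects, visible_gates):
    if recognized_objects:
        return "follow_me"          # align with and approach the recognized object
    if visible_gates:
        return "walk_by_gates"      # keep moving along the trained route
    # Nothing recognized: rotate on the spot, with an occasional random twitch.
    return random.choice(["search_turn", "search_turn", "search_twitch"])
```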

For more information, see also the thread "Autonomous robot's navigation" at Trossen Robotics.

Now AVM Navigator v0.7 is released and you can download it from the RoboRealm website.

Two new modes have been added in this version: "Marker mode" and "Navigate by map".

 

Marker mode

Marker mode builds a navigation map automatically by marking the space as the robot moves. You just have to manually lead the robot along some path and repeat it several times for good map detail.
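A toy illustration of building a map from repeated passes; the grid size and marking scheme are assumptions, not the plugin's internals:

```python
# Toy space-marking map: every pose visited during a manually driven pass marks a grid cell.
# Repeating the pass several times raises the counts and fills in gaps, giving better detail.
from collections import defaultdict

CELL_SIZE = 0.25  # meters per grid cell (assumed)

visit_map = defaultdict(int)

def mark_pose(x, y):
    cell = (int(x // CELL_SIZE), int(y // CELL_SIZE))
    visit_map[cell] += 1

# e.g. called for every odometry update while the operator drives the robot:
for x, y in [(0.0, 0.0), (0.3, 0.1), (0.6, 0.1), (0.9, 0.2)]:
    mark_pose(x, y)
```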

 

Navigation by map

Navigator_wnd_5.png

In this mode you point to the target position on the navigation map; the robot then plans a path (maze solving) from its current location to the target position (big green circle) and automatically starts walking toward it.
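The "maze solving" step can be pictured as a breadth-first search over the map grid. This is a generic sketch of that idea; the plugin's actual planner is not published, and the `free(cell)` helper is assumed:

```python
# Generic breadth-first search from the current cell to the target cell on a grid map.
# free(cell) is assumed to report whether a cell is passable in the learned map.
from collections import deque

def plan_path(start, target, free):
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == target:
            break
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in came_from and free(nxt):
                came_from[nxt] = cell
                frontier.append(nxt)
    if target not in came_from:
        return None                      # target unreachable from the current location
    path, cell = [], target
    while cell is not None:              # walk the parent links back to the start
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]
```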

 

 

 

For external control of the "Navigate by map" mode, the following new module variables have been added (a usage sketch follows the list):

 

NV_LOCATION_X - current location X coordinate;

NV_LOCATION_Y - current location Y coordinate;

NV_LOCATION_ANGLE - horizontal angle of robot in current location (in radians);

 

 

Target position on the navigation map

NV_IN_TRG_POS_X - target position X coordinate;

NV_IN_TRG_POS_Y - target position Y coordinate;

 

NV_IN_SUBMIT_POS - submits the target position (the value should be toggled 0 -> 1 to trigger the action).
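For example, an external program could drive these variables through RoboRealm's variable API. The sketch below assumes a small wrapper object `rr` with get_variable/set_variable methods; the wrapper itself is hypothetical, only the NV_* variable names come from the plugin:

```python
# Hypothetical wrapper around RoboRealm's variable interface; only the NV_* names are real.
def send_robot_to(rr, target_x, target_y):
    # Read back where the robot currently thinks it is.
    x = float(rr.get_variable("NV_LOCATION_X"))
    y = float(rr.get_variable("NV_LOCATION_Y"))
    angle = float(rr.get_variable("NV_LOCATION_ANGLE"))  # radians
    print(f"current pose: ({x:.1f}, {y:.1f}), heading {angle:.2f} rad")

    # Set the target position on the navigation map...
    rr.set_variable("NV_IN_TRG_POS_X", str(target_x))
    rr.set_variable("NV_IN_TRG_POS_Y", str(target_y))

    # ...and toggle NV_IN_SUBMIT_POS 0 -> 1 to submit it.
    rr.set_variable("NV_IN_SUBMIT_POS", "0")
    rr.set_variable("NV_IN_SUBMIT_POS", "1")
```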

 

Examples

 

AVM_Navigator_movie_8.jpg

Quake 3 Odometry Test 

 

AVM_Navigator_movie_9.jpg

Navigation by map 

 

AVM_Navigator_movie_10.jpg

Visual Landmark Navigation

 

 

 

Quake 3 Mod

 

Quake_Thumb.jpg

 

Don't have a robot just yet? Then click here to view the manual that explains how to set up RoboRealm with the AVM module to control the movement and process images from the Quake first-person video game. This allows you to work with visual odometry techniques without needing a robot!

The additional software needed for this integration can be downloaded here. 

 

Is it possible to play with a virtual robot in "Navigation by map" mode?

Yes!


Just look into the documentation and download the "AVM Quake 3 mod" installation.

 

The next modification, AVM Navigator v0.7.2.1, is released.

Changes:
The visual odometry algorithm was updated:

 

AVM_Navigator_movie_11.jpg

Visual Odometry

 

 

I have made a new plugin for RoboRealm:

 

EDV_DVR_thumb.jpg

 

Digital Video Recording system (DVR)

hqdefault.jpg

 DVR Client-Server presentation

 

You can use the "DVR Client-server" package as a video surveillance system in which parametric data (such as VR_VIDEO_ACTIVITY) from different video cameras helps you search for the video fragment you are looking for.

You can also use the "DVR Client-server" package as a powerful instrument for debugging your video processing and control algorithms, since it provides access to the values of your algorithm variables that were archived during recording.

  

Technical Details

 - ring video/parametric archive with a duration of 1 - 12 months;

 - configurable database record (for parametric data) with a maximal length of 190 bytes;

 - writing of parameters to the database with 250 ms discretization;

 - the DVR Client can work simultaneously with four databases that can be located on remote computers.
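A rough back-of-the-envelope check of what those figures imply for the parametric part of the archive (single camera and maximal record size assumed):

```python
# Rough sizing of the parametric (database) part of the ring archive.
# Assumes the maximal 190-byte record is written every 250 ms for one camera.
RECORD_BYTES = 190
PERIOD_S = 0.250

records_per_day = 86_400 / PERIOD_S             # 345,600 records per camera per day
bytes_per_day = records_per_day * RECORD_BYTES  # ~65.7 MB per camera per day

bytes_per_year = bytes_per_day * 365            # ~24 GB per camera for a 12-month ring
print(f"{bytes_per_day / 1e6:.1f} MB/day, {bytes_per_year / 1e9:.1f} GB/year per camera")
```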

  

 

Scorpio presented his great project, the robot "Vanessa", which also uses AVM Navigator for spatial orientation:

default.jpg

Interactive mobile robot “Vanessa” 

 

Simple AVM Navigator tutorial:

default.jpg

Route training and navigation by map

 

See more details about tuning the "Marker mode" and "Navigation by map" modes.

 

    

 

This is a test of the new algorithm for AVM Navigator v0.7.3.

First, in the video the robot received the command "go to the checkpoint", and when it arrived at the checkpoint I moved it back (several times) to different positions on the learned route. When the robot noticed the changes, it reported that it had been displaced, because no commands had been sent to its motors yet changes were seen in the input image.

Then the robot started looking around and localized its current position. It then simply calculated the path from its current position to the checkpoint and went there (and so on).

default.jpg

Back to checkpoint! 
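The displacement handling described above can be summarized with a small sketch; the threshold and the helpers named in the comments are illustrative assumptions, not the plugin's API:

```python
# Sketch of the displacement check: if the camera view changed although no motor
# commands were issued, the robot concludes it was displaced by an external force.
def check_displacement(motor_commands_issued, view_change_score, threshold=0.3):
    """view_change_score: fraction of the image that no longer matches the expected view."""
    return (not motor_commands_issued) and view_change_score > threshold

# When displacement is detected, the robot looks around to localize itself again,
# then re-plans the route to the checkpoint, conceptually:
#   pose = look_around_and_localize(nav_map)   # rotate and match known landmarks
#   path = plan_path(pose, checkpoint)         # same maze-solving planner as "Navigate by map"
#   follow(path)
```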

 

This is a test of a new robot for the AVM Navigator project:

hqdefault.jpg

Testing of new robot 

 

Playing with the Twinky rover that was controlled by AVM Navigator:

 

 Playing with Twinky rover

 

 

 

hqdefault.jpg

Object tracking (see here for more details)

 

 

Twinky rover presentation:

 

hqdefault.jpg

 

 Twinky rover presentation

https://www.youtube.com/watch?v=xbCpthKrL0o

Great work EDV

Very nice demonstrations.  Would you have a module or implementation which would work with OpenCV?  I’m guessing since you went with RoboRealm this would be closed source?  Regardless, I have to say excellent work and your implementation and execution are extremely fascinating.

Ahh, I peeked @ your SDK, and it appears you have samples which work with OpenCV but release only the closed source binary libs of your SDK.

I should have read all the parts.  Well, very interesting algorithm.  I wish you the best of luck sir.  

And welcome to LMR.

GroG

Fantastic! I love the video.

Fantastic! I love the video. This is really the future. If only I understood half of it :wink: Looks really promising!

AVM SDK
Thanks GroG!



The "Associative Video Memory" algorithm is a commercial project, but you can use the AVM SDK for free in your non-commercial projects.


  • You can use the AVM algorithm in your research to develop an efficient navigation solution for robotics (as a recognizer). You can test your hypotheses about robot navigation based on landmark beacons with AVM. And if a successful navigation solution is achieved, you will have two options: you can develop your own pattern recognition algorithm and then replace the AVM algorithm in your finished (commercial) project, or you can use the commercial version of the AVM algorithm in your finished project.

  • AVM could also be used for testing your own recognition algorithm during development.



    Source code of the "Navigator" (for the English community) can be downloaded here:

    http://edv-detail.narod.ru/Navigator_src_en.zip



    For more information see: http://forums.trossenrobotics.com/showthread.php?t=3510&page=2

Future of robot

It is cool. This module has the potential to change the future of computer vision. 

have you looked into dual

have you looked into dual licensing?  it works for Qt and mySQL.

Dual licensing for AVM
I think it is not expedient at the present stage of AVM development.

**this looks fantastic. **

this looks fantastic.   its the kind of stuff you only see in the military   :slight_smile:

**AVM Navigator v0.7 is released **

Now AVM Navigator v0.7 is released and you can download it from RoboRealm website.

That’s what I was looking

That’s what I was looking for…Great :wink:

C#

Do you have C# source code for recognizing beacons with a camera?

 

P.S. The thread is in English, but the "news" is in Russian ))

You can probably look in

You can probably look in this thread: "Porting the AVM algorithm to C# (pattern recognition)"

English only please :confused:

English only please :confused:

No problem :) But this user

No problem :slight_smile:
But this user asked me in Russian and I decided to answer in Russian too :slight_smile:

2Sergey_M: Just follow this link to the article (in Russian) about the library AVM SDK simple.NET.

Hi guys, I’m still working

Hi guys,

 

I'm still working on AVM technology. I have now founded my own company, named Invarivision.com.

We are a small but passionate team of developers working on a system that would be able to watch TV and recognize video that interests the user.

 

And we need your help! 

 

It seems that the interface of our search system is good enough, because we tried to make it simple and user friendly, but from another point of view it could be a total disaster.

 

Could you please take a look at our system and then tell us about its good and bad sides?

 

Constructive criticism is welcome.

 

With kind regards, EDV.