Autonomous robot navigation

This presentation was created with the help of 3DS Max 5.1.

Presentation of the Digital Video Recording (DVR) system:

[video=youtube;EtCipl8m_3U]http://www.youtube.com/watch?v=EtCipl8m_3U

This is a test of the enhanced tracking algorithm, which takes variable servo speed into account:

[video=youtube;ueqDhuHiR-E]http://www.youtube.com/watch?v=ueqDhuHiR-E

See the VBScript program and diagram below for more details:

[code]' Get turret control variables
flt_base = 10000

rate_v = GetVariable("RATE_V")/flt_base
turret_v = GetVariable("TURRET_V")/flt_base
turret_Sv = GetVariable("TURRET_V_SPEED")

rate_h = GetVariable("RATE_H")/flt_base
turret_h = GetVariable("TURRET_H")/flt_base
turret_Sh = GetVariable("TURRET_H_SPEED")
turret_f = GetVariable("TURRET_FIRE")
step_counter = GetVariable("STEP_COUNTER")

dX = 0
dY = 0

status = ""
turret_v_initial = -80

nvObjectsTotal = GetVariable("NV_OBJECTS_TOTAL")

if nvObjectsTotal>0 then ' If any object was found

' Get image size
img_w = GetVariable("IMAGE_WIDTH")
img_h = GetVariable("IMAGE_HEIGHT")

' Get array variables of recognized objects
nvArrObjRectX = GetArrayVariable("NV_ARR_OBJ_RECT_X")
nvArrObjRectY = GetArrayVariable("NV_ARR_OBJ_RECT_Y")
nvArrObjRectW = GetArrayVariable("NV_ARR_OBJ_RECT_W")
nvArrObjRectH = GetArrayVariable("NV_ARR_OBJ_RECT_H")

' Get center coordinates of first object from array
obj_x = nvArrObjRectX(0) + nvArrObjRectW(0)/2
obj_y = nvArrObjRectY(0) - nvArrObjRectH(0)/2

' Get difference between object and screen centers
dX = img_w/2 - obj_x
dY = img_h/2 - obj_y

dXr = 1 - abs(dX*4/img_w)
if dXr < 0 then dXr = 0

dYr = 1 - abs(dY*4/img_h)
if dYr < 0 then dYr = 0

turret_min = -100
turret_max = 100
reaction   = 7
speed_min  = 1
speed_max  = 100
filtering  = 0.7
decay      = 0.1
threshold  = round(img_w*0.03)

sRateH = exp(-dXr*reaction)
sRateV = exp(-dYr*reaction)

rate_h = rate_h + (sRateH - rate_h)*filtering
rate_v = rate_v + (sRateV - rate_v)*filtering

turret_Sh = round(speed_min + rate_h*(speed_max - speed_min))
turret_Sv = round(speed_min + rate_v*(speed_max - speed_min))

delta_h = (img_w/8)*rate_h
delta_v = (img_h/8)*rate_v

if step_counter <= 0 then
step_counter = round(exp(-(dXr*dYr)*reaction*0.7)*15)

	if dX > threshold then
		' The object is at left side
		turret_h = turret_h - delta_h
	
		if turret_h < turret_min then turret_h = turret_min
	end if

	if dX < -threshold then
		' The object is at right side
		turret_h = turret_h + delta_h
	
		if turret_h > turret_max then turret_h = turret_max
	end if

	if dY > threshold then
		' The object is at the bottom
		turret_v = turret_v - delta_v
	
		if turret_v < turret_min then turret_v = turret_min
	end if

	if dY < -threshold then
		' The object is at the top
		turret_v = turret_v + delta_v
	
		if turret_v > turret_max then turret_v = turret_max
	end if
else
	step_counter = step_counter - 1
end if
	
' Is the target locked?
if dX < threshold and dX > -threshold and dY < threshold and dY > -threshold then
	status = "Target is locked"
	turret_f = 1
else
	status = "Tracking"
	turret_f = 0
end if

else
' Back to the center if object is lost
if turret_h > 0 then turret_h = turret_h - 1
if turret_h < 0 then turret_h = turret_h + 1
if turret_v > turret_v_initial then turret_v = turret_v - 1
if turret_v < turret_v_initial then turret_v = turret_v + 1

turret_Sh = speed_min
turret_Sv = speed_min

rate_h = rate_h - rate_h*decay
rate_v = rate_v - rate_v*decay

turret_f = 0

end if

' Set turret control variables
SetVariable "RATE_V", rate_v*flt_base
SetVariable "TURRET_V", turret_v*flt_base
SetVariable "TURRET_V_CONTROL", round(turret_v)
SetVariable "TURRET_V_SPEED", turret_Sv
SetVariable "RATE_H", rate_h*flt_base
SetVariable "TURRET_H", turret_h*flt_base
SetVariable "TURRET_H_CONTROL", round(turret_h)
SetVariable "TURRET_H_SPEED", turret_Sh
SetVariable "TURRET_FIRE", turret_f
SetVariable "STEP_COUNTER", step_counter
SetVariable "DELTA_X", dX
SetVariable "DELTA_Y", dY
SetVariable "TURRET_STATUS", status
[/code]
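
The speed-control math in the script above (the normalized centering factor, the exponential reaction curve, and the low-pass filter) can be sketched in Python. This is an illustrative translation, not part of the original script; function and parameter names are my own:

```python
import math

def servo_speed(dx, img_w, rate_prev,
                reaction=7.0, filtering=0.7,
                speed_min=1, speed_max=100):
    """Map horizontal centering error to a smoothed servo speed.

    dx        -- pixel offset between the screen and object centers
    rate_prev -- smoothed rate from the previous frame (0..1)
    """
    # Normalized centering factor: ~1 when the object is centered,
    # 0 when it is a quarter-screen or more off-center
    dxr = max(0.0, 1.0 - abs(dx * 4.0 / img_w))
    # Exponential reaction: centered (dxr -> 1) gives a tiny rate,
    # far off-center (dxr -> 0) gives a rate near 1
    s_rate = math.exp(-dxr * reaction)
    # Low-pass filter to avoid servo jitter
    rate = rate_prev + (s_rate - rate_prev) * filtering
    speed = round(speed_min + rate * (speed_max - speed_min))
    return rate, speed
```

So a centered target yields the minimum servo speed, while a target near the screen edge drives the servo close to its maximum, which is exactly the variable-speed behavior demonstrated in the video.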

forums.trossenrobotics.com/attachment.php?attachmentid=3966&d=1335029045&thumb=1

Once training is done, you can use the variables described below in your VBScript program:

NV_OBJECTS_TOTAL - total number of recognized objects
NV_ARR_OBJ_RECT_X - top-left corner X coordinate of recognized object
NV_ARR_OBJ_RECT_Y - top-left corner Y coordinate of recognized object
NV_ARR_OBJ_RECT_W - width of recognized object
NV_ARR_OBJ_RECT_H - height of recognized object
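
These variables map directly onto the center and lock-threshold math used by the tracking script. Here is a hypothetical Python sketch of that geometry (names are my own; it mirrors the formulas in the VBScript above, including its `y - h/2` vertical-center convention):

```python
def target_status(img_w, img_h, x, y, w, h):
    """Mirror the centering and lock test from the VBScript above.

    x, y, w, h -- object rectangle, as read from NV_ARR_OBJ_RECT_X/Y/W/H.
    Returns the pixel offsets from the screen center and the lock flag.
    """
    # Object center, using the same formulas as the script
    # (note obj_y uses y - h/2, matching the script's servo convention)
    obj_x = x + w / 2.0
    obj_y = y - h / 2.0
    # Offset of the object center from the screen center
    dx = img_w / 2.0 - obj_x
    dy = img_h / 2.0 - obj_y
    # Lock when both offsets fall inside a 3%-of-width dead zone
    threshold = round(img_w * 0.03)
    locked = abs(dx) < threshold and abs(dy) < threshold
    return dx, dy, locked
```

The 3% dead zone keeps the turret from oscillating around a target that is already essentially centered.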

As an example, you can use the VBScript programs published in these topics:
roborealm.com/forum/index.php?thread_id=3881#
forums.trossenrobotics.com/showthread.php?4764-Using-of-AVM-plugin-in-RoboRealm&p=48865#post48865

[size=3]The AVM Navigator help page has been updated![/size] :smiley:

[video=youtube;SWVfVd_UetY]http://www.youtube.com/watch?v=SWVfVd_UetY

You can set the “Learn from motion” option for training on a moving object in “Object recognition” mode.

See here for more detail.

Here is a fairly difficult route that the robot navigated with the help of AVM Navigator (route training and playback):

[video=youtube;1-w3lSLTnjM]http://www.youtube.com/watch?v=1-w3lSLTnjM

Autonomous navigation view from outside:

[video=youtube;GD_g0q_I6NQ]http://www.youtube.com/watch?v=GD_g0q_I6NQ

Twinky rover and fruit (color tracking with RoboRealm)

[video=youtube;YBHYeuT51bA]http://www.youtube.com/watch?v=YBHYeuT51bA

**AVM Navigator v0.7.4.2 update**

Changes:

See here about all other changes.

Fun with AVM Navigator

i1.ytimg.com/vi/4uywp5TNrZk/mqdefault.jpg

It’s a little demo of object recognition and learning from motion with the help of AVM Navigator.

All object rectangle coordinates are available in the RoboRealm pipeline as external variables:
NV_ARR_OBJ_RECT_X - top-left corner X coordinate of recognized object
NV_ARR_OBJ_RECT_Y - top-left corner Y coordinate of recognized object
NV_ARR_OBJ_RECT_W - width of recognized object
NV_ARR_OBJ_RECT_H - height of recognized object

So you can use them in your VBScript program.

See here for more details.

In fact, the AVM algorithm is not rotation-invariant, so during training you should show the object to the AVM search tree from different angles for correct recognition later.
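
Since the algorithm is not rotation-invariant, one simple workaround (my suggestion, not a documented AVM feature) is to present the trainer with several rotated copies of each view. A minimal numpy sketch, restricted to 90-degree steps so no imaging library is needed:

```python
import numpy as np

def rotated_views(img, angles_deg=(0, 90, 180, 270)):
    """Generate rotated copies of a training image.

    Restricted to 90-degree steps so plain numpy suffices; for
    arbitrary angles an imaging library (e.g. Pillow) would be used.
    """
    views = []
    for a in angles_deg:
        # np.rot90 rotates counter-clockwise in 90-degree increments
        views.append(np.rot90(img, k=a // 90))
    return views
```

Each rotated copy is then shown to the recognizer as a separate training sample, so the object can later be matched at roughly those orientations.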

See also an example of using the Canny module as a background for AVM Navigator:

i3.ytimg.com/vi/6oSJbwO-qp0/mqdefault.jpg

Tomy Omnibot & AVM Navigator

[video=youtube;iAE7NXQgwC0]http://www.youtube.com/watch?v=iAE7NXQgwC0

[video=youtube;4s9bQi8Y828]http://www.youtube.com/watch?v=4s9bQi8Y828

Hi guys,

I’m still working on AVM technology. I’ve now founded my own company, named Invarivision.com.
We are a small but passionate team of developers working on a system that can watch TV and recognize video that interests the user.

And we need your help!

We think the interface of our search system is good enough, since we tried to make it simple and user-friendly, but from another point of view it could be a total disaster.

Could you please take a look at our system and then tell us about its good and bad sides?

Constructive criticism is welcome.

With kind regards, EDV.

Hi ExDxV,

Welcome to the RobotShop Forum. The results are quite impressive. It seems a steadier platform, the addition of distance sensors, and a gyro to stabilize the camera would give some very professional results. We gather the “down arrow” is just the start of what you plan, and we can easily see how that can be adapted to autonomous car navigation. We really look forward to seeing more of your creations, and videos are certainly appreciated.

Sincerely,

Hi,

What would be the hardware requirements? If you take a look at the RobotShop product lineup, which products, based on your experience, would be best to copy your setup? Would it require a single board computer or is all your processing being done by an external computer? Perhaps someone from the user community will follow your guidelines and reproduce (and perhaps expand upon) what you have done.

Sincerely,

  1. Can you change the boxes in the lower left to smaller arrows?
  2. I’m still trying to figure out what the upper three boxes are
  3. It does not seem to like that chair leg at all - can you show how it works in an uncluttered environment (empty hallways and doors only)?
  4. If you can make the transition between boxes more gradual (take an average for example) and change the color, it would be far more pleasant

This having been said, we are certainly keeping an eye on your progress. Visual based tracking is the future.

Sincerely,