Hi,
I need your help understanding the artificial potential field approach and its problems. The main idea of the approach is clear to me, but the limitations of this approach and their solutions are not, because I am not very good at math. I have read many papers and sites that talk about this approach, but all of them introduce the same concept, so I am posting here hoping to get some help from members, especially those who are familiar with these types of approaches.
First I will give a brief explanation of the artificial potential field method, based on what I have read in websites and papers, and then I will explain my questions and the points that don't make sense to me.
---------------------------------------------------------------------------------------------------------
ARTIFICIAL POTENTIAL FIELD APPROACH AND ITS PROBLEMS
Traditional artificial potential field: The artificial potential field method assumes the robot moves in an abstract artificial force field. The artificial field consists of a repulsive potential field and an attractive potential field in the workspace. The potential force has two components: an attractive force and a repulsive force. The goal position produces an attractive force which makes the mobile robot move towards it. Obstacles generate a repulsive force, which is inversely proportional to the distance from the robot to the obstacles and points away from them. The robot moves from high to low potential, along the negative gradient of the total potential field. Consequently, the robot moving to the goal position can be seen as moving from a high-value state to a low-value state.
The most commonly used attractive potential function is:

Uatt(q) = (1/2) ε p²(q, qgoal)    (1)

where ε is a positive scaling factor and p(q, qgoal) is the distance between the robot q and the goal qgoal.
The attractive force is given by the negative gradient of the attractive potential:

Fatt(q) = -∇Uatt(q) = -ε p(q, qgoal) ∇p(q, qgoal) = ε (qgoal - q)    (2)

The attractive force tends toward zero as the robot approaches the goal.
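To see what "negative gradient" means in practice, here is a small numerical check (my own example, not from the papers, with made-up values for ε, q and qgoal) that the attractive force ε(qgoal - q) is exactly minus the gradient of the attractive potential:

```python
import numpy as np

eps = 2.0
q_goal = np.array([3.0, 4.0])

def U_att(q):
    # attractive potential: 0.5 * eps * distance(q, q_goal)^2
    return 0.5 * eps * np.linalg.norm(q - q_goal)**2

q = np.array([1.0, -2.0])
h = 1e-6
# finite-difference estimate of the gradient of U_att at q
grad = np.array([(U_att(q + h*e) - U_att(q - h*e)) / (2*h)
                 for e in np.eye(2)])
F_att = eps * (q_goal - q)   # closed-form attractive force
print(np.allclose(-grad, F_att, atol=1e-4))  # True: the force is -grad(U_att)
```

The force always points "downhill" on the potential surface, which is why following it is the same idea as gradient descent.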
The following repulsive potential function is used:

Urep(q) = (1/2) η (1/p(q, qobs) - 1/p0)²   if p(q, qobs) ≤ p0;   0 otherwise    (3)

where η is a positive scaling factor, p(q, qobs) denotes the shortest distance from the robot q to the obstacle, and p0 is the largest influence distance of the obstacle. The negative gradient of the repulsive potential function gives the repulsive force:

Frep(q) = η (1/p(q, qobs) - 1/p0) (1/p²(q, qobs)) ∇p(q, qobs)   if p(q, qobs) ≤ p0;   0 otherwise    (4)
The total force applied to the robot is Ftotal = Fatt(q) + Frep(q), which determines the motion of the robot. The robot moves in this field of forces.
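To make the basic method concrete, here is a minimal Python sketch (my own illustration; the gains ε, η, p0 and the goal/obstacle positions are made-up values) that moves a point robot by repeatedly stepping along the total force:

```python
import numpy as np

def attractive_force(q, q_goal, epsilon=1.0):
    # F_att = -grad(0.5 * eps * p^2) = eps * (q_goal - q)
    return epsilon * (q_goal - q)

def repulsive_force(q, q_obs, eta=1.0, p0=2.0):
    # repulsive force from formula 4: nonzero only inside the
    # influence distance p0 of the obstacle
    diff = q - q_obs
    p = np.linalg.norm(diff)
    if p > p0:
        return np.zeros_like(q)
    grad_p = diff / p  # unit vector pointing away from the obstacle
    return eta * (1.0/p - 1.0/p0) * (1.0/p**2) * grad_p

q = np.array([0.0, 0.0])        # start
q_goal = np.array([10.0, 0.0])  # goal
q_obs = np.array([5.0, 0.5])    # obstacle slightly off the direct path

for _ in range(500):
    F = attractive_force(q, q_goal) + repulsive_force(q, q_obs)
    q = q + 0.01 * F  # small step along the negative gradient of U_total
print(q)  # final position: close to the goal, having skirted the obstacle
```

The loop is literally gradient descent on the total potential: at each step the robot moves a little in the direction of the total force, i.e. downhill on the potential surface.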
Disadvantages of the artificial potential field method:
(1) When the robot is far away from the target, the attractive force becomes very large, which easily leads the robot to move too close to obstacles.
(2) Goals non-reachable with obstacles nearby (GNRON).
(3) When the attractive force and repulsive force are equal (or almost equal) but opposite in direction, the total force on the robot is zero, which causes the robot to be trapped in local minima or to oscillate.
IMPROVED POTENTIAL FIELD METHOD
In order to solve the problems above:
Risk of collision with obstacles: The value of the attractive potential field is determined by the distance p(q, qgoal), as function 1 shows. When the robot is far from the target, the attractive force becomes very large; that is, when p(q, qgoal) is very large, it easily leads the robot to move too close to obstacles (Shi and Cui, 2010). Therefore, the robot risks colliding with obstacles in a real environment. Thus, the attractive potential field is modified as function 5:

Uatt(q) = (1/2) ε p²(q, qgoal)                          if p(q, qgoal) ≤ d*goal
Uatt(q) = d*goal ε p(q, qgoal) - (1/2) ε (d*goal)²      if p(q, qgoal) > d*goal    (5)

where ε is the attraction gain, p(q, qgoal) is the distance between the robot and the target, and d*goal is the threshold distance between the robot and the goal that decides the choice between the conic and the quadratic potential.
The gradient of function 5 gives the attractive force:

Fatt(q) = -ε (q - qgoal)                                if p(q, qgoal) ≤ d*goal
Fatt(q) = -d*goal ε (q - qgoal) / p(q, qgoal)           if p(q, qgoal) > d*goal    (6)
Goals non-reachable with obstacles nearby: It is usually assumed that the goal position is far from obstacles, so that when the robot is near the goal the repulsive force is negligible and the motion toward the goal is determined only by the attractive force. However, in some cases the goal position is close to an obstacle, so when the robot is near its goal it is also close to that obstacle. According to the attractive and repulsive forces in formulas 2 and 4, the repulsive force is then much larger than the attractive force, and the robot cannot reach its goal position (Shi et al., 2010; Chuang and Ahuja, 1998). In order to overcome this problem, the repulsive potential function is modified by taking into account the relative distance from the robot to the goal (Ge and Cui, 2000). The new repulsive potential function is:

Urep(q) = (1/2) η (1/p(q, qobs) - 1/p0)² pⁿ(q, qgoal)   if p(q, qobs) ≤ p0;   0 otherwise    (7)

where p(q, qobs) is the minimal distance between the robot q and the obstacles, p(q, qgoal) is the distance between the robot and the goal qgoal, p0 is the influence distance of the obstacle, and n is a positive constant.
When the robot is not at the goal position, the negative gradient of the new repulsive potential function is:

Frep(q) = Frep1 nOR + Frep2 nRG   if p(q, qobs) ≤ p0;   0 otherwise    (8)

with

Frep1 = η (1/p(q, qobs) - 1/p0) (1/p²(q, qobs)) pⁿ(q, qgoal)
Frep2 = (n/2) η (1/p(q, qobs) - 1/p0)² pⁿ⁻¹(q, qgoal)

where nOR = ∇p(q, qobs) and nRG = -∇p(q, qgoal) are two unit vectors, pointing from the obstacle to the robot and from the robot to the goal, respectively.
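A sketch (my own, with illustrative gains η, p0 and n) of the gradient of the new repulsive potential in equation 7, built from the two components Frep1 and Frep2; it shows that the extra pⁿ(q, qgoal) factor makes the repulsion vanish as the robot reaches the goal, even with an obstacle right next to it, which is exactly what fixes GNRON:

```python
import numpy as np

def repulsive_force_new(q, q_obs, q_goal, eta=1.0, p0=2.0, n=2):
    d_obs = q - q_obs
    p = np.linalg.norm(d_obs)           # distance robot -> obstacle
    if p > p0:
        return np.zeros_like(q)         # outside the influence region
    pg = np.linalg.norm(q - q_goal)     # distance robot -> goal
    n_OR = d_obs / p                                          # obstacle -> robot
    n_RG = (q_goal - q) / pg if pg > 0 else np.zeros_like(q)  # robot -> goal
    F1 = eta * (1/p - 1/p0) * (1/p**2) * pg**n
    F2 = (n/2) * eta * (1/p - 1/p0)**2 * pg**(n - 1)
    return F1 * n_OR + F2 * n_RG

q_goal = np.array([0.0, 0.0])
q_obs = np.array([0.5, 0.0])   # obstacle right next to the goal
at_goal = repulsive_force_new(q_goal, q_obs, q_goal)
print(at_goal)  # zero vector: repulsion vanishes at the goal
```

Both Frep1 and Frep2 carry a power of p(q, qgoal), so both go to zero as the robot arrives, while the original formula 4 would still push the robot away from the nearby obstacle.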
---------------------------------------------------------------------------------------------------------
My questions are:
1- What is the negative gradient?
When I googled this, I found that gradient descent is a first-order optimization algorithm used to find a local minimum of a function. What does that mean, and how is it related to the artificial potential field? Can anyone give a clearer answer?
2- How does equation 5 solve the problem of "risk of collision with obstacles"? Why these two equations? I need to understand the two cases in equation 5.
3- How does equation 7 solve the problem of goals non-reachable with obstacles nearby? Why is the distance between the robot q and the goal added to the repulsive potential function? What is the effect of that distance on the robot?
4- How is the gradient of the new repulsive potential function in equation 7 computed? Where do Frep1 and Frep2 come from?
Sorry if I am being foolish asking these questions, or if this is not the right place for them, and sorry for the long post.