Mobile Robot Control 2024 Rosey

Group members:

Name                       Student ID
Tessa Janssen              1503782
Jose Puig Talavera         2011174
Ruben Dragt                1511386
Thijs Beurskens            1310909
Pablo Ruiz Beltri          2005611
Riccardo Dalferro Nucci    2030039

Week 1 - Don't crash code:

Exercise 1 - The art of not crashing.

Overall approach

Every member of the group worked on this exercise separately, after which we combined and discussed the different approaches. Though each implementation is slightly different, they all share the same starting point: the robot constantly loops through the laser data, and when the returned range of one of the laser points falls within a predefined distance, the robot is commanded to act on it. Additionally, most approaches only loop through the front/middle part of the laser data, so that only obstacles that are actually in the robot's path are taken into account. The different methods are explained below.
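A minimal sketch of this shared check is given below, written as a standalone helper rather than the group's actual code; the function name, the cone half-width, and the safety distance are illustrative choices. Each of the implementations below then fills in what the robot does once this check fires.

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  // Sketch: look only at the middle part of the scan and report whether any of
  // those points is closer than the safety distance. Cone width and threshold
  // are illustrative parameters, not the group's exact values.
  bool obstacleInFront(const std::vector<float>& ranges,
                       std::size_t coneHalfWidth, float safetyDistance)
  {
      if (ranges.empty())
          return false;

      std::size_t mid = ranges.size() / 2;
      std::size_t lo  = (mid > coneHalfWidth) ? mid - coneHalfWidth : 0;
      std::size_t hi  = std::min(mid + coneHalfWidth, ranges.size() - 1);

      for (std::size_t i = lo; i <= hi; ++i)
      {
          // Very small readings are usually invalid returns, so they are skipped.
          if (ranges[i] > 0.01f && ranges[i] < safetyDistance)
              return true;
      }
      return false;
  }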

Implementation 1: Stopping

Riccardo Dalferro Nucci & Ruben Dragt

In this implementation the instruction to the robot is the simplest. The robot is instructed to keep moving forward using the io.sendBaseReference function, and when the laser returns a distance within 0.3 meters, the robot is commanded to stop by setting the forward velocity in io.sendBaseReference back to 0. With this method the robot will not continue moving as long as the obstacle is within 0.3 meters. If the robot is in front of a wall it will stay there forever, but with a moving obstacle, such as a human walking by, the robot will eventually move forward again until the next obstacle. A video of the simulation can be found here: https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/EbWIje2GvzRInrJRvwxO44cBloUo262_6LwdQOqhViCpLA?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=GVbDab
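A hedged sketch of this stopping behaviour is shown below, assuming the course's emc framework. The io.sendBaseReference call and the scan.ranges field appear elsewhere on this page; the header names, emc::Rate, io.ok() and io.readLaserData() are assumptions about that framework and may differ from the real API, and the speed and threshold values are illustrative.

  #include <cstddef>
  #include <emc/io.h>      // assumed framework headers
  #include <emc/rate.h>

  int main()
  {
      emc::IO io;
      emc::Rate r(20);                               // assumed loop-rate helper

      while (io.ok())                                // assumed "still connected" check
      {
          emc::LaserData scan;
          if (io.readLaserData(scan))                // assumed laser read call
          {
              bool blocked = false;
              for (std::size_t i = 0; i < scan.ranges.size(); ++i)
                  if (scan.ranges[i] > 0.01 && scan.ranges[i] < 0.3)
                      blocked = true;                // something within 0.3 m

              if (blocked)
                  io.sendBaseReference(0.0, 0.0, 0.0);   // stop until the obstacle is gone
              else
                  io.sendBaseReference(0.2, 0.0, 0.0);   // keep driving forward
          }
          r.sleep();
      }
      return 0;
  }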

Implementation 2: Turning away

With this method the robot is not only instructed to stop but also to turn away from the obstacle. The added benefit of this approach is that the robot will be able to start moving forward again, as the laser will eventually stop returning obstacles within the set distance.

Method 1 Tessa Janssen

In this code, the robot loops over part of the data by setting the for loop from scan.ranges.size()/2 - range to scan.ranges.size()/2 + range (setting the variable range to 150 gives a field of view of approximately 1.2 rad). Then, when an obstacle is detected within a set distance, e.g. 0.5 m, the forward velocity is set to 0 and the angular velocity is given a positive value. This results in the robot turning left when it has reached an obstacle. When there are no longer any obstacles, the angular velocity is set back to 0 and the forward velocity is given a positive value. The downside of always turning left is that the robot can easily get stuck in corners and does not take efficiency into account.
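A sketch of this decision logic as a self-contained function; the command struct, the velocity values, and the guard on invalid readings are illustrative additions, not the group's exact code.

  #include <cstddef>
  #include <vector>

  struct BaseCommand { double vx; double vtheta; };    // illustrative command pair

  // Method 1 sketch: check the middle 2*range laser points; if anything is closer
  // than minDist, stop and turn left (positive angular velocity), otherwise drive
  // forward. Assumes range < ranges.size()/2 (e.g. 150 out of ~1000 points).
  BaseCommand method1(const std::vector<float>& ranges, std::size_t range, float minDist)
  {
      std::size_t mid = ranges.size() / 2;
      for (std::size_t i = mid - range; i <= mid + range; ++i)
      {
          if (ranges[i] > 0.01f && ranges[i] < minDist)
              return {0.0, 0.5};                       // obstacle ahead: stop and turn left
      }
      return {0.3, 0.0};                               // path clear: move forward
  }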

Method 2 Pablo Ruiz Beltri

In this approach, a safety distance and a cone size (in number of laser points) are set, with the cone centered in front of the robot. While the connection to the robot is active, the laser readings in this cone are checked against the safety distance. If they are all larger, the robot moves forward; if not, the robot rotates. To keep a forward path, the direction of rotation is determined by checking whether the safety-distance violation was in the left or the right half of the cone. Testing showed that this approach is vulnerable to corners, where violations come equally from the left and the right, leaving the robot in a blocked position.
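A sketch of this decision rule, again as a standalone function. The assumption that lower laser indices correspond to the right-hand side of the robot (so a violation there triggers a left turn) follows the usual scan ordering but is not stated in the original text; velocities are illustrative.

  #include <cstddef>
  #include <vector>

  struct BaseCommand { double vx; double vtheta; };    // illustrative command pair

  // Method 2 sketch: check a cone of coneSize points centred on the front of the
  // robot. If a point violates the safety distance, rotate away from the side of
  // the violation; otherwise drive forward. Assumes coneSize < ranges.size().
  BaseCommand method2(const std::vector<float>& ranges, std::size_t coneSize, float safeDist)
  {
      std::size_t mid   = ranges.size() / 2;
      std::size_t start = mid - coneSize / 2;

      for (std::size_t i = start; i < start + coneSize; ++i)
      {
          if (ranges[i] > 0.01f && ranges[i] < safeDist)
          {
              // Violation in the right half of the cone: turn left, and vice versa.
              return (i < mid) ? BaseCommand{0.0, 0.5} : BaseCommand{0.0, -0.5};
          }
      }
      return {0.3, 0.0};                               // cone clear: keep moving forward
  }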

Method 3 Thijs Beurskens

Here, the cone of vision of the robot is determined by setting a minimum and maximum angle, which are then translated to the number of points the for loop should cover. The turning direction is determined by checking on which side of the robot the obstacles are closer, and turning towards the side with more space. This approach again has the benefit of being able to keep moving around a map or obstacle course, while also being more clever about where the robot can go. It makes the robot less likely to get stuck in certain areas and more robust to changing environments.
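A sketch of one possible reading of this angle-based variant. The angle_min and angle_increment parameters follow the usual laser-scan convention and are an assumption about the scan data used here, as are the velocity values and the index/side convention.

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  struct BaseCommand { double vx; double vtheta; };    // illustrative command pair

  // Method 3 sketch: the cone is given as a minimum/maximum angle and converted to
  // laser indices; if the cone is blocked, the robot turns towards the side with
  // the larger minimum distance. Assumes the cone lies inside the scan's field of view.
  BaseCommand method3(const std::vector<float>& ranges,
                      float angle_min, float angle_increment,
                      float coneMinAngle, float coneMaxAngle, float safeDist)
  {
      std::size_t iLow  = static_cast<std::size_t>((coneMinAngle - angle_min) / angle_increment);
      std::size_t iHigh = static_cast<std::size_t>((coneMaxAngle - angle_min) / angle_increment);
      std::size_t mid   = (iLow + iHigh) / 2;

      float minRight = 1e9f, minLeft = 1e9f;
      bool blocked = false;

      for (std::size_t i = iLow; i <= iHigh && i < ranges.size(); ++i)
      {
          if (ranges[i] < 0.01f) continue;             // skip invalid returns
          if (ranges[i] < safeDist) blocked = true;
          if (i < mid) minRight = std::min(minRight, ranges[i]);
          else         minLeft  = std::min(minLeft,  ranges[i]);
      }

      if (!blocked)
          return {0.3, 0.0};                           // nothing in the cone: drive forward

      // Turn towards the side with more free space.
      return (minLeft > minRight) ? BaseCommand{0.0, 0.5} : BaseCommand{0.0, -0.5};
  }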

Exercise 2 - Testing your don't crash

During the first testing session it became clear that the parameters used in simulation needed to be adjusted to the robot in real life. In particular, the velocity needed to be tuned down when using Coco/Bobo, as they do not have the speed limiter built in. Moving too fast towards an obstacle meant that when the robot saw an obstacle within the safety range, it did not have enough time to stop before hitting it. Tuning the velocities and obstacle distances was a doable fix, however, and the code from method 3 (Thijs) ended up working very nicely in an environment with multiple different obstacles. The video showing this approach can be seen here: https://tuenl-my.sharepoint.com/:v:/g/personal/t_h_janssen_student_tue_nl/EfOj6xPD00tFkXR1Ucylq2QBkDdMrKnozQXRP-P926NqbQ?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0NvcHkifX0&e=fG8zYF

*INSERT SCREENCAP FROM TEST MAPS??*

Week 2 - Local navigation:

Local navigation 1 - Vector field histogram:
Group 1
Thijs Beurskens
Pablo Ruiz Beltri
Riccardo Dalferro Nucci
Local navigation 2 - Attraction/Repulsion vector potential field:
Group 2
Tessa Janssen
Jose Puig Talavera
Ruben Dragt

The main idea behind the artificial potential field is to calculate both an attractive potential toward the target and a repulsive potential away from obstacles, which are summed together. The resulting force vector is calculated by taking the negative of the gradient of the summed potentials. The attractive potential is determined by the formula given in the lecture, which depends on the difference between the current pose of the robot and the target pose. The target pose was provided beforehand as a point 6 meters down the corridor (1.5, 6, 0); the current pose was calculated using odometry data from the robot (with starting coordinate (1.5, 0, 0)). Because the odometry is not perfect, this introduced a small error in the final position of the robot, which was not corrected for in this exercise. The repulsive potential is similarly determined by the formula from the slides, which uses the laser data to determine the distance between the robot and an obstacle. For this specific implementation, each laser point returning a distance within the predefined space buffer was registered as an obstacle, with the corresponding repulsive force added to the total force. The constants k_att and k_rep were used to adjust the relative importance of attraction and repulsion.
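A sketch of the force computation implied by the potential formulas listed further down (U_att and U_rep,j). The 2D vector type, parameter names, and the guard against very small distances are illustrative, and the obstacle points are assumed to already be expressed in the same frame as the robot position.

  #include <cmath>
  #include <vector>

  struct Vec2 { double x; double y; };

  // Attractive force: negative gradient of U_att = 0.5 * k_att * ||q - q_goal||^2.
  Vec2 attractiveForce(const Vec2& q, const Vec2& goal, double k_att)
  {
      return { -k_att * (q.x - goal.x), -k_att * (q.y - goal.y) };
  }

  // Repulsive force of one obstacle point q_j, active only inside the buffer rho0:
  // negative gradient of U_rep = 0.5 * k_rep * (1/d - 1/rho0)^2 with d = ||q - q_j||.
  Vec2 repulsiveForce(const Vec2& q, const Vec2& qj, double k_rep, double rho0)
  {
      double dx = q.x - qj.x, dy = q.y - qj.y;
      double d  = std::sqrt(dx * dx + dy * dy);
      if (d > rho0 || d < 1e-6)
          return {0.0, 0.0};                           // outside the buffer: no contribution
      double scale = k_rep * (1.0 / d - 1.0 / rho0) / (d * d * d);
      return { scale * dx, scale * dy };               // points away from the obstacle
  }

  // Total force: attraction to the goal plus the sum of all repulsive contributions.
  Vec2 totalForce(const Vec2& q, const Vec2& goal, const std::vector<Vec2>& obstacles,
                  double k_att, double k_rep, double rho0)
  {
      Vec2 f = attractiveForce(q, goal, k_att);
      for (const Vec2& qj : obstacles)
      {
          Vec2 fr = repulsiveForce(q, qj, k_rep, rho0);
          f.x += fr.x;
          f.y += fr.y;
      }
      return f;
  }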

The resulting force vector was used as input to the robot by calculating the angle corresponding to the vector. This angle was used as the reference for the angular velocity of the robot, while the forward velocity was kept mostly constant. Mostly constant here means that in the original implementation the velocity was kept low, to give the robot enough time to respond to any obstacles, but at the final session code was added so that the robot picked up speed when there were no obstacles in sight.
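A sketch of this mapping from force vector to base reference. The proportional gain, the two speed levels, and the assumption that the force is expressed in the world frame (with theta the odometry heading) are illustrative; the result would then be sent with the io.sendBaseReference call mentioned earlier.

  #include <cmath>

  struct BaseCommand { double vx; double vtheta; };    // illustrative command pair

  // Turn the total force (fx, fy) into a base reference, given the robot heading
  // theta from odometry. Gains and speed values are illustrative.
  BaseCommand forceToCommand(double fx, double fy, double theta, bool obstacleInSight)
  {
      // Heading error between the force direction and the robot, wrapped to [-pi, pi].
      double desired = std::atan2(fy, fx);
      double error   = std::atan2(std::sin(desired - theta), std::cos(desired - theta));

      double vtheta = 1.0 * error;                     // reference angular velocity
      double vx     = obstacleInSight ? 0.2 : 0.5;     // slow near obstacles, faster when clear
      return {vx, vtheta};
  }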

*Upload screen recording of the simulation*

Comment on observations

*Upload video of implementation on the robot*

Comment on observations

Answer the following questions:

  1. Advantages of the approach
    1. This method makes a lot of intuitive sense and uses relatively easy mathematical concepts to calculate the required forces.
    2. Calculating the repulsion based on real-time laser data makes this method robust against changes in the environment. New obstacles should be incorporated relatively fast into the navigation.
    3. As an extension of point 2: this robustness makes it easier to scale the robot to new and more complex environments.
  2. Disadvantages of the approach
    1. Local minima can occur where the total force is zero, but the robot has not reached its target.
    2. ??
  3. What could result in failure?
    1. A position where the resultant attractive force is equal and opposite to the repulsive force (a local minimum). Here the robot would be stuck forever, as no control input is calculated.
    2. ??

How would you prevent these scenarios from happening?


[math]\displaystyle{ U_{att}(q) = \frac{1}{2} \, k_{att} \, \lVert q - q_{goal} \rVert^2 }[/math]

[math]\displaystyle{ U_{rep,j}(q) = \frac{1}{2} \, k_{rep,j} \left( \frac{1}{\lVert q - q_{j} \rVert} - \frac{1}{\rho_{0}} \right)^2 }[/math] if [math]\displaystyle{ \lVert q - q_{j} \rVert \leq \rho_{0} }[/math]

[math]\displaystyle{ U_{rep,j}(q) = 0 }[/math] if [math]\displaystyle{ \lVert q - q_{j} \rVert \geq \rho_{0} }[/math]

Week 3 - Global navigation:

Global navigation 1 - A* algorithm:
Group 1
Pablo Ruiz Beltri
Riccardo Dalferro Nucci
Ruben Dragt

The A* algorithm is a widely used pathfinding and graph traversal algorithm. It combines the benefits of Dijkstra's algorithm and greedy best-first search to find the shortest path between nodes in a weighted graph. The code follows this structure (a simplified sketch is given after the list):

  1. Initialization:
    • We start with an open set (open_nodes) containing the initial node (_start_nodeID), and another set (closed_nodes) where the visited nodes will be stored, so they are only visited once.
    • Initialize a cost map to keep track of the cost from the start node to each node (g-cost); these values are initialized to infinity and are only updated when a node is reached through an edge, so nodes with no edge connecting them keep an infinite value.
    • Initialize a heuristic map estimating the cost from each node to the goal (h-cost), computed using calculate_distance().
    • The f-cost is the summation of the g-cost and h-cost. The f-cost is calculated for the initial node.
  2. Processing:
    • While the open set is not empty, select the node with the lowest f-cost from the open nodes set.
    • If this node is the goal, the path is found.
    • Otherwise, move the node from the open set to the closed set.
  3. Expansion:
    • For each neighbor of the current node, calculate the tentative g-cost.
    • If this tentative g-cost is lower than the previously recorded g-cost for that neighbor, update it and change the parent node to the current node.
    • Set the f-cost for the neighbor (f = new g + h).
    • If the neighbor is not in the open set, add it.
  4. Repeat:
    • Continue the process until the goal is reached or the open set is empty, indicating no path exists.
  5. Store the optimal path:
    • To store the optimal path from the initial node to the goal (path_node_IDs), we start at the goal node and move to its parent node, continuing to follow parent nodes until we reach the initial node.
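A simplified, self-contained sketch of this structure is given below. The Node struct, the graph container, and the signature of calculate_distance() are assumptions made for the sketch; only the bookkeeping (open and closed sets, g-, h- and f-costs, parent map, and the reconstruction into a list of node IDs) mirrors the steps above.

  #include <algorithm>
  #include <cmath>
  #include <functional>
  #include <limits>
  #include <queue>
  #include <unordered_map>
  #include <unordered_set>
  #include <utility>
  #include <vector>

  // Illustrative graph types: the actual code works with node IDs, _start_nodeID,
  // path_node_IDs and a calculate_distance() helper; the structs and signatures
  // below are assumptions made for this sketch.
  struct Node
  {
      double x, y;                                   // node position, used by the heuristic
      std::vector<std::pair<int, double>> edges;     // (neighbour ID, edge cost)
  };

  double calculate_distance(const Node& a, const Node& b)   // assumed Euclidean heuristic
  {
      return std::hypot(a.x - b.x, a.y - b.y);
  }

  std::vector<int> a_star(const std::unordered_map<int, Node>& graph, int start, int goal)
  {
      const double INF = std::numeric_limits<double>::infinity();
      std::unordered_map<int, double> g_cost;        // cost from start, initialised to infinity
      std::unordered_map<int, int> parent;           // parent node, for path reconstruction
      for (const auto& kv : graph) g_cost[kv.first] = INF;
      g_cost[start] = 0.0;

      using QItem = std::pair<double, int>;          // (f-cost, node ID)
      std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> open_nodes;
      std::unordered_set<int> closed_nodes;          // nodes are expanded only once
      open_nodes.push({calculate_distance(graph.at(start), graph.at(goal)), start});

      while (!open_nodes.empty())
      {
          int current = open_nodes.top().second;     // node with the lowest f-cost
          open_nodes.pop();
          if (closed_nodes.count(current)) continue; // skip stale queue entries
          closed_nodes.insert(current);

          if (current == goal)                       // goal reached: walk back over parents
          {
              std::vector<int> path_node_IDs{goal};
              while (path_node_IDs.back() != start)
                  path_node_IDs.push_back(parent[path_node_IDs.back()]);
              std::reverse(path_node_IDs.begin(), path_node_IDs.end());
              return path_node_IDs;
          }

          for (const auto& edge : graph.at(current).edges)   // expansion step
          {
              int nb = edge.first;
              double tentative_g = g_cost[current] + edge.second;
              if (tentative_g < g_cost[nb])          // cheaper route to this neighbour found
              {
                  g_cost[nb] = tentative_g;
                  parent[nb] = current;
                  double f = tentative_g + calculate_distance(graph.at(nb), graph.at(goal));
                  open_nodes.push({f, nb});
              }
          }
      }
      return {};                                     // open set empty: no path exists
  }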
Global navigation 2 -:
Group 2
Tessa Janssen
Jose Puig Talavera
Thijs Beurskens