Mobile Robot Control 2023 Group 10
Revision as of 14:29, 23 May 2023
Group members:
Name | Student ID
---|---
Jelle Cruijsen | 1369261
Florian Geister | 1964429
Omar Elba | 1492071
Exercise 1 (Don't crash)
- There is noise in both the real robot's laser data and the simulation environment, which causes a small jitter in the observed laser distances. We expect this noise to be negligible because of its small variance. The laser detects every object at the height of the laser rays up to roughly 25 meters, so essentially the whole room is visible. One limitation is the viewing angle between the minimum and maximum scan angles: the robot cannot see behind itself. Other limitations are that objects occluded by other objects cannot be seen, and that the number of laser detection points limits the resolution. Our own legs appear as lines of dots and are detected in real time; when we move, the lines of dots move as well.
- Did that.
- The robot behaved as expected in simulation.
- Video from Flo: https://drive.google.com/file/d/171WYadXPzqqVjb2qXL5PLcXLRiuZv1NQ/view?usp=share_link
We noticed that the robot would stop driving when it was parallel to a wall. We therefore changed our code to scan only in a cone in front of the robot, so that objects outside this cone are not treated as obstacles.
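The cone-based check can be sketched as follows. This is a minimal illustration, not our actual robot code; the function and parameter names (`ranges`, `angle_min`, `angle_increment`, modelled on a typical laser scan message) are assumptions.

```python
import math

def min_distance_in_cone(ranges, angle_min, angle_increment,
                         cone_half_angle=math.radians(30)):
    """Return the smallest laser range inside a cone in front of the robot.

    ranges: list of measured distances, one per beam.
    angle_min, angle_increment: angle of the first beam and the step
    between beams (illustrative names, modelled on a LaserScan message).
    """
    closest = float("inf")
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        # Only beams inside the front cone count as potential obstacles,
        # so a wall parallel to the robot (beams near +/-90 degrees)
        # no longer makes it stop.
        if abs(angle) <= cone_half_angle and r < closest:
            closest = r
    return closest
```

The robot would then compare the returned distance against a stop threshold instead of looking at the full scan.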
The coding for this exercise was split up into three distinct parts. Brief descriptions of the solutions for each of these three parts are given below.
- In this part, the node which has to be expanded next is found. This is done through a simple for loop, which looks at the open nodes and then chooses the node which has the minimal total cost (we will call this nodeMin).
- In this part, the neighboring nodes of nodeMin have to be explored and updated if necessary. Here, only nodes that are not yet closed are considered. If a neighboring node is not yet opened, it is added to the open nodes list. Node distances from the start are calculated by adding the start distance of nodeMin to the distance between nodeMin and its neighbor. The neighboring node's distance to the start is only updated if this calculated distance is lower than the distance that was saved before. In the case of a lower new start distance, the total cost of the neighboring node is updated and its parent node is set to nodeMin.
- In this part, the optimal path has to be determined. Once the finish node is reached, it is added to the path node list. Then, all of the previous parent nodes are visited in a while loop and added to the path node list. This while loop stops running when the visited node is the starting node. Finally, the path node list is reversed, in order to get the nodes from start to finish.
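The three parts above can be sketched as a single function. This is an illustrative reconstruction, not our actual implementation; `neighbors`, `dist`, and `heuristic` are assumed callables describing the node graph, and a path from start to finish is assumed to exist.

```python
def a_star(neighbors, dist, heuristic, start, finish):
    """Sketch of the three-part A* search described above.

    neighbors(n) -> iterable of nodes adjacent to n
    dist(a, b)   -> edge length between adjacent nodes a and b
    heuristic(n) -> estimated remaining distance from n to finish
    """
    open_nodes = {start}
    closed = set()
    g = {start: 0.0}               # distance from start
    f = {start: heuristic(start)}  # total cost = g + heuristic
    parent = {}

    while open_nodes:
        # Part 1: pick the open node with minimal total cost (nodeMin).
        node_min = min(open_nodes, key=lambda n: f[n])
        if node_min == finish:
            break
        open_nodes.remove(node_min)
        closed.add(node_min)

        # Part 2: explore neighbours of nodeMin that are not closed yet.
        for nb in neighbors(node_min):
            if nb in closed:
                continue
            new_g = g[node_min] + dist(node_min, nb)
            if nb not in open_nodes:
                open_nodes.add(nb)          # not opened yet: open it
            elif new_g >= g.get(nb, float("inf")):
                continue                    # saved start distance is shorter
            g[nb] = new_g
            f[nb] = new_g + heuristic(nb)
            parent[nb] = node_min

    # Part 3: walk back through the parent nodes, then reverse the list
    # to get the path from start to finish.
    path = [finish]
    while path[-1] != start:
        path.append(parent[path[-1]])
    path.reverse()
    return path
```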
Increasing the efficiency of the algorithm:
The A* algorithm can be made more efficient by reducing the number of nodes that are computed. Currently, every node is stored. If the robot drives straight, no new nodes need to be computed until an intersection with multiple options appears. This way, only nodes that lead to a new decision by the robot are computed. In the image below, the start (blue) and end (green) nodes are shown. The blue lines between them are the paths the robot can follow. The orange points are the only nodes that need to be computed and stored; the other nodes can be dismissed.
Image of the maze: https://drive.google.com/file/d/1_e8r0qw3u7ObbJdcfGqWqJiNKzSvloUa/view?usp=sharing
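The pruning idea can be sketched as a filter over the node graph: a node in the middle of a straight corridor has exactly two neighbors and forces no decision, while junctions, dead ends, start, and finish must be kept. This is a hedged illustration of the idea, not our implementation; `neighbors_of` is an assumed adjacency mapping.

```python
def decision_nodes(neighbors_of, all_nodes, start, finish):
    """Keep only the nodes where the robot has to make a decision.

    neighbors_of: mapping from node to its list of adjacent nodes
    (illustrative name). A node with exactly two neighbours lies on a
    straight corridor segment and can be dismissed; junctions, dead ends,
    start and finish are kept.
    """
    kept = set()
    for n in all_nodes:
        if n in (start, finish) or len(neighbors_of[n]) != 2:
            kept.add(n)
    return kept
```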
Exercise 3 (Corridor)
add text and videos of working simulation/robot
https://drive.google.com/file/d/1H8rbXfeSw0uKU0QaPlFkiuvFDaeXu5cL/view?usp=share_link
Exercise 4 (Odometry data)
- Keep track of our location: The code performs as expected; the current odometry data is printed in each iteration of the while loop, and the difference in position with respect to the previous iteration is printed as well.
- Observe the behaviour in simulation:
- In order to assess the accuracy of our method, the simulation was used with the uncertain_odom option set to false. To assess the accuracy of the angle, the robot was rotated in place and the angle was printed. The printed angle was checked against the value expected every 90 degrees, and the angle indeed returned to 0 after a full 360-degree turn. Furthermore, to assess the accuracy of the x and y odometry data, a map with a known height and width was created. The robot was initially positioned in a corner of the map so that it could accurately drive the entire length and width of the map. The distance printed from the odometry data was then compared to the known map dimensions. Overall, the results of the robot odometry data were deemed accurate.
- When the uncertain_odom option is set to true, the coordinate frame of the robot is both rotated and shifted. Therefore, the starting angle (pointing in the direction of the x-axis) is not zero anymore. Additionally, the starting x,y-position is nonzero. Therefore, driving straight ahead when the simulation starts now results in a change in both the x and y-direction, instead of only a change in the x-direction when uncertain_odom was set to false.
- Based on the simulation data collected thus far, this approach would be a viable option for the final challenge: it outputs accurate data and can provide valuable information for the algorithms to be run on the robot. However, this approach still needs to be validated on the real robot, to determine whether the results are as accurate as those found in simulation, before making a final decision.
- Observe the behaviour in reality:
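The per-iteration position difference mentioned above can be sketched as follows. This is an assumed minimal version, not our robot code; the `(x, y, theta)` tuple layout is an assumption. Wrapping the angle difference to [-pi, pi] reflects the observation that a full 360-degree turn brings the printed angle back to 0.

```python
import math

def odom_delta(prev, curr):
    """Difference between two odometry readings given as (x, y, theta).

    Returns (dx, dy, dtheta) with the angle difference wrapped to
    [-pi, pi], so crossing the 0/360-degree boundary gives a small
    delta instead of a jump of almost a full turn.
    """
    dx = curr[0] - prev[0]
    dy = curr[1] - prev[1]
    dtheta = (curr[2] - prev[2] + math.pi) % (2 * math.pi) - math.pi
    return dx, dy, dtheta
```

In the while loop, the previous reading would simply be stored at the end of each iteration and compared against the next one.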