Mobile Robot Control 2023 Group 10
Group members:
| Name | Student ID |
| --- | --- |
| Jelle Cruijsen | 1369261 |
| Florian Geister | 1964429 |
| Omar Elba | 1492071 |
Exercise 1 (Don't crash)
- There is noise present in the robot's laser data and also in the simulation environment, which causes a small jitter in the observed laser distances. We expect this noise to be negligible due to its small variance. The laser can see every object at the height of the laser rays up to roughly 25 meters, so essentially the whole room is visible. One limitation is the viewing angle between the minimum and maximum angle, since the robot cannot see behind itself. Another limitation is that objects behind other objects cannot be seen, and the number of laser detection points limits the resolution. Our own legs look like lines of dots and are detected in real time; if we move, the lines of dots move as well.
- Did that.
- The robot behaved as expected in simulation.
- Upload video
- We noticed that the robot would stop driving when it was parallel to a wall. We therefore changed our code to only scan within a cone in front of the robot, so that objects outside of this cone are not treated as obstacles (a minimal sketch of this cone check is given below the list).
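The sketch below illustrates the cone check described above. It is not the actual group code: the `LaserScan` struct only mimics the usual laser-message layout (`angle_min`, `angle_increment`, `ranges`), and the cone half-angle and stop distance are made-up placeholder values.

```cpp
#include <cmath>
#include <vector>

// Illustrative stand-in for the laser message used in the course framework:
// ranges[i] is the distance measured at angle_min + i * angle_increment.
struct LaserScan {
    double angle_min;        // angle of the first ray [rad]
    double angle_increment;  // angular spacing between rays [rad]
    std::vector<float> ranges;
};

// Returns true if any valid measurement inside the cone
// [-coneHalfAngle, +coneHalfAngle] in front of the robot is closer than
// stopDistance. Rays outside the cone are ignored, so a wall next to the
// robot no longer makes it stop.
bool obstacleInFrontCone(const LaserScan& scan,
                         double coneHalfAngle = 0.5,   // rad, roughly 30 degrees
                         double stopDistance  = 0.3)   // m
{
    for (std::size_t i = 0; i < scan.ranges.size(); ++i) {
        const double angle = scan.angle_min + i * scan.angle_increment;
        if (std::fabs(angle) > coneHalfAngle)
            continue;                                  // outside the front cone
        const float r = scan.ranges[i];
        if (r > 0.01f && r < stopDistance)             // ignore invalid (zero) readings
            return true;
    }
    return false;
}
```

In the control loop, the robot would then stop (send a zero velocity reference) whenever this function returns true and keep driving forward otherwise.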
Exercise 2 (A* navigation)

The coding for this exercise was split up into three distinct parts. Short descriptions of the solution for each of these three parts are given below.
- In this part, the node which has to be expanded next is found. This is done through a simple for loop, which looks at the open nodes and chooses the node with the minimal total cost (we will call this node nodeMin).
- In this part, the neighboring nodes of nodeMin are explored and updated if necessary. Only the nodes that are not closed yet are considered. If a neighboring node is not yet opened, it is added to the open-node list. Node distances from the start are calculated by adding the start distance of nodeMin to the distance between nodeMin and its neighbor. The neighboring node's distance to the start is only updated if this calculated distance is lower than the distance that was saved before. In the case of a lower new start distance, the total cost of the neighboring node is updated and its parent node is set to nodeMin.
- In this part, the optimal path is determined. Once the finish node is reached, it is added to the path-node list. Then all of the parent nodes are visited in a while loop and added to the path-node list; this loop stops once the visited node is the starting node. Finally, the path-node list is reversed in order to get the nodes from start to finish. A minimal sketch of these three parts is given below.
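The sketch is not the group's actual implementation: the node layout (positions plus neighbor lists) and all names are illustrative assumptions, but the logic follows the three parts described above.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Illustrative node structure: positions and neighbor indices are assumed
// to be given by the maze description used in the exercise.
struct Node {
    double x, y;
    std::vector<int> neighbors;
    double g = std::numeric_limits<double>::infinity(); // distance from start
    double f = std::numeric_limits<double>::infinity(); // total cost: g + heuristic
    int parent = -1;
    bool open = false, closed = false;
};

static double dist(const Node& a, const Node& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Returns the node indices of the optimal path from start to finish
// (empty if no path exists).
std::vector<int> aStar(std::vector<Node>& nodes, int start, int finish) {
    nodes[start].g = 0.0;
    nodes[start].f = dist(nodes[start], nodes[finish]);
    nodes[start].open = true;

    while (true) {
        // Part 1: find the open node with minimal total cost (nodeMin).
        int nodeMin = -1;
        for (int i = 0; i < static_cast<int>(nodes.size()); ++i)
            if (nodes[i].open && (nodeMin == -1 || nodes[i].f < nodes[nodeMin].f))
                nodeMin = i;
        if (nodeMin == -1) return {};          // no open nodes left: no path exists
        if (nodeMin == finish) break;          // finish reached
        nodes[nodeMin].open = false;
        nodes[nodeMin].closed = true;

        // Part 2: explore the neighbors of nodeMin that are not closed yet.
        for (int nb : nodes[nodeMin].neighbors) {
            if (nodes[nb].closed) continue;
            const double gNew = nodes[nodeMin].g + dist(nodes[nodeMin], nodes[nb]);
            if (gNew < nodes[nb].g) {          // found a shorter route to this neighbor
                nodes[nb].g = gNew;
                nodes[nb].f = gNew + dist(nodes[nb], nodes[finish]);
                nodes[nb].parent = nodeMin;
                nodes[nb].open = true;
            }
        }
    }

    // Part 3: walk back through the parent nodes and reverse the result.
    std::vector<int> path;
    for (int n = finish; n != -1; n = nodes[n].parent) path.push_back(n);
    std::reverse(path.begin(), path.end());
    return path;
}
```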
Increasing the efficiency of the algorithm:

The used A* algorithm can be made more efficient by reordering the node IDs. Since the optimal path from start to finish can easily be found through visual inspection, the node IDs can be rearranged in such a way that, whenever several open nodes share the same total cost, the node that lies on the optimal path is encountered (and thus selected) first. In that way nodeMin always lies on the optimal path, the algorithm does not waste expansions on sub-optimal branches, and the amount of redundant computations is reduced to a minimum. An example for the small maze is given below.
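Independent of the maze example mentioned above, the small stand-alone snippet below illustrates why the node-ID order matters in the selection step: with made-up costs, the strict less-than comparison keeps the first (lowest-ID) open node when several nodes tie on total cost.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical open list: node IDs 0 and 1 share the same total cost.
    std::vector<double> totalCost = {5.0, 5.0, 7.0};   // indexed by node ID

    // Same selection loop as in part 1 above: the strict '<' means the
    // lowest node ID wins a tie, which is why renumbering the nodes so the
    // optimal path gets the lowest IDs keeps nodeMin on that path.
    int nodeMin = -1;
    for (int i = 0; i < static_cast<int>(totalCost.size()); ++i)
        if (nodeMin == -1 || totalCost[i] < totalCost[nodeMin])
            nodeMin = i;

    std::printf("nodeMin = %d\n", nodeMin);            // prints 0
    return 0;
}
```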
Video from Flo: https://tuenl-my.sharepoint.com/:v:/r/personal/f_geister_student_tue_nl/Documents/VID_20230509_151733.mp4?csf=1&web=1&e=s5ayEZ
Exercise 3 (Corridor)
Add text and videos of the working simulation/robot.
Exercise 4 (Odometry data)
- Keep track of our location: The code performs as expected; the current odometry data is printed in each iteration of the while loop and, in addition, the difference in position with respect to the previous iteration is also printed (a small sketch of this bookkeeping is given after this list).
- Observe the behaviour in simulation:
- When the uncertain_odom option is set to true, the coordinate frame of the robot is both rotated and shifted. Therefore, the starting angle (pointing in the direction of the x-axis) is no longer zero. Additionally, the starting x,y-position is non-zero. As a result, driving straight ahead when the simulation starts now results in a change in both the x- and y-direction, instead of only a change in the x-direction, as was the case when uncertain_odom was set to false.
- Observe the behaviour in reality:
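The sketch below stands in for the framework's odometry call: the `Pose` struct and the simulated `getOdometry` function are illustrative assumptions, and the made-up frame rotation and offset only serve to show why, with uncertain_odom, driving straight ahead changes both x and y.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative stand-in for the framework's odometry interface: a robot
// driving straight ahead at 0.1 m per step, but reported in a frame that
// is rotated and shifted (as with uncertain_odom set to true).
struct Pose { double x, y, a; };   // position [m] and orientation [rad]

Pose getOdometry(int step)
{
    const double frameAngle = 0.7;                 // rotation of the odometry frame
    const double travelled  = 0.1 * step;          // straight-line distance driven
    return { 2.0 + travelled * std::cos(frameAngle),   // shifted and rotated x
             1.0 + travelled * std::sin(frameAngle),   // shifted and rotated y
             frameAngle };
}

int main()
{
    Pose prev = getOdometry(0);
    for (int step = 1; step <= 5; ++step) {        // one iteration per control cycle
        const Pose cur = getOdometry(step);

        // Difference with respect to the previous iteration, expressed in the
        // odometry frame: although the robot only drives straight ahead, both
        // dx and dy change because the frame itself is rotated.
        const double dx = cur.x - prev.x;
        const double dy = cur.y - prev.y;
        const double da = cur.a - prev.a;

        std::printf("pose: (%.3f, %.3f, %.3f)  delta: (%.3f, %.3f, %.3f)\n",
                    cur.x, cur.y, cur.a, dx, dy, da);
        prev = cur;
    }
    return 0;
}
```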