Mobile Robot Control 2024 R2-D2
Introduction
This is the wiki page of the R2-D2 team for the course Mobile Robot Control (Q4, 2023-2024). The team consists of the following members.
Group members:
Name | Student ID
--- | ---
Yuri Copal | 1022432
Yuhui Li | 1985337
Wenyu Song | 1834665
Aditya Ade | 1945580
Isabelle Cecilia | 2011484
Pavlos Theodosiadis | 2023857
Week 1 - The art of not crashing
Simulation
Pavlos
My idea was to use the LiDAR sensor to detect any object directly in front of the robot. While moving forward, the robot measures the distance to the object straight ahead and stops before reaching a predefined threshold; in the video the threshold was 0.5 meters. To get the distance straight ahead I used the measurement in the middle of the ranges list of a laser scan message. I also wrote a function that takes a laser scan message and an angle in degrees and returns the distance measurement of the ray at that angle.
Video displaying the run on the simulation environment:
https://www.youtube.com/watch?v=MXB-z1hzYxE
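The sketch below illustrates the ray lookup described above. It assumes a simplified stand-in for the laser scan message: the struct fields and function names are illustrative, not the actual course framework API.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <vector>

// Simplified stand-in for a laser scan message; the field names are
// assumptions for illustration, not the real message definition.
struct LaserScan {
    double angle_min;                // angle of the first ray [rad]
    double angle_increment;          // angular step between rays [rad]
    std::vector<double> ranges;      // measured distances [m]
};

// Returns the range of the ray closest to the requested angle
// (in degrees, 0 = straight ahead), or -1 if the angle is outside the scan.
double distanceAtAngle(const LaserScan& scan, double angle_deg)
{
    const double pi = std::acos(-1.0);
    const double angle_rad = angle_deg * pi / 180.0;
    const int index = static_cast<int>(
        std::round((angle_rad - scan.angle_min) / scan.angle_increment));
    if (index < 0 || index >= static_cast<int>(scan.ranges.size()))
        return -1.0;
    return scan.ranges[index];
}

// Don't-crash check: stop when the middle ray (straight ahead) reports
// an obstacle closer than the threshold (0.5 m in the video).
bool shouldStop(const LaserScan& scan, double threshold = 0.5)
{
    const double front = scan.ranges[scan.ranges.size() / 2];
    return front > 0.0 && front < threshold;
}
</syntaxhighlight>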
Isabelle
I take the laser reading at the middle angle (the middle value of the ranges list) together with the two readings before and after it. The robot moves forward by default and stops as soon as any of these values drops below 0.3 m.
Wenyu
Video recording of the dont_crash implementation and measurements: https://youtu.be/brgnXSbE_CE
Practical Session
Pavlos
Running the code on the real robot made me realize that using a single ray doesn't make much sense with real-life obstacles. It would be better to use a range of rays, chosen according to the detection angle we want (e.g. from -5 to +5 degrees around the robot's heading).
Video from the practical session:
https://youtube.com/shorts/uKRVOUrx3sM?feature=share
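A possible sketch of that multi-ray variant, reusing the hypothetical LaserScan struct and distanceAtAngle function from the earlier sketch: the check now takes the minimum range over a small cone in front of the robot instead of a single ray.

<syntaxhighlight lang="cpp">
#include <limits>

// Minimum range over the cone [angle_from_deg, angle_to_deg], sampled in
// 1-degree steps (reuses LaserScan and distanceAtAngle from the sketch above).
double minDistanceInCone(const LaserScan& scan,
                         double angle_from_deg, double angle_to_deg)
{
    double min_range = std::numeric_limits<double>::infinity();
    for (double a = angle_from_deg; a <= angle_to_deg; a += 1.0) {
        const double r = distanceAtAngle(scan, a);
        if (r > 0.0 && r < min_range)
            min_range = r;
    }
    return min_range;
}

// Stop when anything inside the -5..+5 degree cone is closer than 0.5 m.
bool shouldStopCone(const LaserScan& scan, double threshold = 0.5)
{
    return minDistanceInCone(scan, -5.0, 5.0) < threshold;
}
</syntaxhighlight>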
Simulation
Dynamic Window Approach
We implemented the dynamic window approach based on the paper "The Dynamic Window Approach to Collision Avoidance". Tuning the scoring function turned out to be non-trivial, since different weights result in different behaviours. For example, a lower weight on the heading error combined with a higher weight on the clearance score leads to more exploration: the robot prefers to move towards empty space instead of moving to the goal.
Video displaying the run on the simulation environment:
https://www.youtube.com/watch?v=v6rQc6_jtUE
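For reference, the paper scores every sampled velocity pair with a weighted sum of a heading term, a clearance (distance) term and a velocity term. The sketch below shows the trade-off described above; the weight values and the normalisation are illustrative, not the exact ones we used.

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cmath>

// Illustrative scoring of one sampled (v, omega) candidate; the weights are
// placeholders, not the values used on the robot.
struct DwaWeights {
    double alpha = 0.8;   // heading term: progress towards the goal
    double beta  = 0.2;   // clearance term: distance to the nearest obstacle
    double gamma = 0.1;   // velocity term: prefer faster forward motion
};

double scoreCandidate(double heading_error_rad,   // |goal heading - predicted heading|
                      double clearance_m,         // closest obstacle along the predicted arc
                      double velocity_ms,         // forward velocity of the candidate
                      double max_clearance_m, double max_velocity_ms,
                      const DwaWeights& w)
{
    // Normalise every term to [0, 1] so the weights are directly comparable.
    const double pi = std::acos(-1.0);
    const double heading_score   = 1.0 - heading_error_rad / pi;
    const double clearance_score = std::min(clearance_m, max_clearance_m) / max_clearance_m;
    const double velocity_score  = velocity_ms / max_velocity_ms;

    // A low alpha relative to beta makes the robot favour open space over
    // progress towards the goal, which matches the behaviour described above.
    return w.alpha * heading_score + w.beta * clearance_score + w.gamma * velocity_score;
}
</syntaxhighlight>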
Question for TAs: Can we somehow plot the trajectories in the RViz environment?
Answer: We can send a single path, as demonstrated in the global navigation assignment, but it is not clear whether we can send multiple paths.
Practical Session
Dynamic Window Approach
When we ran our code on the actual robot we noticed behaviour similar to the simulation. The robot would get stuck in local minima for high values of the heading coefficient, and for smaller values it would prefer avoiding obstacles over heading towards the goal. We again saw the oscillating behaviour when no obstacles were near the robot and the robot was heading towards the goal. We believe this is caused by the discretization of the possible heading directions. More specifically, when the scoring function is dominated by the heading term (e.g. no obstacles near the robot), the robot wants a perfect heading (zero heading error) towards the goal. This isn't possible, because none of the discrete headings is ever exactly right. As a result, the robot first picks a slightly imperfect heading, which causes a small heading error, and then picks another heading to correct it, which produces the same error on the other side. This is what we see as an oscillating path towards the goal.
What we could do to fix this:
- Decrease the maximum rotational speed. That makes the sampling finer but reduces the exploration within a single cycle of the algorithm. In theory we could also increase the number of sampling points, but when we tried that the robot couldn't keep up with the 4 Hz control frequency due to the computational load. Even with more samples we would still end up with oscillating behaviour (a perfect heading would still not be possible), though perhaps with a smaller magnitude.
- Allow a small heading error. One way to do this is to set the heading error used in the scoring function to zero whenever the real heading error is smaller than some deadband (e.g. 5 degrees); a minimal sketch of this is given below.
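A minimal sketch of the second fix; the 5 degree deadband is just an example value, not a tuned parameter.

<syntaxhighlight lang="cpp">
#include <cmath>

// Treat heading errors below a small deadband as zero before they enter the
// scoring function, so the robot stops chasing a heading it can never hit
// exactly with a discrete set of samples. The 5 degree default is an example.
double deadbandedHeadingError(double heading_error_rad, double deadband_deg = 5.0)
{
    const double deadband_rad = deadband_deg * std::acos(-1.0) / 180.0;
    return (std::fabs(heading_error_rad) < deadband_rad) ? 0.0 : heading_error_rad;
}
</syntaxhighlight>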
Other things to consider:
- Make the distance/clearance calculation function more efficient.
Video of the practical session test:
Simulation
Problem Definition
balabala
PRM (Probabilistic Road Map) Implementation
Addy:
A-star Algorithm Implementation
Theory
A-star searches a graph from a start node to a goal node by repeatedly expanding the node with the lowest estimated total cost f(n) = g(n) + h(n), where g(n) is the cost of the cheapest known path from the start to n and h(n) is a heuristic estimate of the remaining cost from n to the goal. With an admissible heuristic (one that never overestimates the remaining cost), the path returned when the goal is expanded is optimal.
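As a generic reference for the description above, a compact grid-based A-star sketch is shown below; the grid representation, unit step cost and Manhattan heuristic are illustrative and do not mirror our own code structure.

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cstdlib>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

// Generic 4-connected grid A-star on an occupancy grid (illustrative only).
struct Cell { int x, y; };

std::vector<Cell> aStar(const std::vector<std::vector<bool>>& occupied,
                        Cell start, Cell goal)
{
    const int H = static_cast<int>(occupied.size());
    const int W = static_cast<int>(occupied[0].size());
    auto idx = [W](const Cell& c) { return c.y * W + c.x; };
    auto h = [&goal](const Cell& c) {                 // Manhattan heuristic
        return std::abs(c.x - goal.x) + std::abs(c.y - goal.y);
    };

    using QItem = std::pair<double, int>;             // (f = g + h, cell index)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> open;
    std::unordered_map<int, double> g;                // best known cost-to-come
    std::unordered_map<int, int> parent;              // for path reconstruction

    g[idx(start)] = 0.0;
    open.push({static_cast<double>(h(start)), idx(start)});

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        const Cell c{open.top().second % W, open.top().second / W};
        open.pop();
        if (c.x == goal.x && c.y == goal.y) {          // goal reached: walk back
            std::vector<Cell> path{c};
            while (parent.count(idx(path.back()))) {
                const int p = parent[idx(path.back())];
                path.push_back({p % W, p / W});
            }
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int k = 0; k < 4; ++k) {
            const Cell n{c.x + dx[k], c.y + dy[k]};
            if (n.x < 0 || n.y < 0 || n.x >= W || n.y >= H || occupied[n.y][n.x])
                continue;
            const double tentative = g[idx(c)] + 1.0;  // unit step cost
            if (!g.count(idx(n)) || tentative < g[idx(n)]) {
                g[idx(n)] = tentative;                 // better path to n found
                parent[idx(n)] = idx(c);
                open.push({tentative + h(n), idx(n)});
            }
        }
    }
    return {};                                         // no path exists
}
</syntaxhighlight>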
Advantages & Disadvantages
Advantages: A-star is complete and, with an admissible heuristic, returns an optimal path; a well-chosen heuristic makes it expand far fewer nodes than uninformed search such as Dijkstra's algorithm. Disadvantages: memory usage grows with the open and closed lists, and the quality of the path is limited by the resolution of the underlying grid or roadmap.
Code structure
some code part
Result
Video recording for a-star implementation without PRM:
Small_Maze: https://youtu.be/MarLBC3igVI
Large_Maze: https://youtu.be/C9wwy7IZU4Y
Combining local and global planning
After combining A-star and PRM, the resulting global navigation run on the Compare_Map is shown here: https://youtu.be/jQf0NvlVsYE