Mobile Robot Control 2024 R2-D2
Introduction
This is the wiki of the R2-D2 team for the course Mobile Robot Control in Q4 of the year 2023-2024. The team consists of the following members.
Group members:
| Name | Student ID |
| --- | --- |
| Yuri Copal | 1022432 |
| Yuhui Li | 1985337 |
| Wenyu Song | 1834665 |
| Aditya Ade | 1945580 |
| Isabelle Cecilia | 2011484 |
| Pavlos Theodosiadis | 2023857 |
Week 1 - The art of not crashing
Simulation
Pavlos
My idea was to use the LiDAR sensor to detect any objects directly in front of the robot. While moving forward, the robot measures the distance to the object directly in front of it and stops before reaching a predefined threshold; in the video the threshold was 0.5 meters. To measure the distance to the object directly ahead, I used the measurement in the middle of the ranges list of a laser scan message. I also created a function which takes a laser scan message and an angle in degrees as arguments and returns the distance measurement of the ray at that angle; a sketch of such a function is shown below.
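A minimal sketch of that helper, assuming the laser message exposes angle_min, angle_increment and a ranges vector as in the course's emc::LaserData (the header and the function name rangeAtAngle are illustrative, not the actual group code):

```cpp
#include <cmath>
#include <emc/io.h>   // assumed to provide emc::LaserData in the course framework

// Return the range measured by the ray closest to the requested angle
// (in degrees, 0 = straight ahead). Returns -1 if the angle is outside the scan.
double rangeAtAngle(const emc::LaserData& scan, double angle_deg)
{
    double angle_rad = angle_deg * M_PI / 180.0;
    int index = static_cast<int>(std::lround((angle_rad - scan.angle_min) / scan.angle_increment));
    if (index < 0 || index >= static_cast<int>(scan.ranges.size()))
        return -1.0;                          // requested angle not covered by the scan
    return scan.ranges[index];
}
```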
Video displaying the run on the simulation environment:
https://www.youtube.com/watch?v=MXB-z1hzYxE
Isabelle
I took the laser reading at the middle angle (the middle value of the ranges array), together with the two readings before and after it. The robot moves forward by default and stops if any of these values drops below 0.3 m.
Video of the program in action: [link]
Wenyu
Video record of dont_crash implementation & measurements: https://youtu.be/brgnXSbE_CE
Aditya
For this assignment I decided to improve upon the existing code of my group member Pavlos. Because my idea for implementing dont_crash was similar to his, I decided to make the robot do more than what was asked using the existing implementation. I developed a function turnRobot and added a loop in the main code that makes the robot turn 90 degrees to the left as soon as it detects an obstacle in front of it; it then continues to move forward until it detects another obstacle, and the process repeats. (How does the function work? See the sketch below.) The simulation video can be found here [].
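Since the exact implementation is not documented here, the sketch below shows one plausible way such a turnRobot function could work: a timed, open-loop rotation at a fixed angular velocity. The 0.5 rad/s rate, the 10 Hz loop and the use of sendBaseReference are assumptions, not the actual group code.

```cpp
#include <cmath>
#include <emc/io.h>
#include <emc/rate.h>

// Open-loop turn: rotate in place at a fixed angular velocity for the time needed
// to cover (roughly) the requested angle, then stop rotating.
void turnRobot(emc::IO& io, double angle_rad = M_PI / 2.0, double omega = 0.5)
{
    const double freq = 10.0;                              // control loop frequency [Hz]
    emc::Rate r(freq);
    int cycles = static_cast<int>((angle_rad / omega) * freq);
    for (int i = 0; i < cycles && io.ok(); ++i)
    {
        io.sendBaseReference(0.0, 0.0, omega);             // positive omega = turn left
        r.sleep();
    }
    io.sendBaseReference(0.0, 0.0, 0.0);                   // stop the rotation
}
```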
Yuri
(Text/Video)
Yuhui
This code controls a robot's movement based on laser and odometry data at a fixed frequency (10 Hz). When the laser or odometry data is updated, the robot moves forward at a speed of 0.1 meters per second. If any laser scan point detects an obstacle closer than 0.4 meters, the robot stops moving. The emc::IO object is used to initialize and manage input/output operations, while the emc::Rate object controls the loop's execution frequency. The functions io.readLaserData(scan) and io.readOdometryData(odom) read laser and odometry data, and the io.sendBaseReference function sends speed commands to control the robot's movement. Video of the program: https://youtu.be/qtMjFI86jlA
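A minimal sketch of the loop described above (drive at 0.1 m/s, stop when any ray reports less than 0.4 m), assuming the emc/io.h and emc/rate.h headers and the sendBaseReference(vx, vy, va) signature of the course framework:

```cpp
#include <emc/io.h>
#include <emc/rate.h>

int main()
{
    emc::IO io;
    emc::Rate r(10);                          // fixed 10 Hz control loop
    emc::LaserData scan;
    emc::OdometryData odom;

    while (io.ok())
    {
        bool laser_updated = io.readLaserData(scan);
        bool odom_updated  = io.readOdometryData(odom);

        if (laser_updated || odom_updated)
        {
            bool obstacle = false;
            for (double range : scan.ranges)
                if (range > 0.01 && range < 0.4)   // ignore invalid near-zero readings
                    obstacle = true;

            // Stop if anything is closer than 0.4 m, otherwise drive forward at 0.1 m/s.
            io.sendBaseReference(obstacle ? 0.0 : 0.1, 0.0, 0.0);
        }
        r.sleep();
    }
    return 0;
}
```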
Practical Session
Running the code on the real robot made us realize that using a single ray doesn't make much sense with real-life obstacles. Instead, it would be better to use a range of rays based on the detection angle we would like to have (e.g. angles from -5 to +5 degrees around the robot's heading).
Video from the practical session:
https://youtube.com/shorts/uKRVOUrx3sM?feature=share
Problem Statement
Local planning's job is to find what velocity and direction command to send to the wheels given a "local" goal. The local goal in this context is usually some point on the global plan, chosen so that following it takes us along the global plan from start to finish. Local planning usually accounts for live sensor readings and is the algorithm most responsible for obstacle avoidance and the behaviour you witness as a human observer. It will try to follow the global plan if one is given, but if a dynamic object appears in the environment that the global planner did not account for, there is a high probability that the robot would crash into it. It is the job of the local planner to avoid such scenarios.
For local navigation, the two solutions we considered for the restaurant scenario are the Dynamic Window Approach and the Artificial Potential Fields.
Simulation
Dynamic Window Approach
We implemented the dynamic window approach algorithm based on the paper "The Dynamic Window Approach to Collision Avoidance", by D. Fox et al.
Implementation of code and functionality
(Pavlos)- Add code implementation and working, eg functions, classes
Tuning the scoring function turned out to be non-trivial, since different values result in different behaviours. For example, a lower weight on the heading error combined with a higher weight on the clearance score results in more exploration of the space, where the robot prefers to move towards empty space instead of moving to the goal. A sketch of the weighted sum is given below.
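The scoring we tuned has the usual DWA form of a weighted sum; a small sketch (the function and weight names are ours, not the actual implementation):

```cpp
#include <cmath>

// Score a candidate trajectory: higher is better. heading_error is the angle between
// the predicted end pose and the goal, clearance the distance to the closest obstacle
// along the trajectory, velocity the forward speed of the sample.
double scoreTrajectory(double heading_error, double clearance, double velocity,
                       double alpha, double beta, double gamma)
{
    // A small alpha combined with a large beta makes the robot prefer open space;
    // a large alpha makes it head straight for the goal.
    return alpha * (M_PI - std::fabs(heading_error)) + beta * clearance + gamma * velocity;
}
```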
Simulation Results
Video displaying the run on the simulation environment:
https://www.youtube.com/watch?v=v6rQc6_jtUE
Problems
(Pavlos)- write any specific problems with daw and proposed solution if any
Question for TAs: Can we plot somehow the trajectories in the RVIZ environment?
Answer: We can send a single path, as demonstrated in the global navigation assignment, but it's not clear whether we can send multiple paths.
Artificial Potential Fields
Implementation of code and functionality
(Yuri)- Add CODE implementation and working eg functions, classes
We implemented the Artificial Potential Fields algorithm [as in literature], where the robot is influenced by two types of potential fields: an attractive potential that pulls it towards the goal and a repulsive potential that pushes it away from obstacles. The attractive potential decreases as the robot nears the goal, which directs it to move closer. The repulsive potential increases as the robot approaches obstacles, which prevents collisions. The combination of these two potentials gives the resulting force vector that determines the robot's movement direction and speed; a sketch is given below.
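A sketch of how the two potentials translate into a force vector, following the classic artificial potential fields formulation (the constants k_att, k_rep and the influence radius d0 are placeholder tuning parameters, not values from our code):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Resulting force = attractive pull towards the goal + repulsive push from every
// obstacle closer than the influence radius d0.
Vec2 potentialFieldForce(const Vec2& robot, const Vec2& goal,
                         const std::vector<Vec2>& obstacles,
                         double k_att, double k_rep, double d0)
{
    Vec2 f{ k_att * (goal.x - robot.x), k_att * (goal.y - robot.y) };   // attractive term

    for (const Vec2& obs : obstacles)
    {
        double dx = robot.x - obs.x, dy = robot.y - obs.y;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d > 1e-6 && d < d0)
        {
            // Repulsive term grows as the robot gets closer to the obstacle.
            double mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d);
            f.x += mag * dx / d;
            f.y += mag * dy / d;
        }
    }
    return f;   // direction gives the heading, magnitude can be scaled to a velocity
}
```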
Simulation Results
(Yuri)
Video displaying the run on the simulation environment:
(link)
Problems
(Yuri) - write any specific problems with apf and proposed solution if any
One of the challenges we faced was tuning the parameters(?). (ANY POSSIBLE SOL from students side?)
Assignment Questions
Explain implementation details about DWA: Pavlos
1. What are the advantages and disadvantages of your solutions?
DWA (Pavlos)
APF (Yuri)
2. What are possible scenarios which can result in failures for each implementation?
DWA: (Pavlos)
APF: (Yuri)
3. How would you prevent these scenarios from happening?
DWA: (Pavlos)
APF: (Yuri)
4. For the final challenge, you will have to link local and global navigation. The global planner will provide a list of (x, y) (or (x, y, θ) ) positions to visit. How could these two algorithms be combined?
(TO WRITE)
Practical Session
Dynamic Window Approach
When we ran our code on the actual robot we noticed behaviour similar to the simulation. The robot would get stuck in local minima for high values of the heading coefficient, and for smaller values it would prefer to avoid the obstacles instead of heading towards the goal. We again observed the oscillating behaviour when no obstacles were near the robot and it was heading towards the goal. We believe this behaviour is due to the discretization of the possible heading directions. More specifically, when the scoring function is only affected by the heading (e.g. no obstacles near the robot), the robot wants to have the perfect heading (zero heading error) towards the goal. However, this isn't possible because we have discrete headings, none of which is perfect at any given time. As a result, the robot first picks a non-perfect heading, which causes a small divergence (heading error), and then picks a heading to fix this divergence, which has the same effect but on the other side. That creates what we see as an oscillating path towards the goal.
What we could do to fix this:
- Decrease the maximum rotational speed. That would make the sampling finer but would reduce the exploration for a single cycle of the algorithm. In theory we could also increase the number of sampling points, but when we tried that, the robot couldn't keep up with the 4 Hz frequency due to the computational workload. Even with more samples we would still end up with this oscillating behaviour (a perfect heading would still not be possible), but perhaps with a smaller magnitude.
- Allow some small heading error. One way to do this would be to set the heading error used in the scoring function to zero for real heading errors smaller than some value (e.g. 5 degrees).
Other things to consider:
- Make the distance/clearance calculation function more efficient.
Video of the practical session test:
Artificial Potential Field Approach
(TO BE TESTED)
Problem Statement
The goal of the robot is to navigate through a map from a starting point to an end point. But how will the robot navigate the environment efficiently while avoiding any known obstacles? This is where global navigation takes charge. It provides a systematic path that the robot can follow to reach its destination. There are multiple ways to implement global navigation, but the team decided to use the PRM and A* algorithms.
Simulation
Probabilistic Road Map (PRM)
Implementation and Code Exercises Explanation
(Addy) - Write code exercises
Probabilistic Road Mapping is a technique used to create a path using nodes/vertices and edges/connections. To understand the implementation in detail, you can look at the code on our GitLab page. In brief, we start by randomly sampling nodes in the map and connect every node to nearby nodes with an edge if that edge is valid. There are various advantages in using the PRM algorithm compared to a grid-based approach, thanks to its flexibility and scalability. In the code, we used Bresenham's line algorithm to check candidate edges, so that only edges avoiding all existing known obstacles in the map are created; a sketch of this check is given below. PRM then relies on search techniques like Dijkstra or A* to find the most efficient path in the graph it generates.
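A sketch of the edge-validity test: walk the grid cells between two nodes with Bresenham's line algorithm and reject the edge as soon as an occupied cell is hit. The occupancy-grid representation and function name are assumptions for illustration; our actual code is on GitLab.

```cpp
#include <cstdlib>
#include <vector>

// grid[y][x] == true means the cell is occupied by an obstacle.
using Grid = std::vector<std::vector<bool>>;

// Returns true if the straight edge between (x0,y0) and (x1,y1) crosses no occupied cell.
bool edgeIsFree(const Grid& grid, int x0, int y0, int x1, int y1)
{
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    while (true)
    {
        if (grid[y0][x0])
            return false;            // edge passes through an obstacle
        if (x0 == x1 && y0 == y1)
            return true;             // reached the other node without a collision
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```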
Advantages and Disadvantages
(Addy)
Although PRM is a very effective graph-generation algorithm, it has certain drawbacks. For example, its effectiveness is highly dependent on the hyperparameters we provide, such as the number of nodes, the minimum distance between two nodes for them to be connected, etc.
Simulation Results
(Addy) - add prm result
Results: The PRM results for the given map_compare are given here [].
Problems
(Addy)- add problem discussed with Pavlos
A-star Algorithm
Implementation and Code Exercises Explanation
The basic algorithm of A-star is listed below:
- Set two empty node lists, open_nodes and closed_nodes. The OPEN list keeps track of the nodes that still need to be examined; the CLOSED list keeps track of the nodes that have already been examined (and need no further attention).
- Set three functions:
  - _nodelist[nodeID].g: g(n) = the cost of getting from the initial node to n.
  - _nodelist[nodeID].h: h(n) = the heuristic function, an estimate of the cost of getting from n to the goal node.
  - _nodelist[nodeID].f: f(n) = g(n) + h(n), the estimate of the best solution that goes through n.
- Introduce the parent node: the node which is the predecessor of the current node (on the optimal path from the start node to the current node) is known as its parent node. Each node also maintains a pointer to its parent.

The main algorithm follows the steps below:

- Add _start_nodeID to the OPEN list.
- Repeat the following steps:
  - Get the node n from open_nodes with the lowest f(n) and add n to closed_nodes.
  - For every neighbour node of the current node current_nodeID (e.g. for (int neighbourID : neighbours)):
    - If the neighbour node _nodelist[neighbourID] is unreachable or already in closed_nodes, ignore it. Otherwise, do the following.
    - If the neighbour node is not in the OPEN list, add it to the OPEN list, set the current node as the neighbour node's parent (_nodelist[neighbourID].parent_node_ID = current_nodeID;) and store the g(n), h(n) and f(n) of that neighbour node.
    - If the neighbour node is already in the OPEN list, check whether this path (directly from the current node to the neighbour node) is better, using the g(n) value as a reference; a smaller g(n) value indicates a better path. If so, set the current node as the parent of that neighbour node and recalculate its g(n) and f(n).
  - If n is _goal_nodeID, we have found the solution; if open_nodes is empty and _goal_nodeID has not been reached, there is no path. In either case, stop searching.
- Save the path: starting from _goal_nodeID, follow every node's parent_node_ID until you reach the start node.
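A condensed sketch of this search loop is shown below. The node members follow the names used above, but the graph container, the Euclidean distance heuristic and the plain-vector open/closed lists are simplifications compared to the actual exercise code.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Node
{
    double x, y;
    std::vector<int> neighbours;     // IDs of connected nodes
    double g = std::numeric_limits<double>::infinity();
    double h = 0.0, f = std::numeric_limits<double>::infinity();
    int parent_node_ID = -1;
};

std::vector<int> aStar(std::vector<Node>& nodes, int start_ID, int goal_ID)
{
    auto dist = [&](int a, int b) {
        return std::hypot(nodes[a].x - nodes[b].x, nodes[a].y - nodes[b].y);
    };

    std::vector<int> open_nodes{start_ID}, closed_nodes;
    nodes[start_ID].g = 0.0;
    nodes[start_ID].h = dist(start_ID, goal_ID);
    nodes[start_ID].f = nodes[start_ID].h;

    while (!open_nodes.empty())
    {
        // Take the open node with the lowest f(n) and move it to the closed list.
        auto it = std::min_element(open_nodes.begin(), open_nodes.end(),
                                   [&](int a, int b) { return nodes[a].f < nodes[b].f; });
        int current = *it;
        open_nodes.erase(it);
        closed_nodes.push_back(current);

        if (current == goal_ID)
        {
            // Walk back over the parent pointers to recover the path.
            std::vector<int> path;
            for (int id = goal_ID; id != -1; id = nodes[id].parent_node_ID)
                path.push_back(id);
            std::reverse(path.begin(), path.end());
            return path;
        }

        for (int nb : nodes[current].neighbours)
        {
            if (std::find(closed_nodes.begin(), closed_nodes.end(), nb) != closed_nodes.end())
                continue;                          // already examined
            double g_new = nodes[current].g + dist(current, nb);
            bool in_open = std::find(open_nodes.begin(), open_nodes.end(), nb) != open_nodes.end();
            if (!in_open || g_new < nodes[nb].g)
            {
                nodes[nb].g = g_new;
                nodes[nb].h = dist(nb, goal_ID);
                nodes[nb].f = nodes[nb].g + nodes[nb].h;
                nodes[nb].parent_node_ID = current;
                if (!in_open)
                    open_nodes.push_back(nb);
            }
        }
    }
    return {};                                     // open list exhausted: no path exists
}
```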
Advantages and Disadvantages
Advantages:

- A-star can find the best path quickly, faster than other algorithms such as Dijkstra.
- Compared to Dijkstra, A-star expands fewer nodes, so its efficiency is higher.
- The evaluation function _nodelist[nodeID].f can easily be modified to better suit the implementation.

Disadvantages:

- A-star does not go through every feasible solution, so the result may not be the global optimum (this can be improved by modifying the heuristic function _nodelist[nodeID].h).
- When several of a node's neighbours share the same minimum evaluation function _nodelist[neighbourID].f, A-star cannot guarantee that the final path is the best path.
- Large amount of computation: growth in the number of nodes leads to rapid growth in computation (complexity O(n^2 * log(n))).
Problems
As mentioned above, the computational complexity can easily explode as the number of nodes increases. However, when using PRM to generate a better graph, more nodes are necessary: if the nodes are too sparse, it is hard to generate the best path from the given graph, and it is likely that no continuous path will be formed at all. Thus, when combining PRM and A-star, there is a trade-off between the number of nodes and the optimality of the path.
Please check which video you want to keep and also remove noise/sound from the video.
Combining local and global planning
(Yuhui) - add code implementation and problems if
The purpose of this code is to have the mobile robot use the Dynamic Window Approach (DWA) for path following and real-time obstacle avoidance. First, the program loads a configuration file to determine which path-planning mode to use (grid map or probabilistic roadmap) and performs path planning based on that configuration. The robot then follows the planned path. In the main cycle, the robot updates its current position and obstacle information by reading odometry and laser data. The DWA algorithm then computes the currently optimal velocity command, allowing the robot to avoid obstacles and move to the next target point. This is done by calling the dwa_step function, which calculates the best linear and angular velocity based on the current pose, the laser point cloud data and the target point. Finally, the program sends the calculated velocity commands to the robot to perform the actual motion.
The program first loads the path-planning parameters from the configuration file. The configuration file path is determined from the command-line parameters; if no path is provided, the default configuration file is used. The contents of the configuration file are loaded using the load_config_file(filename) function. Based on the content of the configuration file, the program determines which path-planning mode to use: a grid map (MODE_GRIDMAP) or a probabilistic roadmap (MODE_PRM). Depending on the selected mode, the program builds the corresponding planner object and loads the corresponding map or graph data. The planner object is then used to call the planPath() method for path planning. If a path is found, the program converts the IDs of the path nodes to actual coordinate points and prints the planned path.
Next, the initial position and velocity of the robot are initialized and the target points are set (extracted from the planned path points). Entering the main loop, the robot continuously performs the following steps: read the odometry data and update the current position; calculate the error between the current position and the current target point; if the robot has reached the current target point, switch to the next target point; read the laser data and turn it into point cloud data representing the surrounding obstacles; invoke the DWA algorithm dwa_step to calculate the optimal motion velocity (linear and angular velocity) from the current pose, the point cloud data and the target point; and finally send the calculated velocity command to the robot to control its actual movement.
The DWA algorithm implements obstacle avoidance and path following through the following steps. Velocity sampling: sample different linear and angular velocities in the space of possible velocities. Trajectory prediction: for each sampled velocity, predict the trajectory of the robot over a short period of time. Collision detection: check whether the predicted trajectory will collide with an obstacle. Trajectory scoring: calculate the score of each collision-free trajectory, determined by three factors: the distance between the end of the trajectory and the target point, the speed of the trajectory, and the safety of the trajectory. Selecting the optimal velocity: select the velocity with the highest score as the current control command. Through these steps, the DWA algorithm performs real-time path following and obstacle avoidance, ensuring that the robot can avoid obstacles while advancing to the target point.
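A simplified sketch of how a dwa_step function can follow those five steps. The velocity limits, time step, clearance cap and scoring weights below are placeholder values, not the tuned parameters of our implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pose    { double x, y, theta; };
struct Point   { double x, y; };
struct Command { double v, w; };

// One DWA iteration: sample (v, w) pairs, roll out a short trajectory for each sample,
// discard trajectories that violate the clearance condition and return the best-scoring one.
Command dwa_step(const Pose& pose, const std::vector<Point>& obstacles, const Point& goal)
{
    const double v_max = 0.5, w_max = 1.0;   // velocity limits (placeholders)
    const double dt = 0.1;                   // prediction time step [s]
    const int    steps = 10;                 // prediction horizon: 1 second
    const double safe_dist = 0.3;            // admissibility: minimum obstacle clearance [m]

    Command best{0.0, 0.0};
    double best_score = -std::numeric_limits<double>::infinity();

    for (double v = 0.0; v <= v_max; v += 0.05)
        for (double w = -w_max; w <= w_max; w += 0.1)
        {
            // Trajectory prediction for this velocity pair.
            Pose p = pose;
            double clearance = 2.0;          // clearance contribution capped at 2 m
            for (int i = 0; i < steps; ++i)
            {
                p.theta += w * dt;
                p.x += v * std::cos(p.theta) * dt;
                p.y += v * std::sin(p.theta) * dt;
                for (const Point& o : obstacles)
                    clearance = std::min(clearance, std::hypot(o.x - p.x, o.y - p.y));
            }
            if (clearance < safe_dist)
                continue;                    // collision risk: velocity pair not admissible

            // Heading error at the end of the trajectory, wrapped to [0, pi].
            double heading_err = std::atan2(goal.y - p.y, goal.x - p.x) - p.theta;
            heading_err = std::fabs(std::atan2(std::sin(heading_err), std::cos(heading_err)));

            double score = 1.0 * (M_PI - heading_err) + 0.5 * clearance + 0.2 * v;
            if (score > best_score) { best_score = score; best = {v, w}; }
        }
    return best;                             // best linear and angular velocity
}
```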
To use our local planner with the global-planning code we had to make some modifications to the implementation. Specifically, we implemented the local planner as a class. The planner's parameters are now attributes of an object, and the driver code (main) can set these parameters and call the relevant methods. Also, the velocities are now sent to the robot from the main file instead of from inside the planner's functions. The local planner's method that is used the most in the main code is the "local_planner_step" method, which finds the next translational and rotational velocities given the current robot pose, the robot dynamics, the next (local) goal and the laser measurements.
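The resulting interface looks roughly like the following sketch; the member names other than local_planner_step are illustrative and the method body (which performs the DWA steps sketched above) is omitted.

```cpp
#include <vector>

struct Pose    { double x, y, theta; };
struct Point   { double x, y; };
struct Command { double v, w; };

// The local planner as a class: main sets the parameters once, then calls
// local_planner_step every cycle and sends the returned velocities itself.
class LocalPlanner
{
public:
    // Tunable parameters, now attributes instead of hard-coded constants.
    double max_linear_velocity  = 0.5;
    double max_angular_velocity = 1.0;
    double obstacle_distance_threshold = 0.3;   // admissibility condition

    // Compute the next translational and rotational velocity from the current pose,
    // the laser point cloud and the next (local) goal (definition omitted here).
    Command local_planner_step(const Pose& current_pose,
                               const std::vector<Point>& laser_points,
                               const Point& local_goal);
};
```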
After making the necessary changes, we were able to run the global navigation path generation together with the local navigation. However the robot's behavior wasn't the one expected. The robot would deviate from the path and/or collide with the walls in some cases. To solve the problem we fine tuned the PRM's parameters for generating a graph with smoother paths and we went over the DWA code to find the possible bugs that were causing the collisions. The main problem was in the admissibility condition for a new pair of velocities, where we needed to increase the distance threshold between the robot and an obstacle for non-admissible velocities.
Video after the described changes:
(Sketch of maze with proposed nodes and connections in between). ??
Assignment Questions
1. How could finding the shortest path through the maze using the A* algorithm be made more efficient by placing the nodes differently? Why would this be more efficient? (Addy) - PRM*
2. Would implementing PRM in a map like the maze be efficient? (Addy)
3. What would change if the map suddenly changes (e.g. the map gets updated)? (Addy)
4. How did you connect the local and global planner? (Yuhui)
In this code, we combine the two by taking the path nodes generated by the global path-planning algorithm one by one as the target points of the Dynamic Window Approach (DWA) algorithm. First, the program uses a global path-planning algorithm (such as the A* algorithm) to generate a global path from the start to the end point, and converts the coordinates of the path nodes into a sequence of target-point coordinates. In the main loop, the robot reads the path nodes in turn, passing the current target point to the DWA algorithm.
Based on the current robot pose, the obstacle point cloud generated from the laser scan, and the current target point, the DWA algorithm calculates the optimal velocity command (linear and angular velocity). The robot moves according to these commands, and when it approaches the current target point, the program switches to the next target point and continues to execute the DWA algorithm. In this way, the DWA algorithm handles obstacle avoidance and path following on a local scale, while the global path-planning algorithm provides a global path to ensure that the robot can get from the starting point to the end point smoothly. A sketch of the target-point switching is shown below.
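In the main loop this boils down to a simple waypoint-following pattern; a sketch (the reach tolerance and the function name are assumptions for illustration):

```cpp
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Advance through the globally planned waypoints: hand the current waypoint to the
// local planner and move on to the next one once the robot is close enough to it.
int updateTargetIndex(const Point& robot, const std::vector<Point>& path,
                      int target_index, double reach_tolerance = 0.2)
{
    if (target_index >= static_cast<int>(path.size()))
        return target_index;                                  // already at the final goal
    const Point& target = path[target_index];
    double error = std::hypot(target.x - robot.x, target.y - robot.y);
    if (error < reach_tolerance)
        ++target_index;                                       // switch to the next waypoint
    return target_index;
}
```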
5. Test the combination of your local and global planner for a longer period of time on the real robot. What do you see that happens in terms of the calculated position of the robot? What is a way to solve this? (Pavlos and addy)
6. Run the A* algorithm using the gridmap (with the provided nodelist) and using the PRM. What do you observe? Comment on the advantage of using PRM in an open space.
The result of the A-star implementation on the grid map is shown below:
Small_Maze: https://youtu.be/MarLBC3igVI
Large_Maze: https://youtu.be/C9wwy7IZU4Y
After combining A-star and PRM, global navigation result in Compare_Map is shown: https://youtu.be/jQf0NvlVsYE
And this video (A-star + PRM, but with a higher number of nodes): https://youtu.be/1V__BW8XyRE
For the implementation without PRM, it can be observed that a blue line goes through every node in path_node_IDs in order, and the robot follows the blue line exactly, from _start_nodeID to _goal_nodeID as given in the JSON file.

For the implementation with PRM, it can be observed that a figure of compare_map is generated. The figure is generated with PRM, showing the nodes and edges in the open space of the map (the white area). Then, following the same process as the implementation without PRM, a blue line goes through every node in the planned path, and the robot follows the blue line.

One thing needs to be mentioned for both cases: the robot follows the edges of the path exactly. When it needs to turn, the robot reaches a node, stops moving, starts rotating, and then starts the next section of the path.
Practical Session
After working with the real robot, we noticed some problems with the DWA local planner. More specifically, the robot would crash if obstacles were too close to it, and the robot wouldn't move if the frequency of the planner was high. Investigating this behaviour, we found and resolved some bugs related to DWA. The issue we still need to solve is the case of a dynamic obstacle that ends up exactly on a node position, making a local goal unreachable.
(pavlos) - Explain why the robot wouldn't move when the bag was in the front even though that there was enough clearance (lidar vs robot frame).
Week 4 and 5 - Localization
Problem Statement
Assignment Questions
1. Assignment for first week
Assignment 0.1
How is the code structured?

Particle.h is included in ParticleFilterBase.h, and the latter is included in ParticleFilter.h.
What is the difference between the ParticleFilter and ParticleFilterBase classes, and how are they related to each other?

The ParticleFilter class inherits from the ParticleFilterBase class and extends its functionality. ParticleFilterBase provides the basic interfaces (initialise, update, etc.), and ParticleFilter implements the specific functions (uniform/Gaussian initialisation, particle propagation, weight recalculation, etc.).
How are the ParticleFilter and Particle classes related to each other?

The Particle class represents a single particle in the particle filter, including its position, weight, likelihood and propagation. The ParticleFilter class is responsible for managing the entire particle filter; it may call methods of the Particle class to propagate individual particles and to compute likelihoods and weights.
Both the ParticleFilterBase and Particle classes implement a propagation method. What is the difference between the methods?

In the Particle class, void propagateSample propagates a single particle based on the received odometry information, at the particle level. In ParticleFilterBase, the propagation method propagates the particles based on their kinematics, the odometry information and random noise: it iterates over every particle in the filter and applies the propagate method of the Particle class to each individual particle. A sketch of this relation is given below.
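Conceptually, the relation between the two propagation methods looks like this. The types are heavily simplified stand-ins for the exercise code, and the noise term handled at the filter level is omitted for brevity.

```cpp
#include <vector>

// Simplified stand-in for the odometry increment passed around in the exercise code.
struct OdomUpdate { double dx, dy, da; };

class Particle
{
public:
    double x = 0.0, y = 0.0, a = 0.0, weight = 1.0;

    // Particle level: move this single particle according to the odometry update.
    void propagateSample(const OdomUpdate& odom)
    {
        x += odom.dx;
        y += odom.dy;
        a += odom.da;
    }
};

class ParticleFilterBase
{
public:
    std::vector<Particle> particles;

    // Filter level: iterate over all particles and let each one propagate itself
    // (in the real code a random noise sample is added to each update as well).
    void propagateSamples(const OdomUpdate& odom)
    {
        for (Particle& p : particles)
            p.propagateSample(odom);
    }
};
```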
Assignment 1.1
What are the advantages/disadvantages of using the first constructor, what are the advantages/disadvantages of the second one?
The first constructor generates N uniformly distributed particles over the whole map, and the second constructor generates N normally distributed particles around the position of the robot.

For the first one, spreading the particles over the whole map can cover the whole map at low computational cost, thus avoiding getting trapped in a local minimum/maximum. But most particles are distributed in irrelevant places, and calculating their posterior probabilities costs a lot.

For the second one, more particles are distributed in the high-probability region, which increases accuracy. But it can easily get trapped in a local extremum. Besides, the choice of variance influences the result as well, so it should be chosen properly. A sketch of the two initialisation strategies is given below.
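A sketch of what the two initialisation strategies amount to; the free functions, map bounds and standard deviations are simplified placeholders, not the actual constructor signatures of the exercise code.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Particle { double x, y, a, weight; };

// Uniform initialisation: spread N particles over the whole map (global localisation).
std::vector<Particle> initUniform(int N, double xmax, double ymax, std::mt19937& gen)
{
    std::uniform_real_distribution<double> ux(0.0, xmax), uy(0.0, ymax), ua(-M_PI, M_PI);
    std::vector<Particle> particles(N);
    for (Particle& p : particles)
        p = {ux(gen), uy(gen), ua(gen), 1.0 / N};
    return particles;
}

// Gaussian initialisation: concentrate N particles around a known initial pose (tracking).
std::vector<Particle> initGaussian(int N, double x0, double y0, double a0,
                                   double sigma_xy, double sigma_a, std::mt19937& gen)
{
    std::normal_distribution<double> nx(x0, sigma_xy), ny(y0, sigma_xy), na(a0, sigma_a);
    std::vector<Particle> particles(N);
    for (Particle& p : particles)
        p = {nx(gen), ny(gen), na(gen), 1.0 / N};
    return particles;
}
```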
In which cases would we use either of them?
In cases where the initial position is known (such as tracking), the normal-distribution constructor is recommended; in cases where the initial position is unknown (such as global localization), the uniform-distribution constructor is recommended.
[screenshot] From the screenshot we can see the red arrows (with x, y and direction) normally distributed around the position of the robot.
Assignment 1.2
Interpret the resulting filter average. What does it resemble? Is the estimated robot pose correct? Why?
weightedPosition[0] += particle.getWeight() * particle.getPosition()[0];
weightedPosition[0] = weightedPosition[0] / total_weight;

The filter calculates the weighted average position (x, y, direction) of all the particles, as in the fragment above, and returns a position vector that resembles the real robot's position. The estimated result is close to, but not exactly, the real position. Because resampling is not set up yet, the estimate depends on a limited set of N particles, and especially on the few particles with high weight, which introduces a large error in the estimate. The complete computation is sketched below.
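The full weighted average over all three state components looks roughly like this, assuming the getter names shown in the fragment above (the Particle stand-in is simplified):

```cpp
#include <array>
#include <vector>

// Simplified stand-in exposing the getters used in the fragment above.
struct Particle
{
    std::array<double, 3> pose;   // x, y, angle
    double weight;
    double getWeight() const { return weight; }
    const std::array<double, 3>& getPosition() const { return pose; }
};

// Weighted mean of the particle set; each particle contributes proportionally to its
// weight. (Note: naively averaging the angle component can fail near +/- pi.)
std::array<double, 3> averageState(const std::vector<Particle>& particles, double total_weight)
{
    std::array<double, 3> weightedPosition{0.0, 0.0, 0.0};
    for (const Particle& particle : particles)
        for (int i = 0; i < 3; ++i)
            weightedPosition[i] += particle.getWeight() * particle.getPosition()[i];
    for (int i = 0; i < 3; ++i)
        weightedPosition[i] /= total_weight;
    return weightedPosition;
}
```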
Imagine a case in which the filter average is inadequate for determining the robot position.
[screenshot]: same screenshotas above. From it we can see a big green arrow (x,y,direction) represent the average state (a.k.a. the result after filter).
Assignment 1.3
Why do we need to inject noise into the propagation when the received odometry information already has an unknown noise component?
Because the motion model (how particles mimic the robot's moves based on its motion data) is not perfect and is always associated with noise, we have to model the uncertainty in motion (such as wheel slip or uneven terrain). Injecting noise ensures that the filter can handle the inherent uncertainty in robot motion. Adding noise to the motion data is also important to ensure that a wide range of positions is tested.
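In practice the injected noise is a zero-mean Gaussian perturbation added to the odometry increment before it is applied to a particle; a sketch (the standard deviations are placeholder tuning parameters):

```cpp
#include <random>

// Simplified odometry increment: translation in x and y plus a rotation.
struct OdomUpdate { double dx, dy, da; };

// Perturb the increment with Gaussian noise so that the particle cloud spreads out
// enough to cover the real (unknown) motion uncertainty.
OdomUpdate addMotionNoise(const OdomUpdate& odom, std::mt19937& gen,
                          double sigma_trans = 0.02, double sigma_rot = 0.01)
{
    std::normal_distribution<double> n_trans(0.0, sigma_trans), n_rot(0.0, sigma_rot);
    return { odom.dx + n_trans(gen), odom.dy + n_trans(gen), odom.da + n_rot(gen) };
}
```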
What happens when we stop here, and do not incorporate a correction step?
Without the correction step, the weight of each particle will not change according to its similarity to the real position of the robot: particles within the "correct" region will not get higher weights than those in other areas. With a correction step, particles with higher weights are more likely to be selected during resampling, so the particle set gradually shifts from bad particles to good particles.
Recording video of localization result: https://youtu.be/yGidSriYTFc
2. Assignment for second week
(pavlos)