Mobile Robot Control 2021 Group 5
Team members
Jaap van der Stoel - 0967407
Roel van Hoof - 1247441
Remco Kuijpers - 1617931 - r.kuijpers1@student.tue.nl - 0611135100
Timo de Groot - 1629352 - t.d.groot2@student.tue.nl
Roy Schepers - 0996153 - r.j.m.schepers@student.tue.nl - 0631329826
Arjan Klinkenberg - 1236143 - a.m.klinkenberg@student.tue.nl
Design document
- The Design Document: File:Design Document Group 5.pdf
Escape Room Challenge
Strategy
At the start of the escape room challenge, PICO searches for the closest wall. It then drives to this wall and aligns itself sideways to it. PICO then drives straight ahead until either another wall or the exit is detected. In the first case, PICO rotates to align itself with the new wall and drives forward again. In the second case, PICO rotates to align with the exit and then drives through the corridor towards the finish line. A graphical representation of the strategy can be seen in Figure 1.
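This behaviour can be viewed as a simple finite state machine. The sketch below is purely illustrative: the state names, transition flags and loop structure are our own and do not correspond to functions in the actual PICO software.

<syntaxhighlight lang="cpp">
#include <iostream>

// Illustrative state machine for the escape room strategy (names are hypothetical).
enum class State { FindWall, DriveToWall, AlignToWall, FollowWall, ExitCorridor, Finished };

int main() {
    State state = State::FindWall;
    // Placeholder sensor flags; in the real software these would be updated from the
    // laser range finder on every loop iteration.
    bool wallReached = true, aligned = true, frontWallDetected = false,
         gapDetected = true, finishReached = true;

    while (state != State::Finished) {
        switch (state) {
            case State::FindWall:     state = State::DriveToWall;                  break;  // rotate 360 deg, pick closest wall
            case State::DriveToWall:  if (wallReached) state = State::AlignToWall; break;  // drive until 0.4 m from the wall
            case State::AlignToWall:  if (aligned) state = State::FollowWall;      break;  // rotate until the LRF ratio criterion is met
            case State::FollowWall:
                if (frontWallDetected) state = State::AlignToWall;                         // corner: turn ~90 deg and realign
                else if (gapDetected)  state = State::ExitCorridor;                        // gap found: start the exit procedure
                break;
            case State::ExitCorridor: if (finishReached) state = State::Finished;  break;  // centre in the hallway, drive to the finish
            default: break;
        }
    }
    std::cout << "Escaped the room" << std::endl;
    return 0;
}
</syntaxhighlight>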
Implementation
Starting and finding nearest wall
At the start of the run PICO turns 360 degrees to look for the wall closest to it. Once this direction is identified, PICO turns towards it and drives straight ahead until it is 0.4 m away from the wall.
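Finding the closest wall boils down to locating the minimum valid range in a full laser scan and remembering the corresponding angle. The sketch below shows this idea on a plain array of range readings; the data layout, the angle convention and the minimum-range cutoff are illustrative assumptions and do not reflect the actual PICO/emc interface.

<syntaxhighlight lang="cpp">
#include <vector>
#include <cstddef>
#include <cmath>
#include <limits>

// Illustrative only: find the beam with the smallest valid range and return its angle.
// The scan is assumed to span [angle_min, angle_min + (N-1)*angle_increment]; these
// parameters are placeholders, not the real PICO laser configuration.
double closestWallAngle(const std::vector<float>& ranges,
                        double angle_min, double angle_increment) {
    std::size_t best = 0;
    float best_range = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        if (std::isfinite(ranges[i]) && ranges[i] > 0.1f && ranges[i] < best_range) {
            best_range = ranges[i];
            best = i;
        }
    }
    return angle_min + best * angle_increment;  // direction to turn towards
}
</syntaxhighlight>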
Aligning to the walls
Once PICO has driven to a wall, it has to align itself to that wall. To do so, laser range finder data is used. The laser range finder should return the same distance at indices 0 and 220 for PICO to be aligned to the wall. Figure 3 shows how the laser range finder indices are defined. After PICO has driven to a nearby wall, it starts rotating quickly until the ratio between the laser range finder readings at indices 0 and 220 deviates from 1 by less than 0.2. It then rotates slowly until this deviation is less than 0.04, at which point PICO is assumed to be aligned to the wall.
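A minimal sketch of this two-stage alignment check, assuming the thresholded quantity is the deviation of the ratio of the two readings from 1 (the thresholds 0.2 and 0.04 come from the text; everything else is illustrative):

<syntaxhighlight lang="cpp">
#include <vector>
#include <cmath>

// Illustrative alignment check: compare the readings at indices 0 and 220
// (the scan is assumed to contain at least 221 readings).
// Returns 2 for "rotate fast", 1 for "rotate slowly", 0 for "aligned".
int alignmentStage(const std::vector<float>& ranges) {
    double ratio_error = std::fabs(ranges[0] / ranges[220] - 1.0);
    if (ratio_error >= 0.2)  return 2;  // far from aligned: rotate quickly
    if (ratio_error >= 0.04) return 1;  // close: rotate slowly
    return 0;                           // assumed aligned with the wall
}
</syntaxhighlight>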
Driving along the wall
Now that PICO has aligned itself to the wall, it drives alongside it. While doing so, two situations can be detected with the laser range finder. First, if LRF reading [500] detects a wall nearby, PICO stops, turns approximately 90 degrees, aligns itself to the new wall, and starts driving forward again. Secondly, if the ratio between LRF readings [0] and [220] becomes too large, it is likely that a gap has been found in the wall PICO is following. In that case, PICO stops and executes its exit procedure.
Exiting The Room
When PICO has identified an opening in a wall, it starts the exit procedure. It first turns 90 degrees so that it faces the opening. It then drives sideways until it is in the middle of the exit corridor. Finally, PICO drives forward while checking that it does not get too close to the walls.
Centering in the Hallway
For the escape room challenge the decision was made to center PICO in the exit hallway. The idea is that this maximizes the distance to the walls and thus minimizes the chance of a collision. In applications where the robot needs to navigate a corridor in which other dynamic objects, such as humans, are present, it may be desirable to make PICO drive on one side of the hallway. However, this is not the case for this challenge, so the more robust solution is preferred. Centering PICO in the hallway is accomplished by measuring the distance to the walls at a pre-determined angle to the left and right of PICO's forward direction. On each side of the forward direction, 20 datapoints are measured in a band around the set measurement angle α, as shown in Figure 4. The average of the 20 measured distances is then calculated to mitigate the effect of noise on the measurement. The distances to the left and right are then compared to each other, and as long as they are within a certain range of each other PICO is considered centered. If they are not, PICO moves either to the right or to the left to compensate. While PICO is moving left or right it continues to drive forward, in order to progress as fast as possible.
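The sketch below illustrates this averaging-and-comparison step on a plain array of range readings. The index offsets, the band of 20 beams and the tolerance value are illustrative assumptions; only the idea of averaging 20 readings per side and comparing the two sides comes from the text.

<syntaxhighlight lang="cpp">
#include <vector>
#include <cstddef>

// Illustrative centering check. 'left_start' and 'right_start' are the (hypothetical)
// first indices of the 20-beam bands around the measurement angle alpha on either side
// of the forward beam. Returns -1 to move left, +1 to move right, 0 when centered.
int centeringCommand(const std::vector<float>& ranges,
                     std::size_t left_start, std::size_t right_start,
                     double tolerance = 0.05) {
    const std::size_t band = 20;  // 20 datapoints per side, as described in the text
    double left = 0.0, right = 0.0;
    for (std::size_t i = 0; i < band; ++i) {
        left  += ranges[left_start  + i];
        right += ranges[right_start + i];
    }
    left /= band;
    right /= band;

    if (left - right > tolerance) return -1;  // more room on the left: move left
    if (right - left > tolerance) return +1;  // more room on the right: move right
    return 0;                                 // within tolerance: keep driving straight
}
</syntaxhighlight>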
Results Escape Room Challenge
The escape room challenge was successfully completed and PICO escaped within 42 seconds, as can be seen in the video below.
Improvements
A few improvements can be considered that may aid in designing better software for the hospital challenge. One possible improvement is to determine, when aligning to the wall, which direction of rotation is fastest. [add more improvements]
Hospital challenge
Strategy
The overall strategy can be seen in Figure 5. However, the localization procedure and the path planning method still need to be investigated so that the best option can be used in the hospital challenge.
Localization procedure
Path planning
Within this section, different path planning methods are investigated to see which method should be used in the hospital challenge.
Localization
AMCL algorithm [4]
What is AMCL?
AMCL is an abbreviation of Adaptive Monte Carlo Localization, which is an extended version of Monte Carlo Localization (MCL). To understand AMCL, MCL needs to be explained first. MCL is a probabilistic localization system for a robot moving in 2D. It uses a particle filter to track the pose of the robot against a known map. The initial global uncertainty is represented by a set of pose particles drawn at random and uniformly over the entire pose space. When the robot senses a feature, each particle is given an importance weight based on how likely that measurement is from the particle's pose, given the map data. Further motion leads to another resampling step in which a new particle set is generated. Ultimately the particles converge such that sensed features such as walls and corners line up with the given map, and the pose of the robot is estimated with high accuracy.
The size M of the particle set determines the accuracy of the approximation. Increasing the size leads to higher accuracy, but the size is limited by the available computational resources. A common strategy for setting M is to keep sampling particles until the next pair of measurements arrives.
MCL solves the global localization problem, but cannot recover from robot kidnapping or global localization failures. In practice the MCL algorithm may accidentally discard all particles near the correct pose when the robot is kidnapped. Adaptive Monte Carlo Localization (AMCL) solves this problem: by adding random particles to the particle set, the algorithm takes into account the probability that the robot may have been kidnapped. This adds an additional level of robustness.
AMCL vs other algorithms
When comparing the AMCL algorithm to other popular algorithms such as the Extended Kalman Filter (EKF) or Multiple Hypothesis Tracking (MHT), some pros and cons become apparent. For example, the AMCL algorithm is much more robust than the EKF algorithm because it uses raw measurements to determine its pose instead of landmarks. This is especially important when the robot is kidnapped and no landmark is visible nearby. AMCL provides global localization, while EKF and MHT only provide local localization. AMCL is also much easier to implement than the MHT algorithm. A drawback of AMCL is that it is not as efficient in terms of memory and time as EKF and MHT. Nevertheless, it was decided to use the AMCL algorithm because of its robustness, its global localization capability, and because it solves the kidnapped-robot problem, which cannot be solved easily with an EKF.
AMCL Pseudocode
Pseudocode of the Adaptive Monte Carlo Localization is given in Figure 8. It is a variant of the MCL algorithm that adds random particles. The first part is identical to the MCL algorithm: new poses are sampled from old particles using the motion model (line 5) and their importance weights are set according to the measurement model (line 6). In addition, Augmented_MCL calculates the empirical measurement likelihood in line 8 and maintains short-term and long-term averages of this likelihood in lines 10 and 11. During the resampling process a random sample is added with the probability given in line 13. The probability of adding a random sample takes into consideration the divergence between the short- and long-term averages of the measurement likelihood. If the short-term likelihood is better than or equal to the long-term one, no random samples are added; however, if the short-term likelihood is worse, random samples are added in proportion to the quotient of these values. In this way, a sudden decay in measurement likelihood induces an increased number of random samples. The exponential smoothing counteracts the danger of mistaking sensor noise for a poor localization result.
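A minimal sketch of this random-particle injection is given below. It is not the pseudocode of Figure 8 itself: the smoothing rates, the assumed pose-space bounds and the simplified resampling are placeholders; only the w_slow/w_fast bookkeeping and the max(0, 1 − w_fast/w_slow) injection probability follow the description above.

<syntaxhighlight lang="cpp">
#include <vector>
#include <random>
#include <algorithm>
#include <utility>
#include <cstddef>

struct Particle { double x, y, theta, weight; };

// Illustrative AMCL-style resampling step. 'avg_likelihood' is the empirical average
// measurement likelihood of the current particle set (line 8 of the pseudocode);
// w_slow and w_fast are the long- and short-term averages (lines 10 and 11).
void augmentedResample(std::vector<Particle>& particles, double avg_likelihood,
                       double& w_slow, double& w_fast, std::mt19937& rng) {
    const double alpha_slow = 0.001;  // assumed smoothing rates, 0 <= alpha_slow << alpha_fast
    const double alpha_fast = 0.1;

    // Exponential smoothing of the measurement likelihood.
    w_slow += alpha_slow * (avg_likelihood - w_slow);
    w_fast += alpha_fast * (avg_likelihood - w_fast);

    // Probability of injecting a random particle (line 13): max(0, 1 - w_fast / w_slow).
    const double p_random = std::max(0.0, 1.0 - w_fast / w_slow);

    // Weighted resampling distribution over the existing particles.
    std::vector<double> weights;
    for (const Particle& p : particles) weights.push_back(p.weight);
    std::discrete_distribution<std::size_t> draw(weights.begin(), weights.end());

    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_real_distribution<double> px(0.0, 10.0), py(0.0, 10.0);   // assumed map extent
    std::uniform_real_distribution<double> pt(-3.14159265, 3.14159265);

    std::vector<Particle> resampled;
    resampled.reserve(particles.size());
    for (std::size_t i = 0; i < particles.size(); ++i) {
        if (coin(rng) < p_random)
            resampled.push_back({px(rng), py(rng), pt(rng), 1.0});  // random particle: kidnap recovery
        else
            resampled.push_back(particles[draw(rng)]);              // normal importance resampling
    }
    particles = std::move(resampled);
}
</syntaxhighlight>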
Global Path Planning
Artificial potential field Algorithm
Path planning by means of the artificial potential field algorithm treats the robot like a charged particle: the robot is attracted to the target and repelled by obstacles, as can be seen in Figure 6.
The control input is calculated as a weighted sum of the attractive and repulsive contributions. These weights can be adapted to the specific needs of the path planning problem (a minimal sketch of this idea is given after the list below).
Pros:
- Low computation time
- Well suited for parallel computing
Cons:
- The resulting path is not guaranteed to be optimal
- The robot can get stuck in local minima
- Difficult to configure
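As a minimal illustration of the weighted sum mentioned above, the sketch below combines a linear attractive pull towards the goal with an inverse-distance repulsive push away from nearby obstacles. The particular force laws, gains and influence radius are assumptions; the text only states that the attractive and repulsive contributions are weighted and summed.

<syntaxhighlight lang="cpp">
#include <vector>
#include <cmath>

struct Vec2 { double x, y; };

// Illustrative potential-field step: the commanded direction is a weighted sum of an
// attractive pull towards the goal and repulsive pushes away from nearby obstacles.
// The force laws (linear attraction, 1/d^2 repulsion inside an influence radius) and
// the gains are assumptions for illustration only.
Vec2 potentialFieldDirection(const Vec2& robot, const Vec2& goal,
                             const std::vector<Vec2>& obstacles,
                             double k_att = 1.0, double k_rep = 0.5,
                             double influence_radius = 1.0) {
    Vec2 force{ k_att * (goal.x - robot.x), k_att * (goal.y - robot.y) };  // attraction

    for (const Vec2& obs : obstacles) {
        double dx = robot.x - obs.x, dy = robot.y - obs.y;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d > 1e-6 && d < influence_radius) {
            double magnitude = k_rep * (1.0 / d - 1.0 / influence_radius) / (d * d);
            force.x += magnitude * dx / d;   // repulsion away from the obstacle
            force.y += magnitude * dy / d;
        }
    }
    return force;  // follow this direction (after normalisation / velocity limiting)
}
</syntaxhighlight>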
Dijkstra's algorithm
Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a graph. There are many variants of this algorithm, but the most common variant uses a fixed single "start" or "source" node and finds the shortest path to the "end" node, producing a shortest-path tree. It does this with a priority queue which sorts the possible paths by the sum of their weights. The weights are most commonly based on distance, but in, for example, a route planning application the type of road or even traffic jams can be taken into account. The running time of the algorithm in big O notation is O(|E| + |V| log |V|), where |V| is the number of nodes and |E| is the number of edges. This efficiency can be achieved by using a Fibonacci heap as min-priority queue. This is asymptotically the fastest known single-source shortest path algorithm for arbitrary directed graphs with unbounded non-negative weights.
It is quite hard to explain this algorithm without visualization, but what it basically does is the following (a minimal code sketch is given after this list):
- You create a graph with nodes for all the junctions of the possible routes and edges connecting them. Each edge receives a weight, which in our case is equal to the distance.
- A "start" and "end" node need to be given, from which the algorithm starts finding the shortest path to the end node.
- The algorithm tries every possible direction it can go to from the start node and stores these in separate branches.
- From the first branch it does the same thing, but adds the new weight to that of the first branch, which eventually leads to paths with a lower total weight and thus a shorter distance.
- It sorts these paths again by their weight.
- It keeps doing this until one path reaches the end node and no other branch has a lower total weight than this path.
- The algorithm then reconstructs this path and returns it as the shortest path.
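Below is a compact, self-contained sketch of this procedure using a standard min-priority queue ordered by the accumulated path weight; the adjacency-list graph representation and the function name are chosen for illustration only.

<syntaxhighlight lang="cpp">
#include <vector>
#include <queue>
#include <limits>
#include <functional>
#include <utility>

// Illustrative Dijkstra: returns the shortest distance from 'start' to 'goal' in a
// graph given as an adjacency list, where adj[u] holds pairs (v, weight of edge u-v).
double dijkstra(const std::vector<std::vector<std::pair<int, double>>>& adj,
                int start, int goal) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    dist[start] = 0.0;

    // Min-priority queue ordered by accumulated path weight (smallest on top).
    using Entry = std::pair<double, int>;  // (distance so far, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> queue;
    queue.push({0.0, start});

    while (!queue.empty()) {
        auto [d, u] = queue.top();
        queue.pop();
        if (u == goal) return d;        // no unexplored branch can be shorter
        if (d > dist[u]) continue;      // stale entry: a shorter path to u was already found
        for (const auto& [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;  // relax the edge u -> v
                queue.push({dist[v], v});
            }
        }
    }
    return inf;  // goal not reachable
}
</syntaxhighlight>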
A* search algorithm
This algorithm is an extension of Dijkstra's algorithm. It is used in many fields of computer science due to its completeness, optimality and optimal efficiency. One major drawback is its O(b^d) space complexity, as it stores all generated nodes in memory. Therefore it is often outperformed by other algorithms that use less memory or that allow the graph to be pre-processed, which makes them more efficient. This is due to the fact that the A* approach needs to examine all equally meritorious paths to find the optimal path. However, this can be sped up by relaxing the admissibility criterion, which simply means accepting that the found path may not be the ideal one in exchange for a computationally more efficient search.
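The only essential change with respect to the Dijkstra sketch above is the ordering of the priority queue: A* orders nodes by the cost accumulated so far plus a heuristic estimate of the remaining cost. A minimal, self-contained sketch, assuming node positions are available and using the straight-line distance to the goal as the (admissible) heuristic:

<syntaxhighlight lang="cpp">
#include <vector>
#include <queue>
#include <cmath>
#include <limits>
#include <functional>
#include <utility>

struct Node { double x, y; };  // node positions, used only for the heuristic (assumption)

// Illustrative A*: identical to the Dijkstra sketch above except that the priority
// queue is ordered by g(n) + h(n), where h(n) is the straight-line distance to the goal.
double aStar(const std::vector<std::vector<std::pair<int, double>>>& adj,
             const std::vector<Node>& nodes, int start, int goal) {
    auto h = [&](int n) {  // heuristic: Euclidean distance to the goal node
        return std::hypot(nodes[n].x - nodes[goal].x, nodes[n].y - nodes[goal].y);
    };
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> g(adj.size(), inf);
    g[start] = 0.0;

    using Entry = std::pair<double, int>;  // (g + h, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    open.push({h(start), start});

    while (!open.empty()) {
        auto [f, u] = open.top();
        open.pop();
        if (u == goal) return g[u];
        if (f > g[u] + h(u)) continue;  // stale entry
        for (const auto& [v, w] : adj[u]) {
            if (g[u] + w < g[v]) {
                g[v] = g[u] + w;
                open.push({g[v] + h(v), v});  // only difference from Dijkstra: add h(v)
            }
        }
    }
    return inf;  // goal not reachable
}
</syntaxhighlight>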
Visibility graph
The visibility graph creates a path by connecting all vertices on the convex hulls of the polygonal obstacles to each other and to the agent and the goal, provided the connecting line segments do not cross any obstacle. The shortest path through this graph is then the generated path. To accommodate the width of PICO, the robot can be modeled as a disk and the edges of the obstacles should be offset by the radius of that disk.
Pros:
- Guarantees finding the shortest path.
Cons:
- Takes a lot of computing power.
- The graph needs to be recalculated when new objects are discovered.
- Doesn't handle dynamic obstacles well.
Voronoi diagram
The Voronoi diagram creates lines that are equidistant from all neighboring obstacles. These lines and their connections form the graph of all possible routes through the space. An algorithm like Dijkstra's algorithm can then be used to find the optimal path over this graph [3].
Pros:
- Creates the safest path, since the distance to obstacles is maximized.
- The map can be locally updated when dynamic changes are measured.
- Fast algorithm.
- Creates smooth trajectories.
Con:
- This method doesn't guarantee the optimal path.
Local Path Planning
TEB local planner
All information below about the TEB algorithm is taken from [5, 6] and is reworded to fit the format of the wiki and project.
The TEB (Timed Elastic Bands) approach is a method for path planning for autonomous robots that minimizes time while taking (kino-)dynamic constraints, such as maximum velocity and acceleration, into account. Using only the TEB approach will often lead to locally optimal solutions instead of the desired globally optimal solution. This is because the TEB approach smoothly transforms the planned path based on detected obstacles, but does not consider alternative paths. To make sure the path finding algorithm finds the optimal path, the TEB algorithm is extended. This extension enables the path planner to optimize a subset of admissible trajectories of distinctive topologies in parallel. Distinctive topologies in this context means paths that belong to different homology classes. A distinction is made between homotopic paths and homologous paths; the definitions are given below.
Definition: Homotopic Paths
Two paths are homotopic if they have the same start and end points, and can be continuously deformed into each other without intersecting any obstacles. A homotopy class is then the set of all paths that are homotopic to each other.
Definition: Homologous Paths
Two paths τ<sub>1</sub> and τ<sub>2</sub> with the same start and end point are homologous if and only if τ<sub>1</sub> ⊔ −τ<sub>2</sub> forms the complete boundary of a 2D manifold embedded in C that neither contains nor intersects any obstacle. A homology class is then the set of all paths homologous to each other.
Homology classes are easier to compute and therefore better suited for an online path planner for autonomous robots. This is the reason why homology classes are used instead of homotopy classes for the path finding algorithm. The extended TEB algorithm then optimizes (with respect to time) multiple paths in parallel, all belonging to different homology classes, and has the ability to switch to the currently globally optimal solution.
The extended TEB algorithm consists of two parts: first a part that finds alternative homology classes and secondly a part that optimizes the path in each homology class. Both parts are explained below.
Part 1: Identifying different homology classes
In order to find different homology classes, each acceptable path gets an associated H-signature. This is a number that is the same for every path within a certain homology class but differs between classes. The paths are discretized and therefore consist of line segments connecting consecutive points z<sub>k</sub> and z<sub>k+1</sub>.
There are different ways to explore acceptable paths and calculate the H-signatures. For example, using a Voronoi diagram gives a complete exploration graph containing all feasible paths. However, using a Voronoi diagram is very expensive in terms of computation. Therefore a sampling-based approach is used. This approach creates a closed area between the start and end point and uniformly samples waypoints within this area. Waypoints that intersect with any obstacle region are removed and new waypoints are sampled until the set number of waypoints is reached. The placed waypoints are then connected to each other if the connection decreases the Euclidean distance to the goal and the connecting line segment does not intersect an obstacle. The H-signature for each constructed path from the start to the end point is calculated and the different homology classes are determined. The sampling of the waypoints is repeated at each step in order to discover new homology classes that were previously not yet detected. Figure 9 shows the pseudocode of the algorithm used to identify the different homology classes. Figure 10 shows an example region with sampled waypoints, where z<sub>s</sub> is the starting point and z<sub>f</sub> is the end point.
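The waypoint sampling and connection step could be sketched roughly as follows. This is not the pseudocode of Figure 9; the rectangular sampling region, the circular obstacle model and the helper names are illustrative assumptions that only mirror the rules stated above (resample waypoints that hit obstacles, connect a waypoint only if the connection decreases the Euclidean distance to the goal and the segment is collision-free).

<syntaxhighlight lang="cpp">
#include <vector>
#include <random>
#include <cmath>
#include <algorithm>
#include <utility>
#include <cstddef>

struct Point { double x, y; };
struct Circle { Point c; double r; };  // obstacles modeled as circles (assumption)

double dist(const Point& a, const Point& b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Crude segment-circle intersection test used to reject connections through obstacles.
bool segmentHitsObstacle(const Point& a, const Point& b, const std::vector<Circle>& obs) {
    for (const Circle& o : obs) {
        // Project the obstacle centre onto the segment and check the distance.
        double dx = b.x - a.x, dy = b.y - a.y;
        double len2 = dx * dx + dy * dy;
        double t = len2 > 0.0 ? ((o.c.x - a.x) * dx + (o.c.y - a.y) * dy) / len2 : 0.0;
        t = std::max(0.0, std::min(1.0, t));
        Point closest{ a.x + t * dx, a.y + t * dy };
        if (dist(closest, o.c) < o.r) return true;
    }
    return false;
}

// Sample collision-free waypoints in the rectangle spanned by start and goal, and
// connect each waypoint to those that bring it closer to the goal along a free segment.
std::vector<std::pair<int, int>> sampleAndConnect(const Point& zs, const Point& zf,
                                                  const std::vector<Circle>& obs,
                                                  int n_waypoints, std::mt19937& rng) {
    std::uniform_real_distribution<double> ux(std::min(zs.x, zf.x), std::max(zs.x, zf.x));
    std::uniform_real_distribution<double> uy(std::min(zs.y, zf.y), std::max(zs.y, zf.y));

    std::vector<Point> pts{ zs, zf };
    while (static_cast<int>(pts.size()) < n_waypoints + 2) {
        Point p{ ux(rng), uy(rng) };
        bool inside = false;
        for (const Circle& o : obs) inside = inside || dist(p, o.c) < o.r;
        if (!inside) pts.push_back(p);  // resample until the waypoint is obstacle-free
    }

    std::vector<std::pair<int, int>> edges;
    for (std::size_t i = 0; i < pts.size(); ++i)
        for (std::size_t j = 0; j < pts.size(); ++j)
            if (i != j && dist(pts[j], zf) < dist(pts[i], zf) &&  // moves closer to the goal
                !segmentHitsObstacle(pts[i], pts[j], obs))        // and the segment is free
                edges.push_back({static_cast<int>(i), static_cast<int>(j)});
    return edges;
}
</syntaxhighlight>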
Part 2: Optimizing the path within a homology class
To optimize the path within each homology class the TEB algorithm is used. The TEB algorithm is implemented as an approximate optimizer that uses user-defined weights for the cost function; this is done to improve the computational efficiency of the algorithm. Figure 11 shows the implementation of the TEB algorithm in combination with Algorithm 1 (line 10).
Although the extended TEB algorithm could be used as a global path planning algorithm, this would be very expensive in terms of computation. Therefore the extended TEB algorithm is used as a local path planner, to avoid unexpected obstacles.
The extended TEB algorithm is a good choice for the hospital challenge since it handles dynamic obstacles very well. Furthermore, the goal of the hospital challenge is to complete the tasks as fast as possible, so a path planning algorithm that minimizes time is very useful. The final reason to use the extended TEB algorithm is that it is easy to implement using ROS and the "teb_local_planner" package.
References
[4] Thrun, S. (2002). Probabilistic robotics. Communications of the ACM, 45(3), 52–57. https://doi.org/10.1145/504729.504754
[5] http://wiki.ros.org/teb_local_planner
[6] https://www.sciencedirect.com/science/article/pii/S0921889016300495