Mobile Robot Control 2023 Group 10


Group members:

Name              Student ID
Jelle Cruijsen    1369261
Florian Geister   1964429
Omar Elba         1492071

Exercise 1 (Don't crash)

  1. There is noise present in the robot's laser data, both on the real robot and in the simulation environment, which causes a small jitter in the observed laser distances. We expect this noise to be negligible due to its small variance. The laser can see every object at the height of the laser rays up to roughly 25 meters, so essentially the whole room is visible. One limitation is the viewing angle between the minimum and maximum angle: the robot cannot see behind itself. Other limitations are that objects behind other objects cannot be seen and that the number of laser detection points limits the resolution. Our own legs appear as lines of dots and are detected in real time; if we move, the lines of dots move as well.
  2. Did that.
  3. The robot behaved as expected in simulation.
  4. Video from Flo: https://drive.google.com/file/d/171WYadXPzqqVjb2qXL5PLcXLRiuZv1NQ/view?usp=share_link

We noticed that the robot would stop driving when it was parallel to a wall. We therefore changed our code to only scan in a cone in front of the robot, so that objects outside this cone are not treated as obstacles.
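
A minimal sketch of this cone-based check is shown below. The LaserScan struct, the cone half-angle, and the stop distance are illustrative stand-ins for the interface and values used in our actual code.

  #include <cmath>
  #include <cstddef>
  #include <vector>

  // Hypothetical scan container; the real robot provides equivalent fields
  // (per-ray distances plus the angle of the first ray and the angular step).
  struct LaserScan {
      std::vector<double> ranges;   // measured distance per ray [m]
      double angle_min;             // angle of ranges[0] [rad]
      double angle_increment;       // angular step between rays [rad]
  };

  // Returns true if any ray inside a cone of +/- cone_half_angle around the
  // robot's forward direction measures a distance below stop_distance.
  bool obstacleInFrontCone(const LaserScan& scan,
                           double cone_half_angle = 0.5,  // [rad], assumed value
                           double stop_distance = 0.4)    // [m], assumed value
  {
      for (std::size_t i = 0; i < scan.ranges.size(); ++i) {
          const double angle = scan.angle_min + i * scan.angle_increment;
          if (std::fabs(angle) > cone_half_angle)
              continue;                      // ray falls outside the front cone
          if (scan.ranges[i] > 0.01 && scan.ranges[i] < stop_distance)
              return true;                   // valid ray closer than the limit
      }
      return false;
  }

In the main control loop, the robot then only keeps sending a forward velocity command as long as this check returns false.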


Exercise 2 (A* navigation)

The coding for this exercise was split up into three distinct parts. Short descriptions of the solutions to each of these three parts are given below, followed by a condensed code sketch.

  1. In this part, the node which has to be expanded next is found. This is done with a simple for loop, which looks at the open nodes and chooses the node with the minimal total cost (we will call this node nodeMin).
  2. In this part, the neighboring nodes of nodeMin are explored and updated if necessary. Only the nodes that are not closed yet are considered. If a neighboring node is not yet opened, it is added to the open nodes list. The node's distance from the start is calculated by adding the start distance of nodeMin to the distance between nodeMin and its neighbor. The neighboring node's distance to the start is only updated if this calculated distance is lower than the previously saved distance. In that case, the total cost of the neighboring node is updated and its parent node is set to nodeMin.
  3. In this part, the optimal path is determined. Once the finish node is reached, it is added to the path node list. Then, all of the previous parent nodes are visited in a while loop and added to the path node list. This while loop stops running when the visited node is the starting node. Finally, the path node list is reversed, in order to get the nodes from start to finish.
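
The sketch below condenses the three parts described above into one function. The Node layout, the field names, and the Euclidean edge cost are assumptions made for the sake of a self-contained example; they do not necessarily match our actual implementation.

  #include <algorithm>
  #include <cmath>
  #include <limits>
  #include <vector>

  // Minimal node layout assumed for this sketch; the real node list stores
  // the same bookkeeping (costs, parent index, open/closed flags).
  struct Node {
      double x = 0.0, y = 0.0;       // node position [m]
      std::vector<int> neighbours;   // indices of connected nodes
      double g = std::numeric_limits<double>::infinity();  // cost from start
      double h = 0.0;                                       // heuristic to finish
      double f = std::numeric_limits<double>::infinity();   // total cost g + h
      int    parent = -1;
      bool   open = false, closed = false;
  };

  std::vector<int> aStar(std::vector<Node>& nodes, int start, int finish)
  {
      for (Node& n : nodes)          // straight-line distance as heuristic
          n.h = std::hypot(n.x - nodes[finish].x, n.y - nodes[finish].y);
      nodes[start].g = 0.0;
      nodes[start].f = nodes[start].h;
      nodes[start].open = true;

      while (true) {
          // Part 1: pick the open node with the minimal total cost (nodeMin).
          int nodeMin = -1;
          for (std::size_t i = 0; i < nodes.size(); ++i)
              if (nodes[i].open && (nodeMin < 0 || nodes[i].f < nodes[nodeMin].f))
                  nodeMin = static_cast<int>(i);
          if (nodeMin < 0 || nodeMin == finish) break;  // no open nodes, or goal reached

          nodes[nodeMin].open = false;
          nodes[nodeMin].closed = true;

          // Part 2: update the not-yet-closed neighbours of nodeMin.
          for (int nb : nodes[nodeMin].neighbours) {
              if (nodes[nb].closed) continue;
              const double g_new = nodes[nodeMin].g
                  + std::hypot(nodes[nb].x - nodes[nodeMin].x,
                               nodes[nb].y - nodes[nodeMin].y);
              if (g_new < nodes[nb].g) {                // shorter route found
                  nodes[nb].g = g_new;
                  nodes[nb].f = g_new + nodes[nb].h;
                  nodes[nb].parent = nodeMin;
                  nodes[nb].open = true;
              }
          }
      }

      // Part 3: walk back over the parent nodes from finish to start, then
      // reverse (if the finish was never reached, this yields only the finish node).
      std::vector<int> path;
      for (int n = finish; n != -1; n = nodes[n].parent) path.push_back(n);
      std::reverse(path.begin(), path.end());
      return path;
  }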

Increasing efficiency of algorithm:

The A* algorithm used can be made more efficient by reducing the number of nodes that are computed. Currently, every node is stored. If the robot goes straight, no new nodes need to be computed until an intersection with multiple node options appears. By doing this, only nodes that lead to a new decision by the robot are computed. In the image below we can see the start (blue) and end (green) nodes. The blue lines in between are the paths the robot can follow. The orange points are the only nodes that should be computed and stored; the other nodes can be dismissed. A small sketch of this pruning idea is given below the image link.

Image of the maze: https://drive.google.com/file/d/1_e8r0qw3u7ObbJdcfGqWqJiNKzSvloUa/view?usp=sharing
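
The sketch below illustrates the pruning idea on a grid maze: only the start, the finish, and cells where a new decision has to be made are kept as nodes, while cells along a straight path are skipped. The grid representation, indexing, and function name are purely illustrative.

  #include <vector>

  // Keep only the start, the finish, and "decision" cells with more than two
  // free neighbouring cells (i.e. intersections with multiple options).
  std::vector<int> decisionNodes(const std::vector<std::vector<bool>>& free,
                                 int start_r, int start_c, int goal_r, int goal_c)
  {
      const int rows = static_cast<int>(free.size());
      const int cols = static_cast<int>(free[0].size());
      std::vector<int> kept;                       // flattened indices r * cols + c

      for (int r = 0; r < rows; ++r) {
          for (int c = 0; c < cols; ++c) {
              if (!free[r][c]) continue;           // wall cell, never a node

              int open_dirs = 0;                   // count traversable neighbours
              if (r > 0        && free[r - 1][c]) ++open_dirs;
              if (r + 1 < rows && free[r + 1][c]) ++open_dirs;
              if (c > 0        && free[r][c - 1]) ++open_dirs;
              if (c + 1 < cols && free[r][c + 1]) ++open_dirs;

              const bool is_endpoint = (r == start_r && c == start_c) ||
                                       (r == goal_r  && c == goal_c);
              // Cells along a straight path offer no new decision and are
              // skipped; intersections with more than two options are kept.
              if (is_endpoint || open_dirs > 2)
                  kept.push_back(r * cols + c);
          }
      }
      return kept;
  }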

Exercise 3 (Corridor)

Main idea:

In order to move through the corridor while avoiding the obstacles, we need to find gaps through which the robot can move once it sees an obstacle. Initially, the robot scans the maximum laser range and then rotates and starts moving in the direction of the longest seen distance. If it sees an obstacle (as in Exercise 1), it scans a range between a chosen minimum and maximum angle (divided into indices) and measures the distances. Once it finds a gap that is wide enough to pass through, it calculates the angle it needs to rotate in order to match the desired direction. It then rotates to that direction and moves forward until it sees the next obstacle. This is repeated until the goal is reached. When no wide enough gap is found but the robot did stop for an obstacle, it rotates slightly towards the side where the sum of the total seen distances (within the front-facing scanning cone) is largest. However, this behaviour can lead to problems when the robot encounters a corner. Therefore, if there has not been any significant movement (checked through the odometry sensors) for a certain number of loop cycles, the scanning cone radius is temporarily increased in order to find a way out of the corner. A sketch of the gap-finding step is given below.
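
The sketch below shows the gap-finding step. The LaserScan layout (the same assumed layout as in the Exercise 1 sketch), the clearance threshold, and the width check are simplified stand-ins for the quantities used in our actual code.

  #include <cstddef>
  #include <vector>

  struct LaserScan {
      std::vector<double> ranges;   // distance per ray [m]
      double angle_min;             // angle of ranges[0] [rad]
      double angle_increment;       // angular step between rays [rad]
  };

  // Searches the front-facing part of the scan for the first gap that is wide
  // enough for the robot, and returns the heading (relative to the robot's
  // current orientation) towards the centre of that gap. Returns false if no
  // sufficiently wide gap exists. Thresholds are assumed, not tuned values.
  bool findGapHeading(const LaserScan& scan,
                      std::size_t i_min, std::size_t i_max,
                      double clearance,      // rays further than this count as free [m]
                      double robot_width,    // required passage width [m]
                      double& heading_out)
  {
      std::size_t run_start = 0;
      bool in_run = false;

      for (std::size_t i = i_min; i <= i_max && i < scan.ranges.size(); ++i) {
          const bool free_ray = scan.ranges[i] > clearance;
          if (free_ray && !in_run) { run_start = i; in_run = true; }

          const bool run_ends = in_run && (!free_ray || i == i_max);
          if (!run_ends) continue;

          const std::size_t run_end = free_ray ? i : i - 1;
          const double angular_width = (run_end - run_start) * scan.angle_increment;
          // Width of the opening at the clearance distance (arc approximation).
          if (clearance * angular_width >= robot_width) {
              const double mid = 0.5 * (run_start + run_end);
              heading_out = scan.angle_min + mid * scan.angle_increment;
              return true;      // rotate by heading_out, then drive forward
          }
          in_run = false;
      }
      return false;             // no wide enough gap: fall back to the side with
                                // the largest summed distance (see text above)
  }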


Robot in real life: https://drive.google.com/file/d/1H8rbXfeSw0uKU0QaPlFkiuvFDaeXu5cL/view?usp=share_link

Robot in simulation: https://drive.google.com/file/d/10BUt2zd2uM6V-hQnpVT2NV7N-833scQ2/view?usp=sharing


Exercise 4 (Odometry data, Localisation Assignment 1)

  • Keep track of our location: The code performs as expected; the current odometry data is printed in each iteration of the while loop, and in addition the difference in position with respect to the previous iteration is printed as well. A minimal sketch of this bookkeeping is given at the end of this exercise.
  • Observe the behaviour in simulation:
    1. In order to assess the accuracy of our method, the simulation was used with the uncertain_odom option set to false. To assess the accuracy of the angle, the robot was rotated in place and the angle was printed. The printed angle was checked by looking at the value every 90 degrees; after a full 360 degree turn, the angle returned to 0. Furthermore, to assess the accuracy of the x and y odometry data, a map with a known height and width was created. The robot was initially positioned in a corner of the map so that it could drive the entire length and width of the map. The distance reported by the odometry data was then compared to the known map dimensions. Overall, the results of the robot odometry data were deemed accurate.
    2. When the uncertain_odom option is set to true, the coordinate frame of the robot is both rotated and shifted. The starting angle (pointing in the direction of the x-axis) is therefore no longer zero, and the starting x,y-position is nonzero as well. As a result, driving straight ahead when the simulation starts now results in a change in both the x- and y-direction, instead of only a change in the x-direction as was the case with uncertain_odom set to false.
    3. Based on the simulation data collected thus far, this approach would be a viable option for the final challenge, since it outputs accurate data and provides valuable information for the algorithms to be run on the robot. However, the approach still needs to be validated on the real robot, in order to determine whether the results are as accurate as those found in simulation, before making a final decision.
  • Observe the behaviour in reality:

The behaviour of the odometry data was also observed in reality, in order to compare it to the results obtained in simulation. This was done using a method similar to the one used to assess the accuracy of the odometry data in simulation. The robot (Coco) was driven for a specific distance in the x and y directions, and this distance was measured with a measuring tape so that it could be compared to the values obtained from the odometry data. For the angular data, the robot was rotated a full 360 degrees and the discrepancies were noted. It was observed that, when first running the code on the robot, there was noise present in the data, unlike in the simulation where there was essentially no noise. This noise was more noticeable in the angular data, which contained noise on the order of 10^-6, whereas for the positional data the noise was on the order of 10^-12. When tested, the robot was moved 300 cm in the x and y directions according to the odometry data. The measured distance was around 294 cm, which means there was about a 6 cm difference between the data and reality (around 2% error). Furthermore, for a full rotation there was a discrepancy of around 0.2 radians. These discrepancies between the odometry data and reality are mostly due to measurement noise in reality and to the robot being subject to wheel slip when operated in real life, which causes the robot to move less than its measurements indicate.
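
Below is a minimal sketch of the bookkeeping used to keep track of the location: the previous odometry pose is stored each iteration and the difference with the current pose is printed. The OdometryData struct and the function names are illustrative; the real code reads this data from the robot's odometry interface.

  #include <cstdio>

  // Stand-in for the odometry data read from the robot each loop iteration.
  struct OdometryData {
      double x = 0.0, y = 0.0, a = 0.0;   // position [m] and orientation [rad]
  };

  constexpr double kPi = 3.14159265358979323846;

  // Wrap an angle difference to (-pi, pi] so a full turn does not show up
  // as a jump of 2*pi.
  double wrapAngle(double da)
  {
      while (da >  kPi) da -= 2.0 * kPi;
      while (da <= -kPi) da += 2.0 * kPi;
      return da;
  }

  void reportOdometryStep(const OdometryData& current, OdometryData& previous)
  {
      const double dx = current.x - previous.x;
      const double dy = current.y - previous.y;
      const double da = wrapAngle(current.a - previous.a);

      std::printf("pose: x=%.3f y=%.3f a=%.3f | step: dx=%.4f dy=%.4f da=%.4f\n",
                  current.x, current.y, current.a, dx, dy, da);

      previous = current;   // remember this pose for the next iteration
  }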

Exercise 5 (Localisation Assignment 2)

Assignment 0: Explore the code-base

What is the difference between the ParticleFilter and ParticleFilterBase classes, and how are they related to each other?

The ParticleFilterBase class defines the basic helper functions that are used in the ParticleFilter class. The ParticleFilter class then extends this functionality, presumably via inheritance from ParticleFilterBase, re-using the base class's helper functions.
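
If the relationship is indeed inheritance, the structure would look roughly like the sketch below (class and method names are illustrative and do not match the actual code-base).

  // Illustrative only: ParticleFilterBase provides shared helper functionality,
  // and ParticleFilter builds the full filter on top of it.
  class ParticleFilterBase {
  public:
      virtual ~ParticleFilterBase() = default;
      void propagateSamples() { /* shared helper used by the derived class */ }
  };

  class ParticleFilter : public ParticleFilterBase {
  public:
      void update() {
          propagateSamples();   // re-use the base-class helper
          // ... resampling and other steps specific to the full filter ...
      }
  };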


How are the ParticleFilter and Particle classes related to each other?


Both the ParticleFilter and Particle classes implement a propagation method. What is the difference between the methods?

Assignment 1: Initialize the Particle Filter

What is the difference between the two constructors?


What are the advantages/disadvantages of using the first constructor, what are the advantages/disadvantages of the second one?


In which cases would we use either of them?

Assignment 2: Calculate the filter prediction

Interpret the resulting filter average. What does it resemble? Is the estimated robot pose correct? Why?


Imagine a case in which the filter average is inadequate for determining the robot position.

Assignment 3: Propagation of Particles

Why do we need to inject noise into the propagation when the received odometry information already has an unknown noise component?


What happens when we stop here, and do not incorporate a correction step?

Assignment 4: Computation of the likelihood of a Particle

What does each component of the measurement model represent, and why is each necessary?


With each particle having N >> 1 rays, and each likelihood being ∈ [0, 1], where could you see an issue given our current implementation of the likelihood computation?

Assignment 5: Resampling our Particles

How accurate is the implemented algorithm?


What are strengths and weaknesses?