Mobile Robot Control 2021 Group 3
Revision as of 20:36, 14 May 2021
Group Members
Students
Tutor
Jordy Senden
j.p.f.senden@tue.nl
Introduction
Due to COVID-19, the pressure on hospitals has increased enormously, exposing the ongoing shortage of medical personnel. This reduces the care that can be provided to those in need of medical attention. To reduce the workload of nurses, robotic devices could be implemented to assist in, for example, the retrieval of patients' medicines. During this course, PICO's software is developed with this purpose in mind. In the first part, the basics of the software are developed and tested during the Escape Room Challenge. In the second part, more detailed software design is employed for the Hospital Challenge.
Design Document
To get an overview of the project, a design document was created, which can be found here. In this document, the requirements, the components and specifications, and the functions and interfaces are described. Its content is used to better understand what will be done in the Escape Room Challenge and the Hospital Challenge.
Escape Room Challenge
Introduction
In this year's version of the course, we use a simulation that reproduces the behavior of the real PICO robot.
The first major milestone of this course is the Escape Room Challenge in which we are faced with the task of driving our robot out of a square room, through a corridor. This environment is said to have walls that are not perfectly straight and corners that are not perfectly perpendicular. This requires a more robust design of our code, such that if, for example, the robot encounters a slightly different corner angle, it would still operate successfully. In order to achieve this goal, we can use the data that is coming from the laser scanner, as well as the data coming from the encoders attached to the wheels of the robot.
We are given two trials to complete the challenge. A trial ends if our robot does at least one of the following:
- Bumps into a wall (a slight touch may be allowed if the judges consider it acceptable);
- Has not moved or performed any action for 30 seconds;
- Spends more than 5 minutes in the room in total.
The challenge is completed if the robot does not bump into any wall, respects the time limitations, and the entire rear wheel of the robot has passed the finish line, which is placed at least 3 m into the corridor.
The robot's behavior
In the figure presented on the right, one can visualize the state machine that defines the robot's behavior for the Escape Room Challenge. PICO starts by scanning its environment while turning a maximum of 180 degrees. Since PICO has a viewing range of 4 rad, corresponding to approximately 230 degrees, turning 180 degrees results in a view of the entire environment. If the corridor is detected during this rotation, the middle of the corridor is set as a target. PICO then starts driving toward the target, while continuously scanning and aligning with the middle of the corridor. Therefore, PICO is already fully aligned with the entrance of the corridor upon arrival and can simply drive into it.

If PICO gets too close to a wall in the corridor, it starts following the wall algorithm. The wall algorithm first aligns the robot with the closest wall, after which the robot drives alongside this wall. PICO only turns if it encounters a corner or another corridor. It also continuously corrects its position relative to the wall, so that if the detected wall is not entirely perpendicular to the robot's front direction, it adjusts its position accordingly.

If PICO does not find a corridor during the initial scanning procedure, it rotates away from the closest corner and starts driving forward. While driving, PICO keeps scanning; if it detects a corridor, it targets the middle of the corridor and follows the same process described above. In the unfortunate case that PICO reaches a wall without detecting a corridor, it starts following the previously described wall algorithm.
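The transition logic described above can be summarized in a small state-machine skeleton. The state names and the exact transition conditions below are illustrative paraphrases of the behavior, not the actual PICO code:

```cpp
// Illustrative skeleton of the Escape Room state machine described above;
// state names and transition conditions are paraphrased assumptions.
enum class State { Scan, DriveToCorridor, FollowWall, DriveForward, InCorridor };

// Inputs the transition logic needs each cycle (an assumed interface,
// matching the behaviors described in the text).
struct Observation {
    bool corridorDetected;   // corridor found in the current scan
    bool wallTooClose;       // closer than the stop radius to a wall
    bool scanComplete;       // finished the initial 180-degree rotation
    bool atCorridorEntrance; // aligned with and at the corridor entrance
};

State nextState(State s, const Observation& o)
{
    switch (s) {
    case State::Scan:
        if (o.corridorDetected) return State::DriveToCorridor;
        if (o.scanComplete)     return State::DriveForward;  // rotate away, drive
        return State::Scan;
    case State::DriveToCorridor:
        if (o.wallTooClose)       return State::FollowWall;
        if (o.atCorridorEntrance) return State::InCorridor;
        return State::DriveToCorridor;
    case State::DriveForward:
        if (o.corridorDetected) return State::DriveToCorridor;
        if (o.wallTooClose)     return State::FollowWall;
        return State::DriveForward;
    case State::FollowWall:
        if (o.corridorDetected) return State::DriveToCorridor;
        return State::FollowWall;
    case State::InCorridor:
        return State::InCorridor;
    }
    return s;
}
```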
Corridor detection
In the escape room challenge, the only goal is to detect a corridor and drive through it. With this in mind, obstacle detection and orientation are not considered relevant for this specific challenge. The software should be able to:
- Divide the LRF data into data segments.
- Fit lines through the segments.
- Use these lines to compute edges and corners.
- Determine target positions.
Data segmentation
First, distances outside the range of the LRF, corresponding to 0.1 - 10 m, are removed. The remaining data points are transformed from polar to Cartesian coordinates using

x_i = ρ_i cos(θ_i),  y_i = ρ_i sin(θ_i),  with θ_i = θ_min + i·Δθ,

where ρ_i is the distance between PICO and an obstacle, θ_i is the angle of the obstacle with respect to the robot, θ_min = -2 rad is the lowest angle that can be measured, i is the beam index (0 to 999), and Δθ = 4/1000 rad is the angular increment.
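As an illustration of this conversion, using the LRF constants given above (θ_min = -2 rad, 1000 beams, increment 4/1000 rad), a minimal sketch could look like the following; the function and variable names are our own:

```cpp
#include <cmath>
#include <vector>

struct Point { double x; double y; };

// LRF conventions from the text: theta_min = -2 rad, 1000 beams,
// angular increment of 4/1000 rad.
constexpr double kAngleMin = -2.0;
constexpr double kAngleIncrement = 4.0 / 1000.0;

// Convert one LRF reading (beam index i, distance rho) to Cartesian
// coordinates in the robot frame:
// x_i = rho_i * cos(theta_i), y_i = rho_i * sin(theta_i).
Point polarToCartesian(int i, double rho)
{
    const double theta = kAngleMin + i * kAngleIncrement;
    return {rho * std::cos(theta), rho * std::sin(theta)};
}

// Drop readings outside the valid range (0.1 - 10 m), convert the rest.
std::vector<Point> segmentInput(const std::vector<double>& ranges)
{
    std::vector<Point> points;
    for (int i = 0; i < static_cast<int>(ranges.size()); ++i) {
        if (ranges[i] >= 0.1 && ranges[i] <= 10.0) {
            points.push_back(polarToCartesian(i, ranges[i]));
        }
    }
    return points;
}
```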
The data segmentation is done using the split-and-merge algorithm, which detects line segments that in this case represent the walls. The segmentation proceeds as follows:
- The indices of the first and last data points of the LRF data are used as i_start and i_end.
- For every point i between i_start and i_end, calculate the perpendicular distance d_i from (x_i, y_i) to the line through (x_start, y_start) and (x_end, y_end):

d_i = |(y_end − y_start)·x_i − (x_end − x_start)·y_i + x_end·y_start − y_end·x_start| / √((x_end − x_start)² + (y_end − y_start)²)

- Find the maximum distance calculated in step 2 and its corresponding index.
- If this maximum distance is above a certain threshold, split the line at that index and recursively apply step 2 to the segments before and after it. Keep repeating this process until every remaining i_start and i_end are adjacent or no distance exceeds the threshold.
- If the maximum distance is below the threshold, a line is found: save the current i_start and i_end.
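The split step described above can be sketched as a short recursive routine. This is a minimal illustration of the technique under the stated threshold logic, not the exact code running on PICO; it assumes the two endpoints of each candidate segment are distinct points:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Pt { double x; double y; };

// Perpendicular distance from point p to the line through a and b
// (assumes a != b).
double perpDistance(const Pt& p, const Pt& a, const Pt& b)
{
    const double dx = b.x - a.x, dy = b.y - a.y;
    return std::abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) /
           std::hypot(dx, dy);
}

// Recursive "split" step: if the farthest point between iStart and iEnd
// exceeds the threshold, split there and recurse on both halves;
// otherwise store (iStart, iEnd) as one line segment.
void split(const std::vector<Pt>& pts, std::size_t iStart, std::size_t iEnd,
           double threshold,
           std::vector<std::pair<std::size_t, std::size_t>>& segments)
{
    if (iEnd <= iStart + 1) {  // adjacent indices: nothing left to split
        segments.push_back({iStart, iEnd});
        return;
    }
    double dMax = 0.0;
    std::size_t iMax = iStart;
    for (std::size_t i = iStart + 1; i < iEnd; ++i) {
        const double d = perpDistance(pts[i], pts[iStart], pts[iEnd]);
        if (d > dMax) { dMax = d; iMax = i; }
    }
    if (dMax > threshold) {
        split(pts, iStart, iMax, threshold, segments);
        split(pts, iMax, iEnd, threshold, segments);
    } else {
        segments.push_back({iStart, iEnd});
    }
}
```

Applied to an L-shaped set of points, the routine splits at the corner and returns one segment per wall.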
After all segments are found, a line is drawn between the start and end points of each individual segment. These lines can now be used to compute edges and corners. First, the angle between two lines is calculated from their direction vectors.
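One common way to obtain the angle between two such lines, sketched here as a hypothetical helper (the exact formula used on the robot may differ), is via the dot product of their direction vectors:

```cpp
#include <cmath>

// Angle between two line segments, each given by its start and end point.
// Computed from the dot product of the direction vectors; returns a value
// in [0, pi]. Assumes both segments have nonzero length.
double angleBetween(double ax0, double ay0, double ax1, double ay1,
                    double bx0, double by0, double bx1, double by1)
{
    const double ux = ax1 - ax0, uy = ay1 - ay0;
    const double vx = bx1 - bx0, vy = by1 - by0;
    const double dot = ux * vx + uy * vy;
    const double norms = std::hypot(ux, uy) * std::hypot(vx, vy);
    return std::acos(dot / norms);
}
```

A corner can then be flagged wherever the angle between two adjacent segment lines exceeds some threshold.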
//under construction :)
Before heading to the software description of the laser scanner part, one can find the image on the right, in which the range of the laser scanner is presented. The minimum angle at which the robot can sense is -2 radians (approximately -114.6 degrees) and the maximum angle is 2 radians (114.6 degrees), with respect to the x-direction, i.e. the front direction of the robot. This gives a total range of 4 radians, or 229.2 degrees, in which the sensor can collect data from the environment.
Another key aspect is the maximum scanning range in the radial direction, which is bounded at 10 m, while the minimum range is 0.1 m.
The angle_increment variable, the angular difference between two consecutive laser rays, is approximately 0.004 radians. This information is used later to calculate the angle at which the robot has to rotate, based on the data coming from a certain laser ray.
We now describe the software choices and steps that led us to accomplishing the goal of the challenge:
- We start by creating a struct whose members are two double variables, called distance and angle, and a boolean variable called found. The use of each member is explained below.
- We then create a class called LRFT that initializes the data collected by the laser scanner. Under this class, two boolean functions are defined: available(), which checks whether there is data coming from the laser, and Wall_in_stop_radius, which checks whether the robot is positioned too close to a wall, meaning closer than a defined value of 0.35 m.
- Another function is called Closest_to_wall. Here, the members of the struct are filled in as follows. The variable distance returns the distance sensed by the laser scanner, which lies between the minimum and maximum range. This prevents the use of irrelevant data, which appears when objects are out of range and the values carry no useful information. The variable angle returns the angle of a certain direction towards a sensed point; it is computed from the minimal angle, the increment, and the index at which the measurement was collected. The variable found returns whether a useful value has been found in the range covered by the scanner.
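A minimal sketch of the struct and class described above could look as follows. The member layout and internal representation are assumptions: on the robot the class wraps the laser data provided by the framework, while here a plain vector of ranges stands in for it.

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Struct described in the text; the member order is an assumption.
struct Measurement {
    double distance;  // distance to the sensed point [m]
    double angle;     // beam angle relative to the robot's front [rad]
    bool found;       // whether a valid in-range reading was found
};

// Sketch of the LRFT class from the text, with a vector of ranges
// standing in for the real laser data.
class LRFT {
public:
    explicit LRFT(std::vector<double> ranges) : ranges_(std::move(ranges)) {}

    // True if laser data is present.
    bool available() const { return !ranges_.empty(); }

    // True if any wall is closer than the 0.35 m stop radius.
    bool Wall_in_stop_radius() const
    {
        for (double r : ranges_) {
            if (r >= kRangeMin && r < kStopRadius) return true;
        }
        return false;
    }

    // Closest in-range point: its distance, its angle
    // (theta_min + i * increment), and whether anything valid was found.
    Measurement Closest_to_wall() const
    {
        Measurement m{kRangeMax, 0.0, false};
        for (std::size_t i = 0; i < ranges_.size(); ++i) {
            const double r = ranges_[i];
            if (r >= kRangeMin && r <= kRangeMax && r < m.distance) {
                m = {r, kAngleMin + static_cast<double>(i) * kIncrement, true};
            }
        }
        return m;
    }

private:
    static constexpr double kRangeMin = 0.1;
    static constexpr double kRangeMax = 10.0;
    static constexpr double kStopRadius = 0.35;
    static constexpr double kAngleMin = -2.0;
    static constexpr double kIncrement = 4.0 / 1000.0;
    std::vector<double> ranges_;
};
```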
Results of the Escape Room Challenge
On May 12th 2021, all groups of the course 4SC020 Mobile Robot Control performed their fastest and/or most reliable version of how a PICO robot should exit a room through the only corridor present. Figure 3 shows the execution of the Escape Room Challenge by Group 3. As can be seen, the corridor through which PICO needs to leave is behind it. At the start, the robot turns counterclockwise to scan for the angles indicating a corridor. However, as can be seen more clearly in Figure 4, the robot thinks it has found a corridor right from the start. This is not actually the case, as the corridor is outside the LRF's field of view at that point in time; what PICO has found are the angles on the wall to the bottom left of the robot. It drives towards that, but since PICO is programmed to keep scanning its surroundings, it is able to correct itself when it finds the sharper angles of the actual corridor. Following this, it aligns itself with the entrance of the corridor before driving towards the exit. Since PICO is able to align itself with the middle of the entrance, it can simply drive straight into the corridor. Moreover, because the corridor itself is not particularly narrow, PICO does not have to align itself with one of its walls.
Following the prerequisites, during this execution, PICO did not:
- bump into any walls
- stop moving after the start of the trial
- spend more than 5 minutes inside the room
What's more, Group 3 completed this challenge in one try, in approximately 13 seconds, using the Risky Algorithm. With this time, Group 3 is the fastest of all groups and wins the Escape Room Challenge 2021, thereby successfully completing the first major milestone of the project. While this first algorithm was successful, a safer algorithm was also provided as a back-up.
Hospital Challenge
Task Division
In order to show which task each student in the group is, or was, working on, we made an Excel table in which every student entered their participation in each task. Please note that this table is updated often.
To update or simply view the current state of the table, please access the following link: [Task division.]
Everyone in this group has editing rights, and everyone else has viewing rights, to the Excel sheet under the link above.