Mobile Robot Control 2020 Group 6

__TOC__


=Group Members=


Students (name, id nr):
* Joep Selten, 0988169
* Emre Deniz, 0967631
* Aris van Ieperen, 0898423
* Stan van Boheemen, 0958907
* Bram Schroeders, 1389378

=Logs=

This section contains information regarding the group meetings.

{| class="wikitable"
|+ List of Meetings
! Meeting !! Date/Time !! Roles !! Summary !! Downloads
|-
| Meeting 1 || Wednesday 29 April, 13:30 || Chairman: Aris<br>Minute-taker: Emre || Introductory meeting, where we properly introduced ourselves. Discussed in general what is expected in the Design Document. Brainstormed solutions for the Escape Room challenge. Set up a division of tasks (Software Exploration / Design Document). || Minutes
|-
| Meeting 2 || Wednesday 6 May, 11:30 || Chairman: Emre<br>Minute-taker: Stan || Discussed V1 of the Design Document with Wouter. Devised a plan of attack for the escape room competition and roughly divided the workload into two parts (Perception + World Model and Strategy + Control). || Minutes
|-
| Meeting 3 || Monday 11 May, 11:00 || Chairman: Stan<br>Minute-taker: Joep || Discussed what needs to be finished for the Escape Room challenge. || Minutes
|-
| Meeting 4 || (Date), Time || Chairman: Joep<br>Minute-taker: Bram || Evaluated the escape room challenge and the group work so far. Made agreements to improve the workflow for the remainder of the project. || Minutes
|-
| Meeting 5 || (Date), Time || Chairman: Bram<br>Minute-taker: Pim || (Summary goes here) || (Downloadable minutes goes here)
|-
| Meeting 6 || (Date), Time || Chairman: Pim<br>Minute-taker: Aris || (Summary goes here) || (Downloadable minutes goes here)
|}


=Design Document=

The design document, which describes the design requirements, specifications, components, functions, and interfaces, can be found here.

=Escape Room Challenge=

The escape room challenge required the PICO robot to escape a room with limited prior knowledge of the environment. The information architecture of the embedded software is described in the design document; the main components are Perception, Monitor & Strategy, Control, and the World Model.
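
To illustrate how these components interact, a minimal skeleton is sketched below. The class and method names are assumptions for illustration and do not correspond to our actual code base; they only reflect the data flow around the world model described in the design document.

<pre>
// Illustrative skeleton of the four components named above. Each component
// reads from and/or writes to the shared world model once per control cycle.
struct WorldModel {
    // Shared data, e.g. detected walls, the exit location, and the current target.
};

struct Perception {
    void update(WorldModel& wm) { /* LRF data -> walls, edges, exit location */ }
};

struct MonitorStrategy {
    void update(WorldModel& wm) { /* decide the active state and set a target */ }
};

struct Control {
    void update(const WorldModel& wm) { /* turn the target into base velocity commands */ }
};
</pre>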

==Perception==

The objective of the escape room challenge is to find the exit and drive out of it. To achieve this, the robot should recognize the exit and find its location, which is the main objective concerning perception. For this challenge, no localization algorithm is used, as that would require unnecessarily complex techniques. Instead, the robot tries to recognize the exit by exploiting its unique features compared to the rest of the room, and then tries to determine the exit's location with respect to the robot. Line detection and edge detection functionality has been implemented in order to detect the walls of the room in local coordinates. This way, at each time step, the start point, the end point, and the nearest point of the walls can be observed by PICO. The functions work in the following manner:

* Line detection: the LRF data consist of 1000 points, each with a range value, which is the absolute distance to PICO. The line detection function loops over the data and calculates the absolute distance between two neighboring data points. When this distance exceeds the value d<sub>gap</sub>, the line segment is split at that point.
* Edge detection: the line detection only detects whether neighboring data points are far apart. The edge detection function detects whether the line segments (which result from the line detection) contain edges. The basic idea of the implemented algorithm can be seen in the figure below. The line segment in that figure has a starting data point A and an end point B. A virtual line, AB, is then drawn from point A to B. Finally, the distance d<sub>edge</sub> from each data point C<sub>i</sub> inside the segment to the virtual line AB is calculated. The point with the largest value of d<sub>edge</sub> can be considered an edge.
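
As an illustration, a minimal sketch of both functions is given below. It assumes the LRF ranges have already been converted to Cartesian points in PICO's local frame; the type names, function names, and thresholds are illustrative and do not come from our actual code.

<pre>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal point type in PICO's local frame (illustrative).
struct Point { double x, y; };

// Line detection: split the scan into segments wherever the distance between
// two neighboring points exceeds d_gap.
std::vector<std::vector<Point>> detectLines(const std::vector<Point>& scan, double d_gap)
{
    std::vector<std::vector<Point>> segments;
    std::vector<Point> current;
    for (std::size_t i = 0; i < scan.size(); ++i) {
        if (!current.empty()) {
            double dx = scan[i].x - current.back().x;
            double dy = scan[i].y - current.back().y;
            if (std::hypot(dx, dy) > d_gap) {   // large jump: close the current segment
                segments.push_back(current);
                current.clear();
            }
        }
        current.push_back(scan[i]);
    }
    if (!current.empty()) segments.push_back(current);
    return segments;
}

// Edge detection: return the index of the point C_i with the largest
// perpendicular distance d_edge to the virtual line AB; the caller can compare
// max_dist against a threshold to decide whether the segment contains an edge.
std::size_t detectEdge(const std::vector<Point>& seg, double& max_dist)
{
    const Point& a = seg.front();
    const Point& b = seg.back();
    double ab_len = std::hypot(b.x - a.x, b.y - a.y);
    std::size_t best = 0;
    max_dist = 0.0;
    if (seg.size() < 3 || ab_len < 1e-9) return best;   // degenerate segment
    for (std::size_t i = 1; i + 1 < seg.size(); ++i) {
        double d = std::fabs((b.x - a.x) * (a.y - seg[i].y) -
                             (a.x - seg[i].x) * (b.y - a.y)) / ab_len;
        if (d > max_dist) { max_dist = d; best = i; }
    }
    return best;
}
</pre>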

With the ability to observe and locate walls, gaps can be detected easily. The basic idea of this gap detection algorithm is that the robot looks for a large step difference between two subsequent beams of the LRF. The threshold for this difference was fine-tuned by testing with many different gap sizes. In addition, beams of the LRF that do not detect anything are filtered out.
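
A minimal sketch of this check on the raw range data is shown below; the threshold and the validity bounds are illustrative, and the scan is simply assumed to be available as a vector of range values.

<pre>
#include <cmath>
#include <cstddef>
#include <vector>

// Gap detection: return the beam indices i at which the range difference
// between beam i and beam i+1 exceeds the tuned step threshold. Beams that do
// not detect anything (outside the valid range window) are skipped.
std::vector<std::size_t> detectGaps(const std::vector<float>& ranges,
                                    float step_threshold,
                                    float range_min, float range_max)
{
    std::vector<std::size_t> gaps;
    for (std::size_t i = 0; i + 1 < ranges.size(); ++i) {
        bool valid_a = ranges[i]     > range_min && ranges[i]     < range_max;
        bool valid_b = ranges[i + 1] > range_min && ranges[i + 1] < range_max;
        if (!valid_a || !valid_b) continue;                    // filter out empty beams
        if (std::fabs(ranges[i + 1] - ranges[i]) > step_threshold)
            gaps.push_back(i);
    }
    return gaps;
}
</pre>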

Besides detecting the exit, the robot should also know where to drive once one is found. First, this is the position just before the beginning of the exit. After that, the robot should drive at least 3 meters in the direction of this exit. To determine these locations, three situations are distinguished: the robot can detect the exit on its right side, it can detect the exit on its left side, or it can look directly into the exit. The first two situations are distinguished by the order in which the robot looks into the exit, i.e. whether the range difference the LRF detects is increasing or decreasing in subsequent beams. If the two walls before and after the big step difference in two subsequent beams are parallel, the robot is in the third situation. For each situation, the position in front of the exit is calculated geometrically. This local position is updated in the world model.
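
A sketch of this case distinction is shown below. Which sign of the range step corresponds to "left" or "right" depends on the scan direction of the LRF, so the mapping in the sketch is only illustrative; the wall orientations are assumed to come from the line detection described above.

<pre>
#include <cmath>
#include <cstddef>
#include <vector>

// The three ways the exit can appear in the scan, as described above.
enum class ExitView { OnRight, OnLeft, StraightAhead };

// gap_index:    beam index just before the large range step found by the gap detection.
// angle_before: orientation (rad) of the wall segment before the gap.
// angle_after:  orientation (rad) of the wall segment after the gap.
ExitView classifyExit(const std::vector<float>& ranges, std::size_t gap_index,
                      double angle_before, double angle_after,
                      double parallel_tol = 0.1)
{
    const double pi = 3.14159265358979323846;

    // Walls on both sides (nearly) parallel: the robot looks straight into the exit.
    double diff = std::fabs(angle_before - angle_after);
    if (diff < parallel_tol || std::fabs(diff - pi) < parallel_tol)
        return ExitView::StraightAhead;

    // Otherwise the side follows from whether the range jumps up or down at the
    // gap; note that the left/right assignment depends on the LRF scan direction.
    return (ranges[gap_index + 1] > ranges[gap_index]) ? ExitView::OnLeft
                                                       : ExitView::OnRight;
}
</pre>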

==Monitor and strategy==

==Control==

==Validation==

==Challenge==

On May 13th, the escape room challenge was held, where the task was to exit a simple rectangular room through its single exit without any prior information about the room. We had prepared two branches of code, to give ourselves a backup. With the software described in the previous sections, the first attempt showed behavior very close to the video below. Unfortunately, when the robot turned its back towards the wall it should be following, it got stuck in a loop from which it could not escape. From the terminal we could read that the robot remained in a single state, called FollowWall. However, its reference direction constantly changed.

[[File:ER fail 4-2020-05-18 10.34.30.gif]]

The code for the second attempt, which omitted the states GoToGap and GoToFinish, made use of only two states: FindWall and FollowWall. This meant that the issue we faced in the first attempt was still present in the new code, and hence exactly the same behavior was observed.

During the interview, our representative proposed that the issue was a result of the robot turning its back to the wall, meaning that the wall behind it is not entirely visible. In fact, because the robot cannot see directly behind itself, the wall appears to be made out of two parts. While turning, the part of the wall that is closest to the robot, and is therefore used by the FollowWall function, changes, and hence the reference point changes position. With the new reference point the robot turns again, which makes the other section of the wall the closest one, causing the robot to turn back and enter a loop.

During testing with the room that was provided after the competition, a different root cause of our problem was found. As it turned out, the wall to the rear left of the robot almost vanishes when the robot is turning clockwise with its back facing that wall, as can be seen in the left video above. As a result, this wall no longer qualifies as a wall in the perception algorithm, and hence is no longer considered as a reference wall. The robot then takes the wall to its left as its reference, which means it should turn counterclockwise again to start moving parallel to it. At that point, the wall below it passes over the threshold again, once again triggering clockwise movement towards the exit.

With this new observation about the reason the robot got stuck, which could essentially be reduced to the fact that the wall to be followed dropped below the threshold, the first debugging step was to lower this threshold. Reducing it from 50 to 20 points allowed the robot to turn clockwise far enough that the portion of the wall towards the exit came closest and could hence be followed. This meant that the robot was able to drive towards the exit, and out of the escape room, without any other issues, as can be seen in the video below. All in all, it turned out that the validation we had performed before the actual challenge missed this specific situation, in which the robot was in a corner and had to turn more than 90 degrees towards the exit. As a result, we had not tuned the threshold on the minimum number of points in a wall well enough, which was in fact the only change required for the robot to finish the escape room.
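
For reference, a minimal sketch of such a wall-qualification filter is given below; the names are illustrative (the Point type is repeated from the line-detection sketch above so that the snippet is self-contained), and only the idea of the point-count threshold comes from the challenge itself.

<pre>
#include <cstddef>
#include <vector>

// Same illustrative Point type as in the line-detection sketch above.
struct Point { double x, y; };

// Keep only line segments with at least min_points LRF points. During the
// challenge this threshold was 50; lowering it to 20 kept the partially
// occluded wall in view while turning in the corner.
std::vector<std::vector<Point>> filterWalls(const std::vector<std::vector<Point>>& segments,
                                            std::size_t min_points = 20)
{
    std::vector<std::vector<Point>> walls;
    for (const auto& seg : segments)
        if (seg.size() >= min_points)
            walls.push_back(seg);
    return walls;
}
</pre>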

[[File:ER succ 4-2020-05-18 10.36.58.gif]]