Embedded Motion Control 2014 Group 3
= Software architecture and used approach =
A schematic overview of the software architecture is shown in the figure below. The software consists of only three nodes: laserProcessor, arrowDetection and strategy. A short elaboration of each node:


'''laserProcessor:''' This node reads the ''/pico/laser'' topic, which publishes the laser data in polar coordinates. The node converts the polar coordinates to Cartesian coordinates and filters out points closer than 10 cm. The Cartesian coordinate system is preferred over the polar one because it is more intuitive to work with. Once the conversion and filtering are done, the transformed coordinates are published on the topic ''/cloud''.
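In plain C++ (without the ROS message types) the conversion this node performs can be sketched as follows; ''laserToCloud'', ''angleMin'' and ''angleIncrement'' are illustrative names, only the 10 cm cutoff comes from the description above:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point2D { double x, y; };

// Convert one laser scan (ranges in polar form) to Cartesian points,
// discarding points closer than 10 cm, as the laserProcessor node does.
std::vector<Point2D> laserToCloud(const std::vector<double>& ranges,
                                  double angleMin, double angleIncrement) {
    std::vector<Point2D> cloud;
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        const double r = ranges[i];
        if (r < 0.10) continue;  // filter out points closer than 10 cm
        const double a = angleMin + static_cast<double>(i) * angleIncrement;
        cloud.push_back({r * std::cos(a), r * std::sin(a)});
    }
    return cloud;
}
```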


'''arrowDetection:''' This node reads the camera image from the topic ''/pico/asusxtion/rgb/image_color'' and detects red arrows. If an arrow is detected, its direction is determined and published on the topic ''/arrow''.


'''strategy:''' This node does multiple things. First, it detects openings and assigns a target to each of them; if no openings are found, the robot is in a dead end. Second, it reads the topic ''/arrow''; if an arrow is present, it overrides the preferred direction. Finally, it determines the direction of the robot's velocity using the Potential Field Method (PFM) and publishes the velocity on the topic ''/pico/cmd_vel''.


The approach is simple and easy to implement; the strategy node, for example, contains only about 100 lines of code. This is why the elements were not divided over more nodes. Simple code makes bugs less likely and easier to fix, and it makes it easy to tweak the robot's behavior for performance.


[[File:softarch2.png]]


= Applied methods description =
== Arrow detection ==
The following steps describe the algorithm to find the arrow and determine its direction:
 
'''1.''' Read RGB image from ''/pico/asusxtion/rgb/image_color'' topic.
 
'''2.''' Convert RGB image to HSV image.
 
'''3.''' Filter out the red color using ''cv::inRange''.
 
'''4.''' Find contours and convex hulls and apply filter.
 
The filter removes all contours and convex hulls where the following relationship does not hold:
 
<math>0.5 < \frac{Contour \ area}{Convex \ hull \ area} < 0.65 </math>.
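The ratio test can be expressed as a small predicate (a sketch; in the real node both areas would come from ''cv::contourArea'' applied to the contour and its convex hull, and ''passesAreaFilter'' is an illustrative name):

```cpp
// Accept a candidate only when the contour fills between 50% and 65%
// of its convex hull, the ratio characteristic for the arrow shape.
bool passesAreaFilter(double contourArea, double hullArea) {
    if (hullArea <= 0.0) return false;
    const double ratio = contourArea / hullArea;
    return 0.5 < ratio && ratio < 0.65;
}
```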


'''5.''' Use ''cv::approxPolyDP'' over the contours and apply filter.


The function ''cv::approxPolyDP'' is used to fit polylines over the remaining contours. The arrow should consist of approximately 7 line segments, so a polyline is accepted as the arrow when its number of lines satisfies:


<math>5 \ < \ number \ of \ lines \ in \ polyline \ < \  10 </math>


'''6.''' Determine if arrow is pointing left or right.


First the midpoint of the arrow is found using <math>x_{mid} = \frac{x_{min}+x_{max}}{2}</math>. The program then iterates over all points of the arrow contour, counting the number of points left and right of <math>x_{mid}</math>. If the left counter is greater than the right counter the arrow points to the left; otherwise it points to the right.
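The midpoint test can be sketched in a few lines of C++ (''ContourPoint'' and ''arrowPointsLeft'' are illustrative names; in the real node the contour comes from OpenCV):

```cpp
#include <algorithm>
#include <vector>

struct ContourPoint { double x, y; };

// Decide the arrow direction from its contour: find the horizontal
// midpoint x_mid = (x_min + x_max) / 2 and count the contour points
// on either side of it. More points on the left means the arrow
// points left.
bool arrowPointsLeft(const std::vector<ContourPoint>& contour) {
    double xmin = contour.front().x, xmax = contour.front().x;
    for (const ContourPoint& p : contour) {
        xmin = std::min(xmin, p.x);
        xmax = std::max(xmax, p.x);
    }
    const double xmid = 0.5 * (xmin + xmax);
    int left = 0, right = 0;
    for (const ContourPoint& p : contour)
        (p.x < xmid ? left : right) += 1;
    return left > right;
}
```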


'''7.''' Making the detection more robust.


Extra robustness is required because, for example, when the arrow is not detected in a single frame the program should still know there is an arrow. Therefore the last 5 iterations are considered: the direction of the arrow is published on the topic ''/arrow'' only if the arrow was detected in all of them. If no arrow has been detected in any of the last 5 iterations, the arrow is assumed to be out of sight and its absence is published on the topic ''/arrow''.
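The 5-frame debouncing described above can be sketched as follows (''ArrowFilter'' is an illustrative name; the return values are stand-ins for what would be published on ''/arrow''):

```cpp
#include <deque>

// Debounce the per-frame detections: only when the arrow was seen in
// all of the last 5 frames is its direction published; only when it
// was seen in none of them is its absence published. In between, no
// new message is published (returned here as '?').
class ArrowFilter {
    std::deque<char> history;  // 'L', 'R', or 0 for "no arrow this frame"
public:
    char update(char detection) {
        history.push_back(detection);
        if (history.size() > 5) history.pop_front();
        if (history.size() < 5) return '?';    // not enough evidence yet
        int seen = 0;
        for (char c : history)
            if (c != 0) ++seen;
        if (seen == 5) return history.back();  // arrow confirmed
        if (seen == 0) return 'N';             // arrow confirmed gone
        return '?';
    }
};
```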


[[File:arrowwiki.png]]


== Strategy ==


=== Target ===


In the approach used, a target point is required for Pico to drive to. This section explains how the target point is determined.
In the code a loop iterates through all data points, which are ordered from right to left. Only points within 1.4 m of Pico are taken into account. If the distance between two consecutive points is greater than a threshold, the presence of a door (opening) is assumed. This is explained further with the following figure.


[[File:Base_setpointPointWIKI.png]]


In the figure the vision of Pico is illustrated by the dotted black arc and the detected points within this range by red dots. In the left figure two openings are detected, i.e. the distance between two consecutive points is greater than a user-defined threshold (0.6 m). The potential targets (shown in blue) are simply placed in the middle between the two consecutive points.
The middle figure illustrates a problem that occurred while testing. When Pico looks past a corner it sometimes detects points that are not really there, so-called ghost points. This is a problem because with ghost points the distance between two consecutive points becomes smaller than the threshold and no opening is detected. To filter this effect the distance is not taken between two consecutive points, but over a window of five points (a value found by testing). A downside of this method is that points far ahead of Pico could produce false positives, but because Pico only looks at a relatively short distance this does not occur. In the figure on the right the filter is applied and the door is found again.


The targets are used to orient Pico: by adjusting <math>\theta</math> the target point is kept straight in front of Pico.
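The opening detection with the five-point ghost filter can be sketched as follows (''ScanPoint'' and ''findTargets'' are illustrative names; the 1.4 m vision range, 0.6 m threshold and window of five points come from the description above):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct ScanPoint { double x, y; };

// Walk over the scan points (ordered right to left), only consider
// points within 1.4 m of Pico, and take the distance over a window of
// 5 points to suppress ghost points. A jump larger than 0.6 m
// indicates an opening; the target is placed halfway between the two
// points.
std::vector<ScanPoint> findTargets(const std::vector<ScanPoint>& pts) {
    std::vector<ScanPoint> targets;
    const std::size_t window = 5;
    for (std::size_t i = 0; i + window < pts.size(); ++i) {
        const ScanPoint& a = pts[i];
        const ScanPoint& b = pts[i + window];
        if (std::hypot(a.x, a.y) > 1.4) continue;      // outside vision range
        if (std::hypot(a.x - b.x, a.y - b.y) > 0.6)    // jump => opening
            targets.push_back({0.5 * (a.x + b.x), 0.5 * (a.y + b.y)});
    }
    return targets;
}
```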
 
=== PFM ===
Now that the target points are determined, Pico needs to drive towards them without hitting any walls. In the first strategy, shown at the bottom of this page, walls were fitted so Pico knew where to drive. That method had robustness issues which led to problems during the corridor challenge, so a more robust method was chosen for the maze competition.
With the Potential Field Method virtual repulsive forces are applied to Pico for every laser point. The force caused by a point depends on the distance <math>r</math> between Pico and the laser point. The virtual force is defined as <math>F_r=-\frac{1}{r^5}</math>, where the resulting force <math>F</math> points away from the obstacle.
When the sum is taken over all repulsive forces, the net repulsive force points away from close objects.


[[File:ForceWIKI.png]]


The target is added as an attractive force. The function for the attractive force is <math>F_a=r</math>. In one of the first iterations of the software the attractive force had the same form as the repulsive force, but during testing a linear relation between distance and force of the target point proved to work best.


The magnitude of the resulting force varies, and because Pico must drive as fast as possible only the direction of the total force is used: the maximum speed is set in the direction of the resulting force.
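The force computation described above can be sketched as follows (''Force'' and ''pfmDirection'' are illustrative names; the repulsive law <math>F_r=-\frac{1}{r^5}</math> and attractive law <math>F_a=r</math> come from the text):

```cpp
#include <cmath>
#include <vector>

struct Force { double x, y; };

// PFM sketch: every laser point repels Pico with magnitude 1/r^5
// (pointing away from the obstacle) and the target attracts with
// F_a = r, i.e. linearly in the distance. Only the direction of the
// summed force is kept; the speed is always the maximum.
Force pfmDirection(const std::vector<Force>& obstacles, const Force& target) {
    Force total{0.0, 0.0};
    for (const Force& p : obstacles) {
        const double r = std::hypot(p.x, p.y);
        if (r < 1e-9) continue;
        const double f = -1.0 / std::pow(r, 5);  // repulsive magnitude
        total.x += f * p.x / r;                  // along the obstacle direction
        total.y += f * p.y / r;
    }
    total.x += target.x;  // attractive force F_a = r in the target direction
    total.y += target.y;
    const double n = std::hypot(total.x, total.y);
    if (n > 1e-9) { total.x /= n; total.y /= n; }  // keep only the direction
    return total;
}
```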


=== Dead-end ===
The strategy node detects a dead end when no doors are detected. The robot responds by turning around its z-axis in the direction opposite to the preferred direction (which is always left or always right). When it detects a door it continues the normal procedure. Consider the following example:


'''1.''' The robot sees a door and keeps on going. The preferred direction of the robot is going left.


'''2.''' The robot sees two options, but it takes the leftmost door since this is the preferred direction.


'''3.''' The robot sees one door and drives to the target.


'''4.''' The robot does not see any doors, so it recognises a dead end. It starts turning around its z-axis in clockwise direction (opposite to the preferred direction, which was left).


'''5.''' The robot detects a new door and continues normal procedure.


[[File:Base_deadendWIKI.png]]


In this case a dead end is only detected when it is within the vision of Pico, which is limited to 1.4 m. When the dead end is longer than in the example, Pico drives into it until the end comes within vision range. If the vision field for dead ends were extended, a dead end would be detected sooner and Pico would finish the maze faster; there was, however, no time to implement this in the algorithm.
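The dead-end reaction reduces to a small decision rule (a sketch; ''deadEndTurnRate'' and ''maxTurnRate'' are illustrative names, not values from the report):

```cpp
// When no openings (and therefore no targets) are found, rotate
// around the z-axis against the preferred direction until a new
// opening appears, then resume the normal procedure.
double deadEndTurnRate(bool targetsFound, bool preferLeft, double maxTurnRate) {
    if (targetsFound) return 0.0;     // normal procedure: drive to the target
    return preferLeft ? -maxTurnRate  // preferred left  -> turn clockwise
                      :  maxTurnRate; // preferred right -> turn counter-clockwise
}
```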


=== Safety ===
The importance of safety, and the requirement that Pico should not hit an obstacle under any circumstances, was recognised early on. For this reason a safety feature was introduced. In the ideal situation Pico should never need it: the program was designed in such a way that the robot can complete its task without using 'Safety'. But the real world contains many variables that are difficult to account for, particularly in the given time frame, and there is always a possibility that for some unforeseen reason Pico gets too close to an object and might hit it. 'Safety' was designed to prevent this. The first version simply stopped Pico when it moved very close to any detected object. This version was, of course, not very effective, as the robot would come to a standstill once Safety was invoked, which prevented it from finishing its task.
'Safety' was later upgraded. When too close, Pico needs to move away from the obstacle, and this movement should not interfere with the path detection and spatial orientation of the robot. In the latest version two circles are considered: a critical circle and a larger safety circle. The program first counts the number of points detected within the critical circle; only if more than 10 points are counted is 'Safety' mode started, which filters out noise. Safety is given the highest priority and overrides the movement decisions for Pico. The program then calculates the average x and y position of all points within the safety circle. Without changing the direction it is facing, Pico moves in the direction directly opposite to this average obstacle point. Since no turning is involved, the spatial orientation of Pico remains unaffected. The point is updated every cycle and Pico keeps moving away until it leaves the safety circle, after which 'Safety' mode is turned off and normal operation resumes.
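The two-circle check can be sketched as follows (''safetyEscape'' is an illustrative name, and the radii passed in are assumptions; the report does not give their values, only the 10-point noise threshold):

```cpp
#include <cmath>
#include <vector>

struct CloudPoint { double x, y; };

// If more than 10 points fall inside the critical circle, safety mode
// starts and Pico moves directly away from the average of all points
// inside the (larger) safety circle, without turning.
bool safetyEscape(const std::vector<CloudPoint>& cloud,
                  double criticalRadius, double safetyRadius,
                  CloudPoint& escapeDir) {
    int critical = 0, inSafety = 0;
    double sx = 0.0, sy = 0.0;
    for (const CloudPoint& p : cloud) {
        const double r = std::hypot(p.x, p.y);
        if (r < criticalRadius) ++critical;
        if (r < safetyRadius) { sx += p.x; sy += p.y; ++inSafety; }
    }
    if (critical <= 10 || inSafety == 0) return false;  // normal operation
    const double ax = sx / inSafety, ay = sy / inSafety;
    const double n = std::hypot(ax, ay);
    if (n < 1e-9) return false;
    escapeDir = {-ax / n, -ay / n};  // opposite to the average obstacle point
    return true;
}
```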


[[File:base_safetyWIKI_upd.png]]


=== End of maze ===


A part of the script is dedicated to determining whether Pico has reached the end of the maze and it is time to stop. In the point cloud, a maximum of 1080 points can be detected in a 270 degree range in front of Pico. There are several differences Pico can detect when it is outside the maze compared to inside; the one implemented in the program is that the detected points are farther away when Pico is outside the maze. Pico counts the number of points farther than 1.6 m away. Inside the maze this number never exceeds 400 points. When the number is greater than 0.6 times the maximum possible (0.6 × 1080 = 648), Pico runs in 'End of maze' mode. In this mode Pico drives straight for forty cycles, after which it comes to a standstill. The 'End of maze' counter is checked every cycle; if at any point the count is not large enough, the cycle counter is reset to zero and Pico leaves 'End of maze' mode. This is a preventive measure against false end-of-maze detection.
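The counting logic above can be sketched as follows (''EndOfMazeDetector'' is an illustrative name; the 1.6 m range, the 648-point threshold and the 40-cycle delay come from the description):

```cpp
#include <vector>

// Count scan points farther than 1.6 m away; when more than
// 0.6 * 1080 = 648 of them are that far, Pico is assumed to be
// outside the maze. It then drives on for 40 cycles before stopping;
// any cycle failing the test resets the counter.
class EndOfMazeDetector {
    int cycles = 0;
public:
    // Returns true once Pico should come to a standstill.
    bool update(const std::vector<double>& ranges) {
        int farPoints = 0;
        for (double r : ranges)
            if (r > 1.6) ++farPoints;
        if (farPoints > 648) ++cycles;  // still looks like open space
        else cycles = 0;                // false alarm: reset the counter
        return cycles >= 40;
    }
};
```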


= First software approach =


==Obstacle Detection==
===Finding Walls and Doors===
Our first approach was to find walls and doors (openings) expressed as lines, so we could easily steer the robot towards a chosen door. The following algorithm was developed:
1. All laser points are plotted in a ''cv::Mat'' structure (left). Then a Probabilistic Hough Line Transform is applied to the plotted laser points (right).


[[File:Emc03wall.png]]


2. The resulting lines are sorted into two categories: the most common direction and the second most common direction. This is shown in the figures below.
 


[[File:mostcommon.png]] [[File:secondmostcommon.png]]


3. Since the Probabilistic Hough Line Transform returns multiple lines per wall (a wall can consist of 20 lines), a single line is fit through the contour of each wall. The result is one line per wall, as shown in the figure below.


[[File:finalwallok.png]]


4. Now it is possible to fit 'doors' between the walls, as shown in the figure below.


[[File:deurtjes.png]]


===Why this method doesn't work in real life===
The method described above for finding walls and doors did not work properly in real life. We experienced serious robustness problems: in some iterations complete walls and/or doors were not detected, so the robot could not steer properly towards the target. In addition, the method required considerable computational power, which is undesirable (one CPU core ran at 80% with the algorithm refreshing at only 5 Hz).
 
 


= Evaluation of the maze competition =
During the corridor competition it became clear that the chosen strategy was not robust enough to detect the maze in real life: the first strategy worked in the simulator but not in reality. To achieve a more robust strategy we changed our approach as described above.


During testing of the second strategy in the days before the maze competition we noticed that this had been a good decision, with excellent results. PICO was not only able to drive correctly through a maze with straight walls and 90 degree corners, it could also drive through a 'maze' with bent walls and arbitrary angles. Even when people were walking through the maze, PICO did not give up solving it. This made us confident about the performance during the real competition.


During the competition the 'always go left' rule was chosen, although it did not matter because of the design of the maze. As a result, PICO chose the correct direction at the first crossing. The correct action at the second crossing was a right turn: PICO first looked into the left and front corridors to check whether they were dead ends, which they were, and then turned right. The crossings in the rest of the maze were provided with red arrows, which were detected correctly and used to steer into the right corridor.


To improve the strategy a few adaptations can be made for efficiency and robustness. The first corner was taken really fast; PICO seemed to 'drift' through the corner, which is the most speed-efficient way to pass a crossing. At the second crossing, however, it first had to check whether the other directions were dead ends, which costs valuable time. To improve this, the dead-end detection should look at a wider range of points instead of only in front of PICO, so that it turns in the right direction even earlier.


To achieve even better robustness, the ghost-point 'filter' should be improved. In the current strategy a fixed number of points is used to span the distance, but this is not perfectly robust. Although PICO finished correctly, an algorithm that detects whether a point is a ghost point (for example by checking whether other points lie close to it) would make the detection even more robust.


In conclusion, the chosen strategy turned out to be robust and efficient compared to those of other groups. Only a few lines of code are used and the strategy does not use a lot of data. During the competition we finished first with a total time of 1 minute and 2 seconds, a great achievement!

Latest revision as of 15:00, 2 July 2014

= Group members =
* Jan Romme (0755197)
* Freek Ramp (0663262)
* Kushagra (0873174)
* Roel Smallegoor (0753385)
* Janno Lunenburg (tutor)


Software architecture and used approach

A schematic overview of the software architecture is shown in the figure below. The software basically consists only of 3 nodes, laserProcessor, arrowDetection and strategy. A short elaboration of the nodes:

laserProcessor: This node reads the /pico/laser topic which sends the laser data in polar coordinates. All this node does is it converts the polar coordinates to cartesian coordinates and filters out points closer than 10 cm. The cartesian coordinate system is preferred over the polar coordinate system because it feels more intuitive. Finally once the conversion and filtering is done it publishes the transformed coordinates onto topic /cloud.

arrowDetection: This node reads the camera image from topic /pico/asusxtion/rgb/image_color and detects red arrows. If it detects an arrow the direction is determined and posted onto topic /arrow.

strategy: This node does multiple things. First it detects openings where it assigns a target to, if no openings are found the robot finds itself in a dead end. Secondly it reads the topic /arrow and if an arrow is present it overrides the preferred direction. Finally the robot determines the direction of the velocity of the robot using the Potential Field Method (PFM) and sends the velocity to the topic /pico/cmd_vel.

The approach is very simple and easy to implement. The strategy node for example has approximately only 100 lines of code. This is the reason why there has not been chosen to divide all elements over different nodes. The benefit of easy code is the easiness to tackle bugs and that bugs are less likely to occur. Another benefit of this 'simple' code is the easiness to tweak the robot's behavior to optimize for performance.

Softarch2.png

Applied methods description

Arrow detection

The following steps describe the algorithm to find the arrow and determine its direction:

1. Read RGB image from /pico/asusxtion/rgb/image_color topic.

2. Convert RGB image to HSV image.

3. Filter out the red color using cv::inRange.

4. Find contours and convex hulls and apply filter.

The filter removes all contours and convex hulls where the following relationship does not hold:

[math]\displaystyle{ 0.5 \lt \frac{Contour \ area}{Convex \ hull \ area} \lt 0.65 }[/math].

5. Use cv::approxPolyDP over the contours and apply filter.

The function cv::approxPolyDP is used to fit polylines over the resulting contours. The arrow should have approximately 7 lines per polyline. The polyline with the number of lines as described in the following formula is the arrow:

[math]\displaystyle{ 5 \ \lt \ number \ of \ lines \ in \ polyline \ \lt \ 10 }[/math]

6. Determine if arrow is pointing left or right.

First the midpoint of the arrow is found using [math]\displaystyle{ x_{mid} = \frac{x_{min}+x_{max}}{2} }[/math]. When the midpoint is known the program iterates over all points of the arrow contour. Two counters are made which count the number of points left and right of [math]\displaystyle{ x_{mid} }[/math]. If the left counter is greater than the right counter the arrow is pointing to the left, otherwise the arrow is pointing to the right.

7. Making the detection more robust.

Extra robustness is required for example when at one frame the arrow is not detected the program still has to know there is an arrow. This is done by taking the last 5 iterations, check if in all these iterations the arrow is detected then publish the direction of the arrow onto the topic /arrow. If in the last 5 iterations no arrow has been detected the arrow is not visible anymore thus publish that there is no arrow onto the topic /arrow.

Arrowwiki.png

Strategy

Target

In the approach used a target is required for Pico to drive to. In this section the determination of the target point is explained. In the code a loop iterates through all data points, which are in order from right to left. Only the points within 1.4 meter from Pico are taken into account. If the distance between two consecutive points is greater than a determined threshold, the presence of a door (opening) is assumed. This will be further explained with the following figure.

Base setpointPointWIKI.png

In the figure the vision of Pico is illustrated by the dotted black arc and the detected points within this marge by red dots. In the left figure one can see that two openings are detected, i.e. the distance between two consecutive points is greater than a user defined threshold value (0.6 m). The potential targets (shown in blue) are just simply placed in the middle of the two consecutive points. In the middle figure a problem is illustrated which occured while testing. When Pico looks past a corner it somehow detects points which are not really there, so called ghost points. This is a problem because in the case of ghost points the distance between two points becomes smaller than the threshold and no openings are detected. To filter this effect the distance is not taken between two points, but over five (found by testing). A downside of this method is that points far ahead of Pico give the possibility of false positives. But because in this case Pico only looks at a relative short distance, this will not occur. In the figure on the right the filter is applied and the door is found again.

The targets are used to orientate Pico. By adjusting [math]\displaystyle{ \theta }[/math] the target point is kept straight in front of Pico.

PFM

Now the target points are determined Pico needs to drive towards them without hitting any walls. In the first strategy, shown at the bottom of this page, walls were fitted so Pico knew where to drive. This method had robustness issues which lead to some problems during the corridor challenge. Therefore a more robust method was chosen for the maze competition. With the Potential Field Method virtual repulsive forces are applied to Pico for every laser point. The force caused by a point depends on the distance, [math]\displaystyle{ r }[/math], between Pico and a laser point. The virtual force is defined as: [math]\displaystyle{ F_r=-\frac{1}{r^5} }[/math], where the resulting force [math]\displaystyle{ F }[/math] is pointing away from the obstacle. When the sum is taken over all repulsive forces, the net repulsive force is pointing away from close objects.

ForceWIKI.png

A target is added as an attractive forces. The function for the attractive force is [math]\displaystyle{ F_a=r }[/math]. In one of the first iterations of the software the attractive force was composed like the repulsive force, but during testing a linear relation between distance and force of the target point proved to be working best.

The resulting force is not always the same and because Pico must drive as fast as possible only the direction of the total force is taken. The maximum speed is set in the direction of the resulting force.

Dead-end

The strategy node detects a dead-end when no doors are detected. The robot anticipates by turning around its z-axis in opposite direction of the preferred direction (which is always left or always right). When it detects a door it continues the normal procedure. Consider the following example:

1. The robot sees a door and keeps on going. The preferred direction of the robot is going left.

2. The robot sees 2 options, but it takes the most left door since this is the preferred direction.

3. The robot sees one door and drives to the target.

4. The robot does not see any doors, it recognises a dead end. It now starts turning around its z-axis in clockwise direction (opposite to the preferred direction which was left).

5. The robot detects a new door and continues normal procedure.

Base deadendWIKI.png

In this case a dead-end is only detected if it is within the vision of Pico, which is limited to 1.4 meters. When the dead-end is longer than in the example Pico will drive in the dead-end until the dead-end is within the vision. If the vision field for dead-ends is extended a dead-end will be detected sooner and pico will finish the maze faster. There was no time however to implement this in the algorithm.

Safety

The importance of safety and the requirement that pico should not hit an obstacle under any circumstances was realized. For this reason a safety feature was introduced. In ideal situation, pico should never need to use this safety feature. The program was designed in such a way that the robot could complete its task without using 'Safety'. But there are many variables in real world, all of which are difficult to account particularly in the given time frame. It is always a possibility that for some unforeseen reason, pico gets too close to an object and might hit it. 'Safety' was designed to prevent this. The first version of 'Safety' was designed to stop pico if it moves very close to any object it detects. This version was, of course, not very effective as the robot would come to a stand-still once Safety was invoked which prevented it from finishing its task. 'Safety' was later upgraded. When too close, pico needs to move away from the obstacle and this movement should not interfere with path-detection and space orientation of the robot. In the latest version, a sophisticated safety design was incorporated. Two circles, critical circle and safety circle are considered. The program first counts the number of points detected within the critical circle. If more than 10 points are counted, 'Safety' mode is started. This is done for noise filtering. Safety is given highest priotity and overrides decisions for movement of pico. The program now calculates average x and y position of all points within safety circle (safety circle is larger than critical circle). Without changing the direction it is facing, pico moves in direction directly opposite to the average obstacle point. Since there is no turning involved, spacial orientation of pico remains unaffected. This point is updated with every cycle and pico keeps moving away until the point moves out of safety circle. 'Safety' mode is then turned off and normal operation is resumed.

[[File:Base safetyWIKI upd.png]]
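The two-circle safety behaviour described above can be sketched as follows. This is an illustrative reconstruction, not the team's actual code: the `Point` type, function name and circle radii are assumptions; only the two-circle idea, the 10-point noise threshold and the move-away-without-turning behaviour come from the description above.

```cpp
#include <cmath>
#include <vector>

// Hypothetical cartesian laser point in the robot frame (as published on /cloud).
struct Point { double x, y; };

// Two-circle safety check. Returns true when 'Safety' mode is active and
// fills (vx, vy) with a unit velocity pointing away from the obstacles.
bool computeSafetyVelocity(const std::vector<Point>& cloud,
                           double criticalRadius, double safetyRadius,
                           double& vx, double& vy) {
    int criticalCount = 0;
    int safetyCount = 0;
    double sumX = 0.0, sumY = 0.0;
    for (const Point& p : cloud) {
        double r = std::sqrt(p.x * p.x + p.y * p.y);
        if (r < criticalRadius) ++criticalCount;
        if (r < safetyRadius) { sumX += p.x; sumY += p.y; ++safetyCount; }
    }
    // Require more than 10 points inside the critical circle to filter noise.
    if (criticalCount <= 10 || safetyCount == 0) return false;
    // Average obstacle point inside the (larger) safety circle.
    double avgX = sumX / safetyCount, avgY = sumY / safetyCount;
    double norm = std::sqrt(avgX * avgX + avgY * avgY);
    // Move directly away from the average point, without turning,
    // so the spatial orientation of the robot is unaffected.
    vx = -avgX / norm;
    vy = -avgY / norm;
    return true;  // caller overrides normal navigation this cycle
}
```

The caller would evaluate this every cycle, so the average obstacle point is recomputed until it leaves the safety circle and normal operation resumes.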

== End of maze ==

A part of the script is dedicated to determining whether Pico has reached the end of the maze and it is time to stop. In the point cloud, a maximum of 1080 points can be detected in the 270 degree range in front of Pico. There are several differences Pico can detect when it is outside the maze compared to inside it; the one implemented in the program is that the detected points are farther away from Pico once it is outside the maze. Pico counts the number of points that are farther than 1.6 meters away. Within the maze this number never exceeds 400 points. When the count is greater than 0.6 times the maximum possible (0.6 x 1080 = 648 points), Pico runs in 'End of maze' mode. In this mode Pico drives straight for forty cycles, after which it comes to a standstill. The 'End of maze' count is checked during every cycle and if, at any point, it is not large enough, the cycle counter is reset to zero and Pico leaves 'End of maze' mode. This is a preventive measure against false end-of-maze detections.
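The end-of-maze check above can be sketched as a small stateful class. The class and member names are illustrative; only the 1.6 m distance, the 648-point threshold and the 40-cycle counter with reset come from the description above. Here `update()` returns true once Pico should come to a standstill.

```cpp
#include <cmath>
#include <vector>

// Hypothetical cartesian laser point in the robot frame.
struct Point { double x, y; };

class EndOfMazeDetector {
public:
    // Call once per cycle with the current point cloud.
    // Returns true once the far-point condition has held for 40 cycles.
    bool update(const std::vector<Point>& cloud) {
        int farPoints = 0;
        for (const Point& p : cloud)
            if (std::sqrt(p.x * p.x + p.y * p.y) > 1.6) ++farPoints;
        if (farPoints > 0.6 * 1080)   // 0.6 x 1080 = 648 points
            ++cycles_;                // stay in 'End of maze' mode
        else
            cycles_ = 0;              // false detection: reset the counter
        return cycles_ >= 40;
    }
private:
    int cycles_ = 0;
};
```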

= First software approach =

== Obstacle Detection ==

=== Finding Walls and Doors ===

Our first approach was to find walls and doors (openings) expressed as lines, so we could easily steer the robot towards a chosen door. The following algorithm was developed:

1. All laser points are plotted in a cv::Mat structure (left). Then a Probabilistic Hough Line Transform is applied to the plotted laser points (right).

[[File:Emc03wall.png]]

2. The resulting lines are filtered into two categories: a most common direction and a second most common direction. This is shown in the figures below.

[[File:Mostcommon.png]] [[File:Secondmostcommon.png]]

3. Since the Probabilistic Hough Line Transform returns multiple lines per wall, i.e. a single wall can consist of 20 lines, a line is fitted through the contour of each wall. The result is one line per wall, as shown in the figure below.

[[File:Finalwallok.png]]

4. Now it is possible to fit 'doors' between the walls, as shown in the figure below.

[[File:Deurtjes.png]]
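Step 4 can be illustrated with a small sketch that fits 'doors' in the gaps between neighbouring wall segments. This is not the original implementation: the segment representation, the sorting by x coordinate and the 0.5 m minimum door width are assumptions made for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A wall reduced to a single line segment (the result of step 3).
struct Segment { double x1, y1, x2, y2; };

// Fit a 'door' between the endpoints of neighbouring walls whenever the
// gap between them is wide enough for the robot to pass through.
std::vector<Segment> fitDoors(std::vector<Segment> walls,
                              double minWidth = 0.5) {
    // Order the walls left to right by their starting x coordinate.
    std::sort(walls.begin(), walls.end(),
              [](const Segment& a, const Segment& b) { return a.x1 < b.x1; });
    std::vector<Segment> doors;
    for (size_t i = 0; i + 1 < walls.size(); ++i) {
        double dx = walls[i + 1].x1 - walls[i].x2;
        double dy = walls[i + 1].y1 - walls[i].y2;
        // A gap at least minWidth wide becomes a candidate door.
        if (std::sqrt(dx * dx + dy * dy) >= minWidth)
            doors.push_back({walls[i].x2, walls[i].y2,
                             walls[i + 1].x1, walls[i + 1].y1});
    }
    return doors;
}
```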


=== Why this method doesn't work in real life ===

The method described above for finding walls and doors did not work properly in real life. We experienced serious robustness problems: in some iterations complete walls and/or doors were not detected, so the robot could not steer properly towards the target. Secondly, this method required considerable computational power, which is undesirable.

= Evaluation of the maze competition =

During the corridor competition it became clear that the chosen strategy was not robust enough to detect the maze in real life. This first strategy did work in the simulator, but not in real life. To achieve a more robust strategy, we changed our approach as described above.

During the testing of the second strategy in the days before the maze competition we noticed that this was a good decision, with excellent results. This conclusion was drawn from the fact that PICO was not only able to drive correctly through a maze with straight walls and 90 degree corners, but also through a 'maze' with bent walls and corners at arbitrary angles. Even when people were walking through the maze, PICO still did not give up solving it. This made us confident about the performance during the real competition.

During the competition the 'always go left' rule was chosen, although it did not matter because of the design of the maze. As a result, PICO chose the correct direction at the first crossing. The correct action at the second crossing was a right turn. PICO first looked into the left and front corridors to check whether they were dead ends, which they were, so it turned right. The crossings in the rest of the maze were provided with red arrows, which were detected correctly and used to steer into the right corridor.

To improve the strategy, a few adaptations for efficiency and robustness can be made. The first corner was taken really fast; it seemed like PICO was 'drifting' through the corner, which is the most speed-efficient way to pass a crossing. At the second crossing, however, it first had to check whether the other directions were dead ends, which costs valuable time. To improve this, the dead-end detection should look at a wider range of points instead of only in front of PICO, which would make it turn in the right direction even earlier.

To accomplish even better robustness, the ghost point 'filter' should be improved. In the current strategy a fixed number of points is chosen to sum the distances over, but this is not perfectly robust. Although PICO finished correctly, an algorithm that detects whether a point is a ghost point (for example by checking whether there are other points close to the current point) would make the detection even more robust.
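Such a neighbour-based ghost point check could look like the following sketch. This is a suggestion, not existing code; the 5 cm radius and the minimum neighbour count are illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

// Hypothetical cartesian laser point in the robot frame.
struct Point { double x, y; };

// A point is treated as a 'ghost' when it has too few neighbouring
// laser points within a small radius around it.
bool isGhostPoint(const std::vector<Point>& cloud, size_t idx,
                  double radius = 0.05, int minNeighbours = 2) {
    int neighbours = 0;
    for (size_t j = 0; j < cloud.size(); ++j) {
        if (j == idx) continue;
        double dx = cloud[j].x - cloud[idx].x;
        double dy = cloud[j].y - cloud[idx].y;
        if (std::sqrt(dx * dx + dy * dy) < radius) ++neighbours;
    }
    return neighbours < minNeighbours;
}
```

Real walls produce dense runs of neighbouring points, while a reflection or sensor glitch tends to appear as an isolated point, so filtering on neighbour count removes ghosts without discarding wall data.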

In conclusion, the chosen strategy turned out to be robust and efficient compared to those of other groups. Only a few lines of code are used and the strategy does not rely on a lot of data. During the competition we finished first with a total time of 1 minute and 2 seconds, a great achievement!