Embedded Motion Control 2015 Group 3

From Control Systems Technology Group
This is the Wiki-page for EMC-group 3, part of the [[Embedded_Motion_Control_2015|Embedded Motion Control 2015 course]].


= Group members =
{| border="1" class="wikitable"
|-
! Name
|}


= Project =
This course is about software design and how to apply it in the context of autonomous robots. The accompanying assignment is about applying this knowledge to a real-life robotics task.

The goal of this course is to acquire knowledge and insight about the design and implementation of embedded motion systems. Furthermore, the purpose is to develop insight into the possibilities and limitations in relation to the embedded environment (actuators, sensors, processors, RTOS). To make this operational, an embedded control system is practically implemented for an autonomous robot in the Maze Challenge, in which the robot has to find its way out of a maze.

PICO is the name of the robot that will be used. Basically, PICO has two types of inputs:
# The laser data from the laser range finder
# The odometry data from the wheels


In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).
 
At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.


= Design =
In this section, the general design of the project is discussed.


=== Requirements ===
The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:
* Move and reach the exit of the maze.
** As fast as possible
** Enter a door
** Do not get stuck in a loop
* The robot should avoid bumping into the walls.
* Therefore, it should perceive its surroundings.
* The robot has to solve the maze in a 'smart' way.
* Must be applicable to every maze.
=== Functions & Communication ===


[[File:behaviour_diagram.png|250px|thumb|right|Blockdiagram for connection between the contexts]] The robot will know a number of basic functions. These functions can be divided into two categories: tasks and skills.


The tasks are the most high-level things the robot should be able to do. These are:
*Determine situation
*Decision making
*Skill selection


The skills are specific actions that accomplish a certain goal. The list of skills is as follows:
*Handle intersections
*Handle dead ends
*Discover doors
*Mapping environment
*Make decisions based on the map
*Detect the end of the maze


These skills need the following functions of the robot:
*Drive
*Rotate
*Read out sensor data to scan environment


=== Software architecture ===


[[File:Overrall structure.jpg|250px|thumb|right|Overall structure]]To solve the problem, it is divided into different blocks with their own functions. We have chosen to make these five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure on the right shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. Now, a short clarification of the figure will be given. More detailed information on each block will be discussed in the next sections.


Let's start with the Scan block:
* Scan receives information about the environment. To do this, it uses its laser range finder data.
* Based on this data, Scan consults its potential field algorithm to make a vector for Drive.
* Drive interprets the vector and sends the robot in that direction.
* Together, the LRF and odometry data determine the traveled distance and direction. Localisation saves this in an orthogonal grid.
* Mapping consults these positions to 'tell' Decision at what interesting point the robot is. For instance, this can be a junction or a dead end.
* Then it should know if the robot has been there before. Based on that, Decision can send a new action to Scan/Drive.
* Now the new vector is based on the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.
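The potential-field step above can be sketched in a few lines. This is a minimal illustration, not the group's actual code: it assumes every laser point contributes a repulsive force weighted by 1/r², plus a constant attractive force straight ahead, and the normalized sum is the vector handed to Drive.

```python
import math

def potential_field_vector(ranges, angle_min, angle_inc,
                           repulse_gain=0.05, attract=1.0):
    """One potential-field step: each laser point pushes the robot away
    (weight ~ 1/r^2); a constant attraction pulls it forward.  Returns a
    unit direction vector (x forward, y left) in the robot frame."""
    fx, fy = attract, 0.0          # attractive force straight ahead
    for i, r in enumerate(ranges):
        if r <= 0.01:              # skip invalid readings
            continue
        a = angle_min + i * angle_inc
        w = repulse_gain / (r * r)  # closer obstacles push harder
        fx -= w * math.cos(a)       # push away from the obstacle
        fy -= w * math.sin(a)
    norm = math.hypot(fx, fy)
    return (fx / norm, fy / norm) if norm > 0 else (0.0, 0.0)
```

With a wall close on the robot's left, the repulsion tilts the resulting vector to the right; in an empty corridor it points almost straight ahead.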


=== Calibration ===
<p>[[File:Originaldata.png|250px|thumb|right|Calibration: Difference between odometry and LRF data]] In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors in some way. The robot was programmed to drive back and forth in front of a wall. At every time instance, it would collect odometry data as well as laser data. The laser data point that was straight in front of the robot was compared to the odometry data, i.e. the driven distance is compared to the measured distance to the wall in front of the robot. The top figure on the right shows these results. The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped so that the laser data should match the odometry exactly, if the sensors provided perfect data. Two things are now notable from this figure:
*The laser data and the odometry data do not return exactly the same values.
*The odometry seems to produce no noise at all.


[[File:StaticLRF.png|250px|thumb|right|alt=Static LRF|Calibration: Static LRF]]


The noisy signal that was returned by the laser is presented in the bottom picture on the right. Here, a part of the laser data is picked from a robot that was not moving.
* The maximum amplitude of the noise is roughly 12 mm.
* The standard deviation of the noise is roughly 5.5 mm.
* The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.
* The odometry produces no notable noise at all, but it has a significant drift as the driven distance increases. Usage is recommended only for smaller distances (<1 m)</p>
<br><br><br><br><br><br><br><br><br><br><br><br>
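The noise figures above can be reproduced with a few lines of code. The snippet below is a hypothetical illustration (not the code used on PICO): it computes the mean, standard deviation and peak-to-peak amplitude of a static beam, and implements the moving average recommended above.

```python
import statistics

def lrf_noise_stats(samples):
    """Characterize one static LRF beam: returns (mean, standard
    deviation, peak-to-peak amplitude), all in the unit of the samples."""
    return (statistics.fmean(samples),
            statistics.stdev(samples),
            max(samples) - min(samples))

def smoothed(samples, window=5):
    """Moving average over the last `window` readings: never trust a
    single measurement, average over time instead."""
    return [statistics.fmean(samples[max(0, i - window + 1):i + 1])
            for i in range(len(samples))]
```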


= Software implementation =
In this section, the implementation of the software is discussed based on the block division we made.


A brief instruction about each block can be found here. In addition, there are also more detailed problem-solving processes and ideas, which can be found in the sub-pages of each block.


=== Drive block ===
[[File:Drive.jpg|250px|thumb|right|Composition pattern of Drive]] Basically, the [[Embedded_Motion_Control_2015_Group_3/Drive|Drive block]] is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using a potential field. How the potential field works in detail is shown in [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]]. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the Decision maker can influence the tasks Drive has to do.


Two other methods were also considered: [[Embedded_Motion_Control_2015_Group_3/Drive#Simple_method|Simple method]] and [[Embedded_Motion_Control_2015_Group_3/Drive#Path_planning_for_turning|Path planning]]. In particular, we spent a lot of time on the Path planning method. However, after testing, the potential field proved to be the most robust and convenient method.
<br><br><br><br><br>


=== Scan block ===
[[File:Scan_cp_new.jpg|250px|thumb|right|Composition pattern of Scan]][[Embedded_Motion_Control_2015_Group_3/Scan|The block Scan]] processes the laser data of the Laser Range Finder. This data is used to detect corridors, doors, and different kinds of junctions. The data that is retrieved by 'scan' is used by all three other blocks.


# Scan directly gives information to 'drive'. Drive uses this to avoid collisions.
# The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.
# Mapping also uses data from scan to map the maze.


PICO always moves to the place with the most space using its potential field. However, at junctions and intersections the current potential field is incapable of leading PICO into the desired direction. Virtual walls are constructed to shield potential path ways, so that PICO will move in the desired direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. Also, the scan block is capable of detecting doors, which is a necessary part of solving the maze. More detailed information about the following properties is found in [[Embedded_Motion_Control_2015_Group_3/Scan|the block Scan]]:
* Potential field
* Detecting junctions/intersections
* Virtual walls
* Collision avoidance
* Detecting doors
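As an illustration of how an opening or junction can show up in the laser data, the sketch below flags jumps between consecutive range readings; a large jump marks a corner point that bounds an opening. This is a simplified stand-in for the detection described above, not the actual Scan implementation, and the 0.5 m threshold is an assumed value.

```python
def find_openings(ranges, jump_threshold=0.5):
    """Indices where consecutive laser ranges jump by more than
    `jump_threshold` metres -- the signature of a corner point that
    bounds an opening (side exit, junction, opened door)."""
    return [i for i in range(1, len(ranges))
            if abs(ranges[i] - ranges[i - 1]) > jump_threshold]
```

For a beam sweep that sees wall, then a side corridor, then wall again, the two detected indices bracket the opening, from which a middle point and nearest corner point can be derived.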


=== Decision block ===
[[File:Composition_Pattern_Decision.png|250px|thumb|right|Composition pattern of Decision]]The [[Embedded_Motion_Control_2015_Group_3/Decision|Decision block]] contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.


<ins>Input:</ins>
* Mapping model
* Scan data


 
<ins>Output:</ins>
* Specific drive action command


<ins>Implement Trémaux's algorithm</ins>
The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks. If two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached.
 
<ins>Different situations when visiting a node</ins>
* Is it a dead-end node?
* Did the door open for me?
* Are there any unvisited paths?
* Are there any paths with 1 visit?
* Paths with 2 visits (not a choice)
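These rules can be sketched as a small decision function. This is an illustrative reconstruction, not the group's code: `edge_visits` is a hypothetical map from direction to the number of times that edge has been taken, and ties are broken here by a fixed straight/left/right preference rather than the random choice mentioned above.

```python
def tremaux_choice(edge_visits,
                   preference=("straight", "left", "right", "back")):
    """Trémaux's rule at a node: never take an edge already visited
    twice, prefer the edge with the fewest marks, and break ties by a
    fixed direction preference."""
    # Edges visited twice are exhausted under Tremaux and never chosen.
    candidates = {d: v for d, v in edge_visits.items() if v < 2}
    if not candidates:
        return None  # everything exhausted: back at the start, no exit
    fewest = min(candidates.values())
    for direction in preference:
        if candidates.get(direction) == fewest:
            return direction
    return None
```

For example, at a T-junction where the left edge was taken once, straight ahead is unvisited and the right edge was taken twice, the function returns "straight".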
 
 
=== Mapping block ===
[[File:Emc03 wayfindingCP1.png|250px|thumb|right|Mapping & solve algorithm]] [[Embedded_Motion_Control_2015_Group_3/Mapping|The mapping block]] will store the corridors and junctions of the maze. This way, the decision maker can make informed decisions at intersections, to ensure that the maze will be solved in a strategic way.


To do this, the [http://www.cems.uvm.edu/~rsnapp/teaching/cs32/lectures/tremaux.pdf Tremaux algorithm] is used, which is an implementation of Depth First Search.


The maze will consist of nodes and edges. A node is either a dead end, or any place in the maze where the robot can go in more than one direction (i.e., an intersection). An edge is the connection between one node and another, and as such is also called a corridor. An edge may also lead to the same node. In the latter case, this edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (junction or a dead end). The input of the algorithm is the possible routes the robot can go (left, straight ahead, right, turn around) and the output is a direction that the Mapping block considers the best option.


In order to detect loops, the position of the robot must be known, as well as the position of each node, to see when the robot has returned to the same location. This is decoupled from the Mapping block and done in the [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation block]]. Since the localisation did not work in the end, a Random Walk implementation was also made in the Mapping block.
<br><br><br>


=== Localisation block ===
The purpose of localisation is that the robot can prevent itself from driving in a loop for an infinite time, by knowing where it is at a given moment. Based on this information, it can decide what to do next. As a result, the robot will be able to exit the maze within finite time, or it will report that there is no exit if it has searched everywhere without success.


The localisation algorithm is explained on the [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation page]], by separating and discussing the important factors.


= A-maze-ing Challenge =
In the third week of this project we had to do the corridor challenge. During this challenge, we had to let the robot drive through a corridor and then take the first exit (whether left or right). This job can be tackled with two different approaches:
# Make a script only based on the corridor challenge.
# Make a script for the corridor challenge but with clear references to the final maze challenge.
We chose the second approach. This implied that we had to do some extra work to think about a properly structured code, because only then can the same script be used for the final challenge. After the corridor competition, we could discuss our choice: we failed the corridor challenge while other groups succeeded, but most of those groups had selected approach 1, whereas we already had a decent base for the a-maze-ing challenge. This proved its worth later on.

For the a-maze-ing challenge we decided on using two versions of our software package. In the first run (see the section Videos further down the page), we implemented Trémaux's algorithm together with a localiser that would together map the maze and try to solve it. Our second run was conducted with Trémaux's algorithm and the localisation algorithm turned off. Each time the robot encountered an intersection, a random decision was made on where to go next.


=== Run 1 ===
The first run is taped on video and can be seen [https://www.youtube.com/watch?v=fzsNA2OUwww here]. The robot recognizes a four-way cross-section and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused because of the intersection immediately to its left. After driving closer to the wall, it mistakes it for a door. Because it (of course) didn't open, it decides to turn right and explore the dead end. In the part between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight but also to evade the walls to the left and to the right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits for the maximum of 5 seconds that it can take to open the door. Now it recognizes that this too is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Trémaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the other nodes. It decides to follow the same route as the first time, fails to drive to the corridor with the door in it, and eventually gets stuck in a loop.


The main reason for failure is thought to be the node placement. The first T-junction that the robot encountered made PICO go into its collision-avoiding mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be implemented as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.


=== Run 2 ===
For the second run, we ran a version of our software without Trémaux's algorithm implemented and with the global localiser absent. These features were developed later in the project and were not 100% finished. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head to.


The second run can be seen [https://www.youtube.com/watch?v=UHz_41Bsi7c here]. Again the robot immediately decides to go left. Note that the first corner it takes in the corridor, between 0:02 and 0:04, is exactly the same as in the first run. This is because the robot is driven by separate blocks of software, and the blocks that are active during the following of a corridor were exactly the same for both runs. At 0:07, the collision detection works just in time to prevent a head-on collision with the wall in front of PICO at the T-junction. Now, a random decision is made to go left, followed by a right turn into the corridor with the door. It recognizes the door in front of it exactly as expected and stops to ring the doorbell. Although the door started moving immediately after ringing the bell, the robot is programmed to wait for five seconds until it is allowed to move again. During these five seconds, it uses the LRF to check if the door has moved out of its way. After the passage was all clear, the robot started exploring the new area and drives into the open space. Note that, between 0:30 and 0:36, the robot made a zigzag manoeuvre: when it first drives into the open space, the potential field points at the center of this open space. Between 0:36 and 0:46 it drives in 'open space mode'. This means that the robot will drive to the nearest wall and start driving alongside of it. It should thereby always find a new node where a new decision can be made. By doing so, it drives into a corridor. Note that at 0:47, the normal 'corridor mode' started working again. The potential field method will direct the robot towards the middle of the corridor, which explains the sharp turn it made at 0:47. After hearing the presenter ask to 'Please go left... Please go left?!?', the robot makes another random decision. As luck would have it, the random decision was indeed to go left. It slightly overturns, but the collision detection saves PICO from crashing into the wall yet again at 1:06.
At 1:10, the well-earned applause for PICO started as it finished the maze in a total time of 1:16!


= Experiments =
Seven experiments were done during the course. [[Embedded_Motion_Control_2015_Group_3/Experiments|Here]] you can find short information about the dates and goals of the experiments, together with a short evaluation of each experiment.

= Conclusion =
In the end, our final script was capable of solving the maze challenge quickly and robustly, in the sense that it did not bump into the walls. However, since we were not able to implement the higher-order thinking into our script and our final code depended on a random walk, the route the robot takes is up to chance. This will still solve the maze eventually, as is shown in the second trial.

Our recommendation therefore is to further develop the localisation, in combination with the mapping, and in this way implement the higher-order learning, as was our aim.

What we learned from this project was to implement top-down software design. This helped us a lot in keeping an overview of such a large code base, by compartmentalizing the code into blocks and keeping a clear overview of the communication between the blocks. It also allowed for an easier bottom-up implementation, which has the added benefit of being able to build the code up from scratch: we would now start by creating a composition pattern, and then base the code on that.

The classes did help us in figuring out how to approach these diagrams, like the composition pattern. However, we had trouble seeing the application to our problem right away, for instance how each block in this composition pattern should be applied.

= Files & presentations =
# Initial design document (week 1):   [[File:init_design.pdf]]
# First presentation (week 3):        [[File:Group3_May6.pdf]]
# Second presentation (week 6):       [[File:Group3_May27.pdf]]
# Final design presentation (week 8): [[File:EMC03 finalpres.pdf]]

= Videos =
Experiment 4: Testing the potential field on May 29, 2015.
* https://youtu.be/UAZqDMAHKq8

Maze challenge: Tremaux's algorithm, but failing to solve the maze. June 17, 2015.
* https://www.youtube.com/watch?v=fzsNA2OUwww

Maze challenge: Winning attempt! on June 17, 2015.
* https://www.youtube.com/watch?v=UHz_41Bsi7c

= EMC03 CST-wiki sub-pages =
* [[Embedded_Motion_Control_2015_Group_3/Drive|Drive]]
* [[Embedded_Motion_Control_2015_Group_3/Scan|Scan]]
* [[Embedded_Motion_Control_2015_Group_3/Decision|Decision]]
* [[Embedded_Motion_Control_2015_Group_3/Mapping|Mapping]]
* [[Embedded_Motion_Control_2015_Group_3/Localisation|Localisation]]
* [[Embedded_Motion_Control_2015_Group_3/Experiments|Experiments]]

Latest revision as of 19:05, 26 June 2015


PICO is the name of the robot that will be used. In this case, PICO has two types of useful inputs:

  1. The laser data from the laser range finder
  2. The odometry data from the wheels

In the fourth week there is the "Corridor Competition". During this challenge, students have to let the robot drive through a corridor and then take the first exit (whether left or right).

At the end of the project, the "A-maze-ing challenge" has to be solved. The goal of this competition is to let PICO autonomously drive through a maze and find the exit. Group 3 was the only group capable of solving the maze.

Design

In this section the general design of the project is discussed.

Requirements

The final goal of the project is to solve a random maze, fully autonomously. Based on the description of the maze challenge, several requirements can be set:

  • Move and reach the exit of the maze
    • As fast as possible
    • Enter a door
    • Do not get stuck in a loop
  • The robot should avoid bumping into the walls; therefore, it should perceive its surroundings.
  • The robot has to solve the maze in a 'smart' way.
  • The approach must be applicable to every maze.

Functions & Communication

Blockdiagram for connection between the contexts

The robot has a number of basic functions. These functions can be divided into two categories: tasks and skills.

The tasks are the highest-level proceedings the robot should be able to perform. These are:

  • Determine situation
  • Decision making
  • Skill selection

The skills are specific actions that accomplish a certain goal. The list of skills is as follows:

  • Handle intersections
  • Handle dead ends
  • Discover doors
  • Mapping environment
  • Make decisions based on the map
  • Detect the end of the maze

These skills need the following functions of the robot:

  • Drive
  • Rotate
  • Read out sensor data to scan environment

Software architecture

Overall structure

To solve the problem, it is divided into different blocks with their own functions. We have chosen to make these five blocks: Scan, Drive, Localisation, Decision and Mapping. The figure on the right shows a simplified scheme of the software architecture and the cohesion of the individual blocks. In practice, Drive/Scan and Localisation/Mapping are closely linked. Now, a short clarification of the figure will be given. More detailed information of each block will be discussed in the next sections.

Let's start with the Scan block:

  • Scan receives information about the environment, using its laser range finder data.
  • Based on this data, Scan consults its potential field algorithm to produce a vector for Drive.
  • Drive interprets the vector and sends the robot in that direction.
  • Together, the LRF and odometry data determine the traveled distance and direction; Localisation saves this in an orthogonal grid.
  • Mapping consults these positions to 'tell' Decision which interesting point the robot is at, for instance a junction or a dead end.
  • Decision should then know whether the robot has been there before; based on that, it can send a new action to Scan/Drive.
  • The new vector is then based on both the environment data and the information from Decision. In this way, the robot should find a strategic way to drive through the maze.
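As a minimal illustration of the Scan→Drive part of this chain, the sketch below steers towards the widest opening seen by the laser. It is an illustrative Python sketch with invented names and a toy heuristic, not the actual PICO framework:

```python
def scan(laser):
    """Scan: condense LRF ranges into a preferred direction
    (here simply the index of the beam with the most free space)."""
    return max(range(len(laser)), key=lambda i: laser[i])

def drive(direction_index, n_beams):
    """Drive: map a beam index onto a steering angle in radians,
    assuming the beams span a field of view from -2 to +2 rad."""
    return -2.0 + 4.0 * direction_index / (n_beams - 1)

def control_step(laser):
    """One pass through the simplified Scan -> Drive chain."""
    return drive(scan(laser), len(laser))

# A corridor that is most open straight ahead steers the robot forward:
beams = [0.5] * 5 + [3.0] + [0.5] * 5   # 11 beams, widest gap in the middle
angle = control_step(beams)              # 0.0 rad: drive straight
```

In the real architecture, Decision and Mapping sit between these two steps and can override the preferred direction.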


Calibration

Calibration: Difference between odometry and LRF data

In the lectures, the claim was made that 'the odometry data is not reliable'. We decided to quantify the errors in the robot's sensors. The robot was programmed to drive back and forth in front of a wall. At every time instance, it collected odometry data as well as laser data. The laser data point straight in front of the robot was compared to the odometry data, i.e. the driven distance was compared to the measured distance to the wall in front of the robot. The top figure on the right shows these results. The starting distance from the wall is subtracted from the laser data signal. Then, the sign is flipped so that the laser data would match the odometry exactly if the sensors provided perfect data. Two things are notable from this figure:

  • The laser data and the odometry data do not return exactly the same values.
  • The odometry seems to produce no noise at all.
Calibration: Static LRF

The noisy signal that was returned by the laser is presented in the bottom picture on the right. Here, a part of the laser data is picked from a robot that was not moving.

  • The maximum amplitude of the noise is roughly 12 mm.
  • The standard deviation of the noise is roughly 5.5 mm
  • The laser produces a noisy signal. Do not trust one measurement but take the average over time instead.
  • The odometry produces no notable noise at all, but it has a significant drift as the driven distance increases. Usage is recommended only for smaller distances (<1 m)
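The advice to average over time can be illustrated numerically with the measured noise level (standard deviation of roughly 5.5 mm). The simulation below uses synthetic noise, not our measurement data:

```python
import random
import statistics

random.seed(1)
TRUE_DIST = 1000.0   # mm, assumed true distance to the wall
SIGMA = 5.5          # mm, standard deviation measured in the static LRF test

# 200 simulated laser readings of a static wall
samples = [random.gauss(TRUE_DIST, SIGMA) for _ in range(200)]

single = samples[0]                    # trusting one raw measurement
averaged = statistics.fmean(samples)   # averaging over time

# Averaging N samples shrinks the standard error by a factor sqrt(N):
# 5.5 mm / sqrt(200) is about 0.39 mm, so the mean is a far safer estimate.
```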

Software implementation

In this section, the implementation of the software is discussed based on the block division we made.

A brief description of each block is given here; more detailed problem-solving processes and ideas can be found in the sub-pages of each block.

Drive block

Composition pattern of Drive

Basically, the Drive block is the doer (not the thinker) of the complete system. The figure shows the composition pattern of Drive. In our case, the robot moves around using a potential field; how the potential field works in detail is shown in Scan. A potential field is an easy way to drive through corridors and make turns. It is important to note that information from the decision maker can influence the tasks Drive has to do.

Two other methods were also considered: a simple method and path planning. We spent a lot of time on the path planning method in particular. However, after testing, the potential field turned out to be the most robust and convenient method.
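How such a drive vector could be turned into base velocities can be sketched as follows; the function and its gains are invented for illustration, not the values used on PICO:

```python
import math

def vector_to_twist(vx, vy, v_max=0.5, w_gain=1.5):
    """Convert a potential-field vector (vx, vy) into a forward
    speed and a turn rate. Gains are illustrative, not PICO's."""
    angle = math.atan2(vy, vx)             # how far the vector points off-axis
    v = v_max * max(0.0, math.cos(angle))  # slow down while turning sharply
    w = w_gain * angle                     # rotate towards the vector
    return v, w

v, w = vector_to_twist(1.0, 0.0)   # vector straight ahead: full speed, no turn
```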






Scan block

Composition pattern of Scan

The Scan block processes the data of the laser range finder (LRF). This data is used to detect corridors, doors, and different kinds of junctions. The data retrieved by Scan is used by all three other blocks.

  1. Scan directly gives information to 'drive'. Drive uses this to avoid collisions.
  2. The scan sends its data to 'decision' to determine an action at a junction for the 'drive' block.
  3. Mapping also uses data from scan to map the maze.

Using its potential field, PICO always moves towards the place with the most space. However, at junctions and intersections the current potential field is incapable of leading PICO into the desired direction. Virtual walls are therefore constructed to shield off potential pathways, so that PICO moves in the direction chosen by the decision maker. To create an extra layer of safety, collision avoidance has been added on top of the potential field. The Scan block is also capable of detecting doors, which is a necessary part of solving the maze. More detailed information about the following properties can be found in the block Scan:

  • Potential field
  • Detecting junctions/intersections
  • Virtual walls
  • Collision avoidance
  • Detecting doors
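The repulsion/attraction idea behind the potential field can be sketched as follows. This is a simplified illustration with invented parameters; the actual implementation is detailed on the Scan page:

```python
import math

def potential_vector(ranges, angles, d0=0.5):
    """Sum a constant attraction straight ahead with repulsive unit
    vectors away from every wall closer than d0 (all values invented)."""
    fx, fy = 1.0, 0.0                  # attraction: keep moving forward
    for r, a in zip(ranges, angles):
        if r < d0:                     # only nearby walls repel
            w = (d0 - r) / d0          # stronger when closer
            fx -= w * math.cos(a)
            fy -= w * math.sin(a)
    n = math.hypot(fx, fy)
    return fx / n, fy / n

# A wall close on the left (+90 degrees) pushes the vector to the right:
angles = [math.radians(d) for d in (-90, 0, 90)]
vx, vy = potential_vector([2.0, 2.0, 0.2], angles)
```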

Decision block

Composition pattern of Decision

The Decision block contains the algorithm for solving the maze. It can be seen as the 'brain' of the robot. It receives the data of the world from 'Scan'; then decides what to do (it can consult 'Mapping'); finally it sends commands to 'Drive'.

Input:

  • Mapping model
  • Scan data

Output:

  • Specific drive action command

The maze solving algorithm used is Trémaux's algorithm. This algorithm requires drawing lines on the floor: every time a direction is chosen, it is marked by drawing a line on the floor (from junction to junction). Each time, choose the direction with the fewest marks; if two directions have been visited equally often, choose randomly between them. Eventually, the exit of the maze will be reached.
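The 'fewest marks' rule can be sketched as follows (`tremaux_choice` is a hypothetical helper written for this wiki, not our actual code):

```python
import random

def tremaux_choice(junction_marks, rng):
    """Pick the direction with the fewest marks, breaking ties randomly.
    junction_marks maps direction -> times that corridor was entered."""
    fewest = min(junction_marks.values())
    candidates = [d for d, m in junction_marks.items() if m == fewest]
    choice = rng.choice(candidates)
    junction_marks[choice] += 1        # draw a new mark on the floor
    return choice

rng = random.Random(42)                # seeded so the example is repeatable
marks = {"left": 1, "straight": 0, "right": 1}
first = tremaux_choice(marks, rng)     # "straight" is the only unmarked way
```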

Mapping block

Mapping & solve algorithm

The mapping block will store the corridors and junctions of the maze. This way, the decision maker can make informed decisions at intersections, to ensure that the maze will be solved in a strategic way.

To do this, the Tremaux algorithm is used, which can be regarded as a form of depth-first search.

The maze consists of nodes and edges. A node is either a dead end or any place in the maze where the robot can go in more than one direction (i.e., an intersection). An edge is the connection between one node and another, and as such is also called a corridor. An edge may also lead back to the same node; in that case, the edge is a loop. The algorithm is called by the general decision maker whenever the robot encounters a node (a junction or a dead end). The input of the algorithm is the set of possible routes the robot can take (left, straight ahead, right, turn around) and the output is the direction that the Mapping block considers the best option.

In order to detect loops, the position of the robot must be known as well as the position of each node, to see when the robot has returned to the same location. This is decoupled from the Mapping block and done in the Localisation block. Since the localization did not work in the end, a Random Walk implementation was also made in the Mapping block.


Localisation block

The purpose of the localisation is to prevent the robot from driving in a loop for an infinite time, by knowing where it is at any given moment. Knowing where it is, the robot can decide what to do next based on this information. As a result, the robot will be able to exit the maze within finite time, or report that there is no exit if it has searched everywhere without success.

The localisation algorithm is explained on the Localisation page, by separating and discussing the important factors.
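The core idea, integrating odometry into a pose and snapping it to the orthogonal grid so revisited nodes can be recognised, can be sketched like this (illustrative only; the cell size and update rule are invented, not our actual implementation):

```python
import math

def integrate_odometry(pose, dist, dtheta):
    """Advance a pose (x, y, theta) by a heading change followed by
    a driven distance, as dead reckoning from the wheel odometry."""
    x, y, th = pose
    th += dtheta
    return (x + dist * math.cos(th), y + dist * math.sin(th), th)

def grid_cell(pose, cell_size=0.5):
    """Snap a pose to the orthogonal grid used to detect revisits."""
    x, y, _ = pose
    return (round(x / cell_size), round(y / cell_size))

pose = (0.0, 0.0, 0.0)
for _ in range(4):          # drive a 1 m square: four (90 deg, 1 m) steps
    pose = integrate_odometry(pose, 1.0, math.pi / 2)
# After the closed loop the robot is back in its starting cell,
# so the mapping layer can flag this node as visited before.
```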

A-maze-ing Challenge

In the third week of this project we had to do the corridor challenge. During this challenge, we had to let the robot drive through a corridor and then take the first exit (whether left or right). This job could be tackled with two different approaches:

  1. Make a script only based on the corridor challenge.
  2. Make a script for the corridor challenge but with clear references to the final maze challenge.

We chose the second approach. This implied some extra work to think about a properly structured code, because only then could the same script be used for the final challenge. After the corridor competition, our choice could be debated, because we failed the corridor challenge while other groups succeeded. However, most of those groups had chosen approach 1, whereas we already had a decent base for the A-maze-ing challenge. This proved its worth later on.

For the A-maze-ing challenge we decided on using two versions of our software package. In the first run (see the Videos section further down the page), we implemented Tremaux's algorithm together with a localiser; together they would map the maze and try to solve it. Our second run was conducted with Tremaux's algorithm and the localisation algorithm turned off: each time the robot encountered an intersection, a random decision was made on where to go next.

Run 1

The first run is taped on video and can be seen here. The robot recognizes a four-way intersection and decides to turn into the left corridor. It then immediately starts to chatter, as the corridor was narrower than expected. Next, it follows the corridor smoothly until it encounters the next T-junction. The robot is confused by the intersection immediately to its left; after driving closer to the wall, it mistakes it for a door. Because the 'door' (of course) didn't open, it decides to turn right and explore the dead end. Between 20 and 24 seconds into the video, the robot is visibly having a hard time with the narrow corridor: it tries to drive straight while also evading the walls to the left and right. It recognizes another dead end and turns around swiftly. It crosses the T-junction again by going straight, and at 43 seconds it again thinks it is in front of a door. After ringing the bell, it waits the maximum of 5 seconds that the door can take to open. It then recognizes that this, too, is a dead end and not a door. After turning around, it drives back to the starting position. Between 1:11 and 1:30, it explores the edges that it has not yet seen. Here, Tremaux's algorithm and the localiser 'seem' to be doing their job just fine. Unfortunately, as can be seen in the rest of the video, something went wrong with the placement of the subsequent nodes: the robot decides to follow the same route as the first time, fails to drive into the corridor with the door, and eventually gets stuck in a loop.

The main reason for failure is thought to be the node placement. The first T-junction the robot encountered put PICO into its collision-avoidance mode, which might have interfered with the commands to place a node. It is also possible that the node was actually placed, but that the localisation went wrong because of all the lateral swaying to avoid collisions with the wall. It was clear that the combination of localisation, the maze-solving algorithm and the situation recognition by LRF was not yet ready to be deployed as a whole. Therefore, we decided to make the second run with a simpler version of our software, running only the core modules that were tested and found to be reliable.

Run 2

For the second run, we ran a version of our software without Tremaux's algorithm implemented and with the global localiser absent. These features were developed later in the project and were not 100% finished. For this run, a random decision was passed to the decision maker every time it asked for a new direction to head in.
