Embedded Motion Control 2014 Group 6
Wout Laarakkers 0828580

Rik Boonen 0805544

Dhruv Khandelwal 0868893

<s>Suraj Prakash 0870060</s>

Hans Reijnders 0806260
== Updates ==
* '''Include [[Bool action_done]] and [[Bool wall_close]] in topic [[drive]] for handshake (23 May)'''
== Planning ==

'''Week 1 (2014-04-25 - 2014-05-02)'''
* Installing Ubuntu 12.04
* Installing ROS
* Following tutorials on C++ and ROS
* Setting up SVN

'''Week 2 (2014-05-03 - 2014-05-09)'''
* Finishing tutorials
* Interpreting the laser sensor
* Positioning of PICO
* Having our first meeting

'''Week 3 (2014-05-12 - 2014-05-16)'''
* Programming for the corridor competition
* Corridor competition

'''Week 4 (2014-05-19 - 2014-05-23)'''
* Creating a basic structure for programming in ROS
* Planning
* Dividing tasks
** Drive node: Dhruv Khandelwal
** Decision node: Hans Reijnders, Rik Boonen
** Arrow node: Wout Laarakkers
* Programming individual parts

'''Week 5 (2014-05-26 - 2014-05-30)'''
* Programming individual parts
* Testing parts
* Integrating parts

'''Week 6 (2014-06-02 - 2014-06-06)'''
* Programming individual parts
* Testing parts
* Integrating parts
* '''Deadline for the nodes'''
== Changes in group ==
Unfortunately, Suraj Prakash has decided to quit the course because he does not have enough time for it.

== Corridor competition Software design ==
[[File:flowchart_corridor_competition-500x500.png|center|thumb|200px]]
== Global Software design ==

=== Design Goals ===
For our global software we had three design goals:<br>
1. Split into equal parts<br>
2. Make logical sense<br>
3. Individually testable<br>
<br>
To meet these goals we came up with a global software design that consists of three nodes. A drive node does all the driving, like a car driver would do. A decision node makes all the decisions and tells the drive node which action to execute. An arrow node tells the decision node whether it sees an arrow, so that this can be taken into consideration for the next decision. This design is based on a real-life situation in which a driver (drive node) listens to his car navigation (decision node), which is constantly updated with the latest traffic information (arrow node).<br>
<br>
This software makes logical sense and each node can be tested individually:<br><br>
Drive node: the numpad can be used to input the situation that the decision node would send to it<br>
Decision node: Pico can be driven around with teleop to check whether it detects the correct situations and makes the correct decisions<br>
Arrow node: can be tested with bag files<br>
<br>
The strength of this design lies in the individually testable parts and the logical structure of the software.<br>
=== Structure ===
The first draft of the structure of nodes and topics is shown below.
[[File:Node_Topic3.png|center|thumb|1000px]]
The integers 'situation' and 'action' have values corresponding to different cases. These cases are defined in the tables below.
{| border="1" style="border-collapse:collapse;"
|+ Situation
! Value !! Meaning
|-
| 1 || Detected corridor
|-
| 2 || Detected dead-end
|-
| 3 || Detected corner right
|-
| 4 || Detected corner left
|-
| 5 || Detected T-crossing (right-left)
|-
| 6 || Detected T-crossing (right-straight)
|-
| 7 || Detected T-crossing (left-straight)
|-
| 8 || Detected crossing
|-
| 9 || Unidentified situation
|}

{| border="1" style="border-collapse:collapse;"
|+ Action
! Value !! Meaning
|-
| 1 || Stop
|-
| 2 || Straight
|-
| 3 || Right
|-
| 4 || Left
|-
| 5 || Turn around
|-
| 6 || Follow left wall
|-
| 7 || Follow right wall
|}
=== Taking corners ===
[[File:Taking_corners.png|center|thumb|1000px]]
== Decision node ==
The decision node is designed to act like a navigation tool. It tells the drive node where to go: it publishes which situation Pico is approaching and which way Pico has to go. For example: T-junction up ahead, go right.

In order to do this, the decision node first has to detect which situation Pico is approaching, and then it has to decide which way to go. We will first look at the situation determination and then at the action determination (telling which way to go).

=== Situation determination ===
To detect a situation, two methods have been used. The first method focuses on sending the right situation and not making errors, but is slower in giving the situation. The second method gives the situation earlier, but makes some more errors. We first look at method 1, its algorithm, simulation/test results and conclusion. Then we discuss method 2, again including its algorithm and simulation/test results, and give a conclusion.
==== Method 1: Situation determination ====
In order to determine which situation we are approaching, we want to know three things: is there a corner to the left, is there a corner to the right, and can we go straight ahead?
We determine these three things using the laser data. Unfortunately we will not always detect the corners and the dead end at the same time, so when we detect one of these three, we drive to a fixed distance from the situation. While driving we determine whether we also find the other two characteristics of the situation. Using the information we have, we can then determine the situation and publish it on the topic.
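To make this concrete, the sketch below shows one possible way to check the three openings from a single laser scan. This is not our exact implementation; the sector positions, the sector width and the 1.0 m threshold are assumptions.

<source lang="cpp">
// Sketch (not the actual implementation): classify which directions are open
// from one laser scan. Assumes the scan covers -120..+120 degrees with index 0
// at -120 degrees; sector positions and the 1.0 m threshold are made-up values.
#include <vector>
#include <numeric>

struct Openings { bool left, right, straight; };

static double sectorAverage(const std::vector<float>& r, int center, int half)
{
    double sum = std::accumulate(r.begin() + center - half, r.begin() + center + half, 0.0);
    return sum / (2 * half);
}

Openings checkOpenings(const std::vector<float>& ranges, double threshold = 1.0)
{
    const int n = ranges.size();
    const int half = n / 32;                                           // small sector around each direction
    Openings o;
    o.right    = sectorAverage(ranges, n / 8,     half) > threshold;   // ~ -90 degrees
    o.straight = sectorAverage(ranges, n / 2,     half) > threshold;   // ~   0 degrees
    o.left     = sectorAverage(ranges, 7 * n / 8, half) > threshold;   // ~ +90 degrees
    return o;
}
</source>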
The flowchart below shows the steps in this algorithm, followed by an example situation to clarify it further.
===== Flowchart =====
[[File:Flowchart_group06.png|center|thumb|1000px]]

===== Example =====
In the example below we see:

'''In picture 1''': Pico approaches an unidentified situation.

'''In picture 2''': Pico has detected a right corner. This could mean that the next situation is a corner to the right, but Pico is not yet sure whether there is a corner to the left and whether it can go straight ahead.

'''In picture 3''': Pico has set a threshold, meaning that when it reaches the threshold it will publish the situation it has detected. While driving towards the threshold, Pico checks whether there is a corner to the left or whether it can go straight ahead.

'''In picture 4''': Pico has reached the threshold and has seen that it can go straight and to the right, so the node will send situation 6.

[[File:Example_group06.png|center|thumb|1000px]]
===== Simulation Results =====
To test this algorithm in simulation we used a few methods.

The first was to put Pico in a maze and let it detect a situation without using the drive node, because the drive node was not ready yet. This worked very well, although there were some minor flaws.
One flaw is that in a situation like a T-junction where Pico can go straight and left or right, but the straight part ends in a nearby dead end, this algorithm detects a corner. During the first simulations this did not look like a problem, but later it turned out that the drive node could not handle this.

The second method was to combine the drive node and the decision node. Now we saw that the drive node indeed could not handle the flaw stated above. Also, the situation often reached the drive node too late, so the drive node could not be sure to see the turn and act on it.
===== Test Results =====
Testing the decision node with the real Pico we noticed a few things:

1. The distance to a corner is not reliable, so the threshold is not used correctly.

2. Even if the situation and threshold are used correctly, the situation arrives too late and the drive node cannot guarantee a good turn.
===== Conclusion =====
This algorithm works well without the drive node. There are some flaws, like the unreliable distance and the wrong situation detected at a T-crossing with a dead end, but these problems can be solved without changing the overall algorithm too much. The real problem is that the situation is given late. This is a main characteristic of this algorithm: giving the situation later, but not making mistakes.
To solve this problem either the drive node or the decision node had to change. We chose to change the decision node and create a new algorithm for the situation determination. This new algorithm is described in Method 2.
==== Method 2: Situation determination ====
This second method was made because in method 1 the situation was sent too late. Where method 1 focused on not making errors, here the focus lies more on sending the next situation early.
This works better when combining this node with the drive node, because when a situation is detected (perhaps only the right corner of a T-junction) the drive node can already start looking for a right corner. This can be compared to a car driver who knows what to look for and therefore finds it faster.
===== Situation counter =====
When sending every detected situation, right or wrong, we noticed that the situation is mostly right and sometimes wrong. In order to get rid of most of these falsely determined situations we added a counter: the node only publishes a situation if the same situation has been detected 5 times in a row.
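A minimal sketch of this counter is given below. Only the rule "publish after 5 identical detections" comes from our design; the message type, topic name and class layout are assumptions.

<source lang="cpp">
// Sketch of the situation counter: only publish a situation after it has been
// detected 5 times in a row. Message type and topic name are assumptions.
#include <ros/ros.h>
#include <std_msgs/Int32.h>

class SituationCounter
{
public:
    explicit SituationCounter(ros::NodeHandle& nh)
        : pub_(nh.advertise<std_msgs::Int32>("situation", 1)), last_(-1), count_(0) {}

    void update(int detected_situation)
    {
        if (detected_situation == last_) {
            ++count_;
        } else {
            last_ = detected_situation;
            count_ = 1;
        }
        if (count_ == 5) {                   // 5 identical detections in a row
            std_msgs::Int32 msg;
            msg.data = detected_situation;
            pub_.publish(msg);
        }
    }

private:
    ros::Publisher pub_;
    int last_;
    int count_;
};
</source>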
===== Flowchart =====
[[File:Flowchart_method2_group06.png|center|thumb|1000px]]
===== T-junction straight and right or left =====
Just like in method 1, there were difficulties with a T-junction (straight and right, or straight and left) where the straight part ends in a dead end very close by.
To fix this we added a part where Pico looks in front of it and tries to detect the edge of the corridor straight ahead. This is shown in the picture below.
[[File:T-junc_group06.png|center|thumb|1000px]]
In a corner we will not be able to find an edge, so we can say there is no corridor. If we do find an edge we can say there is a corridor, and with our dead-end detection we already know it is a dead end.
===== Simulation results and video =====
In simulation this method works well. Especially with the counter, we see the right situation almost every time when we are close to the situation.
The most difficult situations are T-junctions (straight and right or left). These are sometimes confused with a single corner when we are far from the situation. When we are close to the situation we see it correctly and have no problem with it.
This link leads to a video of a simulation:
[http://youtu.be/0XZtKu_gdPE Simulation of maze]
===== Test results =====
When testing we did not have any problems with this method and node. The corner detection did not seem to detect corners as fast as in simulation, but this did not lead to any problems.
We were able to detect all situations. However, because of problems with turning we could not test whole mazes.
===== Conclusion =====
We can conclude that this method works quite well, especially together with the drive node. When designing method 1 we should have communicated better with our colleague working on the drive node.
We can detect situations fairly robustly and send them to the drive node in time. Method 2 is quite a bit simpler than method 1, which is a good thing because it is easier to use. Still, there are a lot of parameters. These all have to be tuned, and when they are not tuned correctly the method will not work properly.
=== Action determination ===
When we have published a situation, the drive node makes sure we drive to the center of the situation. Once we are in the center, we want to decide which way to go. The decision that we take is based on the data that the arrow topic sends us, the laser data and the previous actions that we have made. The flowchart below shows which of these data has the highest priority in determining an action. If the arrow node tells us that it has detected an arrow, we always listen to the arrow node. If we have not received a message from the arrow node, we base our action on the laser data and the previous actions we have made. The previous actions are only needed if we have driven into a dead end and had to turn around. If this has happened, the dead end handling boolean becomes true to make sure that we do not drive back in the direction we came from. This will be explained later.
==== Flowchart ====
[[File:Decision flowchart new.png|center|thumb|1000px]]
==== Action based only on laserdata ====
If the arrow node has not detected an arrow and we are not busy with dead end handling, we use a very simple algorithm to decide the action. At the middle of the crossing we first look to the right and measure the average distance over some datapoints. If this distance is larger than a certain threshold, we consider the right as open and decide that Pico has to go right. If the measured distance is smaller than the threshold, we consider the right as a dead end. In that case we repeat the same check on the left: if the distance measured at the left of Pico is larger than the threshold, we decide Pico has to move left; if not, we repeat the check for the front of Pico. If the distance in front of Pico is larger than the threshold, we decide Pico has to move forwards. If the distance in front of Pico is also not larger than the threshold, we consider the situation a dead end and tell Pico to turn around.
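The sketch below illustrates this priority scheme. It reuses the Openings struct from the earlier laser sketch, and the action codes follow the table in the Structure section; this is an illustration, not our exact code.

<source lang="cpp">
// Sketch of the action choice at the middle of a crossing, based only on laser
// data: right has priority over left, left over straight, otherwise turn around.
// Uses the Openings struct from the earlier sketch; the action codes follow the
// table in the Structure section.
int chooseAction(const Openings& o)
{
    if (o.right)    return 3;  // action 3: right
    if (o.left)     return 4;  // action 4: left
    if (o.straight) return 2;  // action 2: straight
    return 5;                  // action 5: turn around (dead end)
}
</source>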
==== Dead end handling ====
Here we discuss our dead end handling. Its function is to make sure that after a dead end we continue in the direction we have to go, and that Pico does not move back to where it came from and leave the maze at its entrance. The figure below shows what would happen without dead end handling. Because we use a kind of right wall follower, moving to the right has the highest priority, the second highest priority is moving to the left, and the last is moving forward.
[[File:Example.png|center|thumb|400px]]
With the help of the image below we will explain how our dead end handling works. First Pico just moves like it usually does, with moving to the right as highest priority, followed by moving to the left, and lastly moving straight. This is shown in the left part of the image. The green arrow shows in which direction Pico will move. The triangle is Pico, and the numbers inside the triangle are the indices of the actions: at each crossing we store the action that we decided to take under that index.

As you can see, the third action is at a dead end, so we turn around and set the boolean dead end detection high. Now we continue in the right part of the image. As soon as we arrive at a crossing again we have to make the 4th action. What our dead end handling does is change the priority of the actions. Because the action before the dead end (the action with index 2) was moving to the left, we now set this as the highest priority for the action after the dead end (the action with index 4). If this is not possible because there is no opening to the left, we want to go straight, and if there is no opening there either, we move to the right. If we are able to do the same action as before the dead end, the dead end handling is finished and the boolean is set to false.

In this example the index of the dead end is 3, so we want action 4 to be the same as action 2. Action 2 was turning to the left. However, action 4 cannot be turning to the left, because there is no opening, so our dead end handling is not finished. At the next crossing we have to determine action 5. This is the second action after we encountered the dead end, so we want to do the same as two actions before the dead end, which is action 1. This was turning to the left; because there is now an opening on the left, Pico turns left and the dead end handling is finished.

This video of a simulation shows that the dead-end handling works:
[http://youtu.be/B4XXv2n2TCw Dead-end video]
[[File:dead end handling.png|center|thumb|400px]]
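The simplified sketch below illustrates the action memory and the priority change. It reuses Openings and chooseAction() from the earlier sketches; the fallback when the preferred direction is closed is a simplification, and all names are assumptions.

<source lang="cpp">
// Simplified sketch of the dead-end handling described above. Actions taken at
// crossings are stored in a vector; after a dead end at index k, the preferred
// action at the i-th crossing becomes the stored action at the mirrored index
// 2k - i. The fallback when the preferred direction is closed is simplified.
#include <vector>

struct DeadEndMemory
{
    std::vector<int> actions;        // action taken at each crossing
    bool handling = false;           // dead-end handling active?
    int dead_end_index = -1;         // index at which we turned around

    int decide(const Openings& o)
    {
        int action;
        if (handling) {
            const int mirrored  = 2 * dead_end_index - static_cast<int>(actions.size());
            const int preferred = (mirrored >= 0) ? actions[mirrored] : 2;   // default: straight
            if (isOpen(preferred, o)) {          // same action as before the dead end
                action   = preferred;
                handling = false;                // handling finished
            } else {
                action = chooseAction(o);        // simplified fallback: normal priority
            }
        } else {
            action = chooseAction(o);
            if (action == 5) {                   // turn around: dead end found
                handling       = true;
                dead_end_index = static_cast<int>(actions.size());
            }
        }
        actions.push_back(action);
        return action;
    }

    static bool isOpen(int action, const Openings& o)
    {
        return (action == 2 && o.straight) || (action == 3 && o.right) || (action == 4 && o.left);
    }
};
</source>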
=== Example ===
[[File:Example_action_group06.png|center|thumb|1000px]]

=== What makes our node so special? ===
What makes our node special is that even when the laser data contains a lot of noise, we are generally still able to determine the right situation. This is because we only publish a situation after detecting the same situation 5 times in a row: noise may cause a wrong situation at a single moment, but it is most likely discarded, because the same wrong situation is rarely seen 5 times in a row. Another special aspect is that the node acts as a kind of navigation while we are driving. Also, because we stop at the center of a crossing and determine the action while standing still, the action determination is more robust; the downside is that we leave the maze less quickly. The last thing that we think makes our node unique is that we store our actions in a memory, which allows us to use dead end handling and prevents us from driving in the wrong direction.
== Drive Node ==
The drive node is designed to operate like the driver of a car. The Decision node provides localized information about the current situation to the Drive node, which drives Pico to the centre of the situation. The Decision node can then make a better choice of the required action. After receiving the action from the Decision node, the Drive node performs the action and then listens to the Decision node for the next situation.
The Drive node listens to the Decision topic, the Laserdata topic and the Odometry topic to perform its various functions. The Drive node consists of many small modular functions that perform specific tasks, such as driving straight with a specified velocity, performing a left turn, and so on. These small functions are called repeatedly and in an ordered fashion to perform the action suggested by the Decision node.
=== Reaching the middle of a corner/crossing ===
The Drive node includes a corner detection algorithm that allows it to drive Pico to the centre of a corner/crossing. However, while testing, we realized that detecting the corners alone was not sufficient to drive to the centre of a corner/crossing, due to the limited range of the laser data. Hence we included data from the odometer to make this action more reliable. Thus, the node uses laser data to look forward and the odometer to look back to reach the middle.
=== The Drive topic ===
The drive node publishes on the cmd_vel topic and the drive topic. The drive topic consists of the following flags:
* action_done
* wall_close
* middle_reached
These flags inform the Decision node of the position of Pico in the maze. The ''wall_close'' flag informs the Decision node that Pico is very close to a wall. The ''middle_reached'' flag informs the Decision node that Pico has reached the middle of the corner/crossing, and that the Decision node should make a decision on the action. The ''action_done'' flag informs the Decision node that Pico has completed the action and that the Decision node should start looking for the next situation again.
=== Safe Drive ===
The Drive node is also responsible for ensuring that Pico does not run into a wall. The node does this by checking all laser distances. If a wall is detected within 30 cm, all motion is stopped and Pico is subsequently aligned to the corridor and driven to the center of the corridor's width. All actions are then resumed.
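A minimal sketch of this check is shown below; only the 30 cm limit comes from our design, the rest (function name, handling of invalid readings) is an assumption.

<source lang="cpp">
// Sketch of the safety check used by the drive node: scan all laser ranges and
// report whether any valid reading is closer than 0.30 m. The drive node then
// stops, re-aligns to the corridor and resumes the action.
#include <sensor_msgs/LaserScan.h>
#include <cmath>

bool wallTooClose(const sensor_msgs::LaserScan& scan, float limit = 0.30f)
{
    for (size_t i = 0; i < scan.ranges.size(); ++i) {
        const float r = scan.ranges[i];
        if (std::isfinite(r) && r > scan.range_min && r < limit)
            return true;                      // a wall is within 30 cm
    }
    return false;
}
</source>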
=== Aligning ===
Our software structure relies heavily on the fact that Pico is able to align itself when it is in a corridor. This allows better situation determination for the Decision node and also allows the drive node to drive Pico to the middle of a situation once the situation is determined. Hence, we decided to implement a two-fold alignment algorithm which aligns Pico parallel to the two walls of a corridor and also drives Pico to the lateral center of the corridor.
To align parallel to the walls, we implemented an algorithm similar to a gradient ascent algorithm. We use the laser data to calculate the gradient around the 0 degrees direction, and then turn Pico along the positive gradient until the laser range at 0 degrees hits a maximum. The algorithm is illustrated in the figure below.
[[File:Drawing1.jpg|center|thumb|500px|x1300px]]
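The sketch below shows one possible control step for this alignment; the angular offset, the tolerance, the turning speed and the sign convention of angular.z are assumptions.

<source lang="cpp">
// Sketch of one control step of the parallel alignment: estimate the gradient of
// the laser range around 0 degrees and rotate along the positive gradient until
// the range straight ahead is (locally) maximal.
#include <geometry_msgs/Twist.h>
#include <vector>
#include <cmath>

geometry_msgs::Twist alignParallelStep(const std::vector<float>& ranges)
{
    const int mid = ranges.size() / 2;            // beam at 0 degrees
    const int d   = ranges.size() / 60;           // small angular offset for the gradient
    const double gradient = ranges[mid + d] - ranges[mid - d];

    geometry_msgs::Twist cmd;                     // all zeros by default
    if (std::fabs(gradient) > 0.02)               // not yet at the maximum
        cmd.angular.z = (gradient > 0.0) ? 0.3 : -0.3;   // turn towards the larger range
    return cmd;
}
</source>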
The algorithm described above turns Pico such that it faces the longest direction when it is in a corridor, and hence this algorithm fails if the error in Pico's direction is more than 90 degrees. However, we do not expect such a large error to occur during the maze competition. Another weakness of this algorithm is that when Pico is close to the end of the corridor, it will make Pico turn towards the corner of the corridor instead of straight ahead, as shown in the figure below:
[[File:Drawing2.jpg|center|thumb|500px|x1300px]]
To correct this error, we implemented another algorithm which aligns Pico parallel to the walls by comparing the distances on the left and on the right of Pico. The algorithm is illustrated in the figure shown below:
[[File:Drawing4.jpg|center|thumb|500px|x1300px]]
To check if Pico is at a tilt compared to the walls of the corridor, we calculate the differences (d1-d2) and (d3-d4); if these differences have opposite signs, i.e. if (d1-d2)*(d3-d4)<0, Pico can be aligned further. To make the algorithm robust, some tolerance values were used for the comparison. While this method of aligning Pico worked in simulation, we decided not to include it in the software for the final maze competition as we did not get a chance to test it on the actual hardware setup.
Aligning Pico to the center of the corridor is much simpler; we simply measure the distance to the left and right of Pico, and move Pico laterally until the distances are within a tolerance range of each other.
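The sketch below summarises both checks; the tolerance values and the gain are assumptions, only the sign test and the left/right comparison come from the description above.

<source lang="cpp">
// Sketch of the two corridor checks described above: the tilt test
// (d1 - d2) * (d3 - d4) < 0 and the lateral centering based on the left and
// right wall distances.
#include <cmath>

bool isTilted(double d1, double d2, double d3, double d4, double tol = 0.02)
{
    // Differences with opposite signs mean Pico is at an angle to the walls.
    return (d1 - d2) * (d3 - d4) < -tol;
}

double lateralCorrection(double dist_left, double dist_right, double tol = 0.05)
{
    // Positive result: move left (towards the larger gap); zero: already centered.
    const double error = dist_left - dist_right;
    return (std::fabs(error) > tol) ? 0.5 * error : 0.0;   // 0.5 is an assumed gain
}
</source>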
Before using these alignment techniques, the drive node first checks whether Pico is in a corridor, as these algorithms will fail if Pico is in, for example, a corner or a crossing.
=== Function Description ===
A brief description of the functions created to perform the modular tasks is given below. The flow of logic in the node will be explained using these function names.
* align_parallel(): This function uses the laser data to align Pico parallel to the two walls of the corridor it is currently in. It uses an adapted form of a gradient ascent algorithm.
* align_center(): Once Pico has been aligned parallel to the walls, this function is used to bring Pico to the center of the corridor.
* driveStraight(): This function drives Pico straight.
* turnAround(): This function turns Pico by 180 degrees. It integrates the pose values of the odometry sensor until the result is pi (see the sketch after this list).
* turnLeft(): This function turns Pico by 90 degrees to the left.
* turnRight(): This function turns Pico by 90 degrees to the right.
* reachMiddle_right(): This function allows Pico to reach the middle of a right-hand corner. Here, a distinction has been made between a right-hand corner and a right-straight T-crossing. This function does not make use of corner detection.
* reachMiddle_right_T(): This function allows Pico to reach the middle of a T-crossing with a right exit. It makes use of corner detection.
* reachMiddle_left(): This function allows Pico to reach the middle of a left-hand corner. Here, a distinction has been made between a left-hand corner and a left-straight T-crossing. This function does not make use of corner detection.
* reachMiddle_left_T(): This function allows Pico to reach the middle of a T-crossing with a left exit. It makes use of corner detection.
* stop(): This function stops Pico.
* detectCornersLeft(): This function detects corners on the left side of Pico, i.e. from 0 degrees to 120 degrees.
* detectCornersRight(): This function detects corners on the right side of Pico, i.e. from 0 degrees to -120 degrees.
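The sketch below illustrates how such an odometry-based turn can be implemented by integrating yaw increments. Only the idea of integrating the pose until pi (or pi/2) is reached comes from our design; the class layout and the use of tf for the quaternion conversion are assumptions.

<source lang="cpp">
// Sketch of the odometry-based turning used by turnAround(), turnLeft() and
// turnRight(): integrate the change in yaw reported by the odometry until the
// requested angle is reached.
#include <nav_msgs/Odometry.h>
#include <tf/transform_datatypes.h>
#include <cmath>

class OdometryTurn
{
public:
    void start(double target_angle) { target_ = target_angle; turned_ = 0.0; started_ = false; }

    // Feed every odometry message; returns true while the turn is still in progress.
    bool update(const nav_msgs::Odometry& odom)
    {
        const double yaw = tf::getYaw(odom.pose.pose.orientation);
        if (started_) {
            double dyaw = yaw - last_yaw_;
            if (dyaw > M_PI)  dyaw -= 2.0 * M_PI;   // unwrap the +/- pi jump
            if (dyaw < -M_PI) dyaw += 2.0 * M_PI;
            turned_ += dyaw;
        }
        last_yaw_ = yaw;
        started_  = true;
        return std::fabs(turned_) < target_;        // e.g. target_ = M_PI for turnAround()
    }

private:
    double target_ = 0.0, turned_ = 0.0, last_yaw_ = 0.0;
    bool started_ = false;
};
</source>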
=== Sequence of Actions ===
Based on the situation and the subsequent action received by the Drive node, the sequence of actions performed by the Drive node is shown in the table below.
{| border="1" style="border-collapse:collapse;" | |||
|+ Situation vs. Action | |||
! | |||
! Default | |||
! Action 1 | |||
! Action 2 | |||
! Action 3 | |||
! Action 4 | |||
! Action 5 | |||
|- | |||
! Situation 1 | |||
| align="center" | Align || align="center" | stop || align="center" | drive Straight || align="center" | Error || align="center" | Error || align="center" | Error | |||
|- | |||
! Situation 2 | |||
| align="center" | - || align="center" | stop || align="center" | Error || align="center" | Error || align="center" | Error || align="center" | turn Around | |||
|- | |||
! Situation 3 | |||
| align="center" | reach middle of corner || align="center" | stop || align="center" | Error || align="center" | turn right and finish corner || align="center" | Error || align="center" | turn Around | |||
|- | |||
! Situation 4 | |||
| align="center" | reach middle of corner || align="center" | stop || align="center" | Error || align="center" | Error || align="center" | turn left and finish corner || align="center" | turn Around | |||
|- | |||
! Situation 5 | |||
| align="center" | reach middle of T junction || align="center" | stop || align="center" | Error || align="center" | turn right and finish corner || align="center" | turn left and finish corner || align="center" | turn Around | |||
|- | |||
! Situation 6 | |||
| align="center" | reach middle of T junction || align="center" | stop || align="center" | drive Straight || align="center" | turn right and finish corner || align="center" | Error || align="center" | turn Around | |||
|- | |||
! Situation 7 | |||
| align="center" | reach middle of T junction || align="center" | stop || align="center" | drive Straight || align="center" | Error || align="center" | turn left and finish corner || align="center" | turn Around | |||
|- | |||
! Situation 8 | |||
| align="center" | reach middle of crossing || align="center" | stop || align="center" | drive Straight || align="center" | turn right and finish corner || align="center" | turn left and finish corner || align="center" | turn Around | |||
|- | |||
|} | |||
=== Flowchart ===
For better illustration, a flowchart depicting the flow of logic for the first 4 situations (corridor, dead-end, right corner and left corner) is shown below:
[[File:Flow.jpg|center|thumb|1000px]]
In an earlier design, the software was integrated in such a manner that the drive node would act as soon as it received a situation from the decision node. This was not a good idea, as the first situation determined was not always the right situation. Hence we changed the nodes such that they communicate with each other once just before Pico reaches a corner. At that instant, the Decision node can identify the situation more accurately.
=== Novelty of the Drive node ===
From what we saw in the maze competition, not all teams aligned Pico to the corridors; most of the teams used algorithms in which Pico was repelled away from the walls. We found aligning Pico to the walls of a corridor a more intuitive way of implementing our drive node. As our software structure is inspired by a car driver-navigation system, we chose to design the drive node such that it imitates a car driver. While driving a car, we instinctively tend to align the car to the road, and this is what we tried to replicate with Pico.
Another part of the drive node that we were particularly happy about (until one week before the maze competition) was the turning. We used the odometer to turn Pico through specific angles, and the results were very reliable. Unfortunately, a couple of days before the maze competition, we found out that the turning algorithm almost never works on the actual hardware. We discussed this with one of the tutors, who suspected a fault in the encoder. However, it was too late to make any changes to the hardware setup. Thus we had to completely change the turning part of the drive node one day before the maze competition.
== Arrow Node ==
The arrow node is made for detecting an arrow and its orientation when asked by the decision node. The information this node publishes then helps the decision node decide which turn to take.
=== Design of Arrow node ===
The main design decision in the arrow node is the use of the Hough transform. This transformation, in combination with a correctly tuned line detector, makes it possible to see arrows at any distance. This benefits the arrow detection because the exact distance to the arrow is unknown and varies between mazes. Also, the custom flood fill that is used gives a much better result than the standard flood fill of the OpenCV library.
[[File:Flowchart_camera_part.jpg]]
[[File:Flowchart goed.jpg|center|thumb|1000px]]
=== Convert image to HSV ===
First the RGB camera image is converted to an HSV image. This makes the following step easier.
=== Threshold image ===
The image is then thresholded so that only the parts that would be red in an RGB picture become white and the rest becomes black (a binary image).
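As an illustration, the sketch below combines this step with the HSV conversion of the previous step using OpenCV's inRange. The exact HSV bounds are assumptions that would have to be tuned on the real camera.

<source lang="cpp">
// Sketch of the red threshold in HSV. Red wraps around hue 0, so two ranges are
// combined; the bounds shown here are placeholder values.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat thresholdRed(const cv::Mat& bgr)
{
    cv::Mat hsv, low, high, binary;
    cv::cvtColor(bgr, hsv, CV_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 100, 100),   cv::Scalar(10, 255, 255),  low);   // red near hue 0
    cv::inRange(hsv, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), high);  // red near hue 180
    cv::bitwise_or(low, high, binary);          // white where the pixel was red, black elsewhere
    return binary;
}
</source>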
=== Custom flood fill ===
A flood fill is used to fill in the small imperfections in the arrow and to make the edges of the arrow smoother. This custom flood fill also removes "alone" pixels that can appear in the image due to camera noise.
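The snippet below is not our custom flood fill; it is shown only as an illustration of the same clean-up idea, removing small isolated blobs with OpenCV's findContours. The minimum area is an assumption.

<source lang="cpp">
// Illustration of a comparable clean-up step: keep only connected components
// larger than min_area, which removes the "alone" noise pixels from the binary image.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

cv::Mat removeSmallBlobs(const cv::Mat& binary, double min_area = 50.0)
{
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat work = binary.clone();                       // findContours modifies its input
    cv::findContours(work, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    cv::Mat cleaned = cv::Mat::zeros(binary.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) >= min_area)
            cv::drawContours(cleaned, contours, static_cast<int>(i), cv::Scalar(255), CV_FILLED);
    return cleaned;
}
</source>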
=== Blur the image ===
The binary image is blurred to make the edges even smoother for the next step.
=== Canny edge detector ===
A Canny edge algorithm is used to detect the outer lines of the arrow image.
=== Detect lines ===
The lines are then detected using a Hough transform algorithm.
=== Calculate line gradients + position ===
The lines are filtered so that only the two leftmost and the two rightmost lines are kept. For these 4 lines the gradient is calculated with the following formula:
<br>
<math>\nabla = \frac{y_{1} - y_{2}}{x_{1} - x_{2}}</math> <br>
By looking at the gradients and the positions of the lines, the direction of the arrow can be determined.
=== Calculate the direction ===
The steps above are run for 2 seconds, while counting how many times an arrow to the left, an arrow to the right, or no arrow was detected. From these counts it can be decided whether there is an arrow and in which direction it points.
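A minimal sketch of this voting step is given below; the frame rate, the minimum number of votes and the detectFrame callback are assumptions.

<source lang="cpp">
// Sketch of the final vote: run the per-frame detection for 2 seconds, count how
// often a left arrow, a right arrow or no arrow was seen, and report the majority.
#include <ros/ros.h>

enum ArrowResult { NO_ARROW = 0, ARROW_LEFT, ARROW_RIGHT };

ArrowResult voteOnArrow(ArrowResult (*detectFrame)(), double duration = 2.0, int min_votes = 10)
{
    int votes[3] = {0, 0, 0};
    ros::Time end = ros::Time::now() + ros::Duration(duration);
    ros::Rate rate(15);                                  // roughly the camera frame rate
    while (ros::ok() && ros::Time::now() < end) {
        ++votes[detectFrame()];                          // one vote per processed frame
        rate.sleep();
    }
    if (votes[ARROW_LEFT]  > votes[ARROW_RIGHT] && votes[ARROW_LEFT]  >= min_votes) return ARROW_LEFT;
    if (votes[ARROW_RIGHT] > votes[ARROW_LEFT]  && votes[ARROW_RIGHT] >= min_votes) return ARROW_RIGHT;
    return NO_ARROW;
}
</source>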
=== Video example ===
This video shows that a left arrow is detected. If you watch closely you can see that it prints on the screen which arrow is detected.
[http://youtu.be/XxCcQkKoJ9M Video of arrow detection]
== Evaluation of the maze competition ==
Unfortunately we were not able to finish the maze competition. In fact, the maze competition was a disaster. When we saw the maze we really thought that Pico would be able to solve it with our software, because it had been able to solve similar mazes in the simulation. However, we knew that the turning might give us some problems. The day before the maze competition we had our last test hour with Pico. We wanted to test some different situations and see if Pico would be able to solve them. Pico drove to the middle of the crossing and started to turn in the right direction, but for some reason it kept overturning: instead of 90 degrees it usually turned about 135 degrees. At this point we used the odometry to turn 90 degrees; we integrated the twist and compared the integrated value with pi/2, which corresponds to 90 degrees. This had worked at the corridor competition and in previous test hours, and in the simulation it had always worked perfectly. We asked one of the tutors to take a look at our code and he concluded that our code should work. He thought that there had to be some kind of problem with the communication with Pico, but he was not able to solve it within our scheduled test hour.

After the test hour we decided to turn 90 degrees by looking at the laser data instead of using the odometry. We were able to make this work in the simulation: Pico always rotated between 85 and 90 degrees, which was good enough because we align after each corner. To make this more accurate and more robust we decreased the velocity during the turn. Because we had to change this after our last test hour, we were not able to test this way of turning in the real world. What happened at the maze competition is that Pico perfectly detected the first crossing, drove to the middle of the crossing and started turning to the left, because the opening on the right of Pico was considered a dead end. But instead of turning smoothly, Pico really struggled with turning, and after more than a minute it had only turned 10 degrees. We think this happened because we decreased the turning velocity; something similar had happened during a test before the corridor competition, when the velocity was so low that Pico did not turn at all.

We think that most of the work went well during this project, but we could have improved it a lot by testing more. It sometimes happened that we ran our software during a test hour and Pico did not do what we expected, while in the simulation it did. When this happened we spent the entire hour trying to understand why it did not work and trying to fix it. Most of the time we did not even record a bag file, and bag files are really important, because they let you test your code later with data from the real world. It very often happened that the software worked in the simulation but was not robust in the real world. If we had used bag files to test this, we would have known it before our test hour and could have used our test hours more efficiently.
To prove our code works in simulation, we made a simulation of the maze used in the competition:
[http://youtu.be/bANSXvs3xrQ Simulation of maze]
== Time survey ==
{| {{table}}
| align="center" style="background:#f0f0f0;"|'''Week'''
| align="center" style="background:#f0f0f0;"|'''Name'''
| align="center" style="background:#f0f0f0;"|'''Lectures'''
| align="center" style="background:#f0f0f0;"|'''Groupmeetings'''
| align="center" style="background:#f0f0f0;"|'''Mastering ROS and C++'''
| align="center" style="background:#f0f0f0;"|'''Preparing midterm assignment'''
| align="center" style="background:#f0f0f0;"|'''Preparing final assignment'''
| align="center" style="background:#f0f0f0;"|'''Wiki progress report'''
| align="center" style="background:#f0f0f0;"|'''Other activities'''
| align="center" style="background:#f0f0f0;"|
| align="center" style="background:#f0f0f0;"|'''Total'''
|-
| Week 1||Rik||2||1||3||||||||||||6
|-
| ||Dhruv||2||1||1||||||||||||4
|-
| ||Hans||2||1||||||||||||||3
|-
| ||Wout||1||1||||||||||||||2
|-
| Week 2||Rik||2||1||16||||||||||||19
|-
| ||Dhruv||2||1||6||||||||||||9
|-
| ||Hans||2||1||3||||||||||||6
|-
| ||Wout||2||1||4||||||||||||7
|-
| Week 3||Rik||||1||||2||||||3||||6
|-
| ||Dhruv||2||1||3||||||||||||6
|-
| ||Hans||2||1||6||||||||||||9
|-
| ||Wout||2||1||3||6||||||||||12
|-
| Week 4||Rik||2||4||1||||3||1,5||||||11,5
|-
| ||Dhruv||2||4||2||17,5||||0,5||||||26
|-
| ||Hans||2||4||||15||||||||||21
|-
| ||Wout||2||4||0||20||||||||||26
|-
| Week 5||Rik||2||4||||||7||||1||||14
|-
| ||Dhruv||3||4||||||3||||||||10
|-
| ||Hans||2||4||||||8||||||||14
|-
| ||Wout||2||4||0||0||0||6||||||12
|-
| Week 6||Rik||2||2||||||8||||||||12
|-
| ||Dhruv||2||2||||||7||||||||11
|-
| ||Hans||2||2||||||||||||||4
|-
| ||Wout||2||2||0||0||0||3||||||7
|-
| Week 7||Rik||2||2||||||6||||||||10
|-
| ||Dhruv||2||2||||||5||1||||||10
|-
| ||Hans||2||2||||||15||||||||19
|-
| ||Wout||2||2||0||0||0||3||||||7
|-
| Week 8||Rik||||2||||||10||2||||||14
|-
| ||Dhruv||2||2||||||6||||||||10
|-
| ||Hans||||2||||||5||||||||7
|-
| ||Wout||2||2||0||0||0||6||||||10
|-
| Week 9||Rik||2||2||||||10||||||||14
|-
| ||Dhruv||2||2||||||6||||||||10
|-
| ||Hans||2||2||||||5||||||||9
|-
| ||Wout||||2||0||0||5||||||||7
|-
| Week 10||Rik||||1||||||16||||||||17
|-
| ||Dhruv||2||1||||||10||2||||||15
|-
| ||Hans||||1||||||20||3||||||24
|-
| ||Wout||||1||0||0||15||||||||16
|-
| '''Total'''||||66||80||48||60,5||170||28||4||||456,5
|}
Latest revision as of 20:28, 2 July 2014
Wout Laarakkers 0828580
Rik Boonen 0805544
Dhruv Khandelwal 0868893
Suraj Prakash 0870060
Hans Reijnders 0806260
Updates
- Include Bool action_done and Bool wall_close in topic drive for handshake (23 May)
Planning
Week 1 (2014-04-25 - 2014-05-02)
- Installing Ubuntu 12.04
- Installing ROS
- Following tutorials on C++ and ROS.
- Setup SVN
Week 2 (2014-05-03 - 2014-05-09)
- Finishing tutorials
- Interpret laser sensor
- Positioning of PICO
- having ore first meeting
Week 3 (2014-05-12 - 2014-05-16)
- Programming corridor competition
- Corridor competition
Week 4 (2014-05-19 - 2014-05-23)
- Creating basic structure for programming in Ros
- Planning
- Dividing tasks
* Drive node: Dhruv Khandelwal * Decision node: Hans Reijnders, Rik Boonen * Arrow node: Wout Laarakkers
- Programming individual parts
Week 5 (2014-05-26 - 2014-05-30)
- Programming individual parts
- Testing parts
- Integrating parts
Week 6 (2014-06-02 - 2014-06-06)
- Programming individual parts
- Testing parts
- Integrating parts
- Deadline for the nodes
Changes in group
Unfortunately Suraj Prakash has decided to quit the course, because he doesn't has enough time.
Corridor competition Software design
Global Software design
Design Goals
for ore global software we had 3 design choices
1.Split in to equal parts
2.Make logical sense
3.Individual testable
To accommodated this design we came up with a global software design that exist out of 3 nodes.A drive node that does all the driving like a car driver would do. Derision node, This would make all the decisions and would tell the drive node what action to execute. Arrow , would tell the decision node if it sees a arrow so it can take it in consideration for the next decision. This design is based on a real life situation where a driver(drive node) listens to his car navigation (derision-node). witch is constantly updated with the latest traffic information (arrow-node).
This software makes logical sense and is very good individual testebel.
Drive-node: can use numpad to input what situation the dissension node would send to him
Decision-node: can drive around whit teleop to see if it detects the correct situations and makes the correct decisions
Arrow-node: bag file's
The strong part of this design is the individually testebel parts and the logic structure of the software.
Structure
The first draft of the structure of nodes and topics is shown below.
The integer 'situation' and 'action' have certain values corresponding to different cases. These cases are defined as shown below.
Situation | |
1 | Detected corridor |
2 | Detected dead-end |
3 | Detected corner right |
4 | Detected corner left |
5 | Detected T-crossing (right-left) |
6 | Detected T-crossing (right-straight) |
7 | Detected T-crossing (left-straight) |
8 | Detected crossing |
9 | Unidentified situation |
.
action | |
1 | Stop |
2 | Straight |
3 | Right |
4 | Left |
5 | Turn around |
6 | Follow left wall |
7 | Follow right wall |
Taking corners
Decision node
The decision node is designed to act like a navigation tool. It tells the drive node where to go, it has to publish which situation pico is approaching and which way pico has to go. So, for example: T-junction up ahead and go right. In order to do this the decision node first has to detect which situation pico is approaching. Then it has to decide which way to go. We will first look at the Situation determination and then look at the Action determination (telling which way to go).
Situation determination
To detect a situation 2 methods have been used. The first method focuses on sending the right situation and not making errors, but is slower in giving the situation. The second method will give the situations earlier, but will make some more errors. We will first look at method 1, its algorithm, simulation/test results and conclusion. Then we discus method 2 also including its algorithm and simulation/test results and give a conclusion.
Method 1: Situation determination
In order to determine which situation we are approaching, we want to know 3 things: is there a corner to the left, is there a corner to the right and can we go straight ahead?
We determine these 3 things by using the laser data. Unfortunately we will not always detect the corners and dead end at the same time, so when we detect one of these 3 we will drive to a fixed distance from the situation. In this time we drive we will determine if we also find the other two characteristics of the situation. Using the information we have we can determine the situation and publish it on the topic.
The flowchart shown below shows the steps in this algorithm. Also there is an example situation to clearify the algorithm better.
Flowchart
Example
In the Example below we see:
In picture 1: Pico approaches an unidentified situation.
In picture 2: Pico has detected a right corner, this could mean that the next situation is a corner to the right, but pico is not sure if there is a corner to the left and if we can go straight ahead.
In picture 3: Pico has set a threshold, meaning that when reaching the threshold pico will give the situation it has detected. In the time driving towards the threshold pico will look if there is a corner to the left or he can go straight ahead.
In picture 4: Pico has reached the threshold and has seen that he can go straight and to the right. So the node will send situation 6.
Simulation Results
To test this algorithm in simulation we used a few methods.
First one was to put in pico in a maze and let it detect a situation and not use the drive node, because the drive node was not yet ready. This worked very well. Although there were some minor flaws.
One flaw is that a situation like a T-junction where pico can go straight and left or right and the straight had dead-end, then this algorithm detects a corner. During first simulations this did not look like an problem, but later it showed the drive node could not handle with this.
The second method was to combine the drive node and the decision node. Now we saw that the drive node could not handle the flaw as stated above. Also the situation often came to late to the drive node, and the drive node could not make sure to see the turn and act on it.
Test Results
Testing the decision node with the real pico we noticed a few things:
1. The distance to a corner is not reliable, so the threshold is not used correctly
2. If the situation and threshold is used correctly then the situation is to late and the drive node cannot make sure a good turn.
Conclusion
This algorithm works well without the drive node. There are some flaws like the unreliable distance and the dead-end detected at a T-crossing with dead-end, but these problems can be solved without changing the overall algorithm to much. The real problem is that of giving the situation late. This is a main characteristic of this algorithm, giving the situation later but not making mistakes.
To solve this problem either the drive node or the decision node had to change. We chose to change the decision node and create a new algorithm for the situation determination. This new algorithm is described in Method 2.
Method 2: Situation determination
This second method has been made, because in method 1 the situation was send to late. Where in method 1 the focus was on not making errors, here the focus lies more on sending the next situation early. This works better when combining this node with the drive node, because when a situation is detected (perhaps only a right corner of a T-junction) then the drive node can already start looking for a right corner. Maybe this can be seen as the driver in the car that knows where to look for and then finds it faster.
Situation counter
When sending every detected situation, false or right, we noticed that the situation is mostly right and sometimes wrong. Inorder to get rid of most of these falsely determined situation we added a counter. This counter will only publish the situation if there is 5 times the same situation detected.
Flowchart
T-junction straight and right or left
Just like in method one there were difficulties with a T-junction straight and right or left where the straight has a dead-end very close.
To fix this we added a part where pico looks in front of him and tries to detect the edge of the corridor straight ahead. This can be seen in the picture below.
In a corner we will not be able to find an edge and we can say there is no corridor. If we find an edge we can say there is a corridor, and with our dead-end detection we already know it is a dead end.
Simulation results and video
In simulation this method works well. Definitely with the counter, we see the right situation almost everytime when we are close to the situation.
The most difficult situations are T-junction (straight and right or left). These situations sometimes get confused with a single corner if we are far from the situation. When we are close to the situation we will see it correctly and have no problem with it.
This link will lead to a video of a simulation:
Test results
When testing we did not have any problem with this method and node. The corner detection did not seem to be detecting corner as fast as in simulation, but this did not lead to anyproblem.
We were able to detect all situations. However, because of problems with turning we could not test whole mazes.
Conclusion
We can conclude that this method works pretty well. Especially together with the drive node. When designing method 1 we should have communicated better with our collegue of the drive node.
We can detect situations fairly robust and send them to the drive node in time. Method 2 is quite a bit simpler than method 1 which is a good thing for it is easier to use. Still there are a lot of parameters. These all have to be tuned, and when not tuned correctly it will not work properly.
Action determination
When we have published a situation, then the drive node makes sure we drive to the center of the situation. If we are in the center, we want to make a decision which way to go. The decision that we take is based on the data that the Arrow topic sends us, the laserdata and the previous actions that we have made. In the flowchart below you can see how which of these data has the highest priority in determining an action. If the Arrow node tells us that he has detected an arrow, we always listen to the arrow node. If we haven't received a message from the arrow node we base our action on laserdata and the previous actions we have made. The previous actions are only needed if we have driven into a dead end and we had to turn around. If this has been the dead end handling becomes true to make sure that we don't drive back in the direction where we came from. This will be explained later.
Flowchart
Action based only on laserdata
If the arrow node hasn't detected an arrow and we aren't busy with a dead end handling we use a very simple algorithm to decide the action. At the middle of the crossing we first look to the right and measure the average distance over some datapoints. If this distance is larger than a certain threshold we consider the right as open and me decide that pico has to go to the right. If the measured distance is smaller than the threshold we consider the right as a dead end. If this happens we repeat the same algorithm at the left. If the distance measured at the left of pico is larger than the threshold we decide pico has to move to the left, if this isn't the case we repeat this algorithm for the front of pico. If the distance in front of pico is larger than the threshold we decide pico has to move forwards. If the distance in front of pico also isn't larger than the threshold we consider it as a dead end and tell pico to turn around.
Dead end handling
Here we will discuss our dead end handling, the function of the dead end handling is to make that after a dead end we continu in the direction where we have to go and that pico doesn't move to where he came from and leaves the maze at it's entrance. In the figure below you can see what would happen if we don't use dead end handling. Because we use some kind of right wall follower moving to the right has the highest priority. The second highest priority is moving to the left and the last is moving forward.
With the help of the image below we will try to explain how our dead end handling works. First pico just moves likes he usually does, with moving to the right as highest priority, followed by moving to the left and as last moving straight. This is showed in the left part of the image. The green arrow shows in which way pico will move. The triangle is pico and inside the triangle you can see numbers, these numbers are the index of the action. At the crossing we store the action that we have decided to make in that index. As you can see the third action is at a dead end so we turn around. If this happens we set the boolean dead end detection high. Now we continue at the right part of the image. As soon as we again get at a crossing we have to make the 4th action. What our dead end handling now does is changing the priority of the actions. Because the action before the dead end (action with index 2) was moving to the left, we set this now as the highest priority for the action after the dead end (action with index 4). If this isn't possible, because there is no opening to the left we want to go straight, if there is no opening we move to the right. If we are able to do the same action as before the dead end we have finished the dead end handling and make this boolean false. In this case the index for the dead end is equal to 3, so we want action 4 to be the same as action 2. Action 2 was turning to the left. However action 4 can't be turning to the left, because there is no opening. So our dead end handling isn't finished. At the next crossing we have to determine action 5. This is the second action after we've encountered a dead end, so we have want to do the same as 2 actions before the dead end, which is action 1. This was turning to the left, because there is now a opening at the left pico will turn to the left and the dead end handling is finished.
This video of a simulation shows that the dead-end handling works:
Example
What makes our node so special?
What makes our node so special is even when the laserdata has a lot of noise in it we in general are able to determine the right situation. This is the case because we only publish a situation if we detect the same situation 5 times in a row, so if there is some noise we might see the wrong situation at that moment, but most likely it's discarded because usually we don't see the same wrong situation 5 times in a row due to the noise. What also makes our node special is that it's a sort of navigation while we are driving. Also because we stop at the center of a crossing and determine the action we want to make while we are standing still we have a more robust action determination, the negative side of this is that we can get less fast out of the maze. The last thing that we think that makes our node unique is that we store our actions in a memory and this allows us to use a dead end handling, which prevents us from driving in the wrong direction.
Drive Node
The drive node is designed to operate like the driver of a car. The Decision node provides localalized information about the current situation to the Drive node, which drives Pico to the centre of the situation. The Decision node can then make a better choice of the action required. After recieving the action from the Decision node, the Drive node performs the action, and then listens to the Decision node for the next situation.
The Drive node listens to the Decision topic, the Laserdata topic and the Odometry topic to perform it's various functions. The Drive node consists of many small modular functions that perform specific tasks, such as driving straight with a specified velocity, performing a left turn, and so on. These small fuctions are called repeatedly and in an ordered fashion to perform the action suggested by the Decision node.
Reaching the middle of a corner/crossing
The Drive node includes a corner detection algorithm, that allows it to drive Pico to the centre of a corner/crossing. However, while testing, we realized that detecting the corners alone was not sufficient to drive to the centre of a corner/crossing, due to the limited range of the laser data. Hence we included data from the Odometer to make this action more reliable. Thus, the node uses laser data to look forward and odometer to look back to reach the middle.
The Drive topic
The drive node publishes on the cmd_vel topic and the drive topic. The drive topic consists of the following flags:
- action_done
- wall_close
- middle_reached
These flags inform the Decision node the position of Pico in the maze. The wall_close flag informs the Decision node that Pico is very close to a wall. The middle_reached flag informs the Decision node that Pico has reached the middle of the conren/crossing, and that the Decision node should make a decision on the action. The action_done flag informs the Decision node that Pico has completed the action and that the Decision node should start looking for the current situations again.
Safe Drive
The Drive node is also responsible to ensure that Pico does not run into a wall. The node does this by checking all the laser data distances. If a wall is detected withing 30 cms, all motion is stopped and Pico is subsequently aligned to the corridor and driven to the center of the corridor's width. All actions are then resumed.
Aligning
Our software structure relies heavily on the fact that Pico will be able to align itself when it is in a coridor. This allows better sitaution determination for the Descision node and also allows the drive node to drive Pico to the middle of a situation once the situation is determined. Hence, we decided to implement a two fold alignment algorithm which aligns Pico parallel to the two walls of a corridor and also drives Pico to the lateral center of the corridor.
To align parallel to the walls, we implemented an algorithm similar to a gradient ascent algorithm. We use the laserdata values to calculate the gradient around the 0 degrees direction, and then turn Pico along the positive gradient until the laser range at 0 degrees hits a maxima. The algoithm is illustrated in the figure below.
The algorithm described above turns Pico such that it faces the longest direction when it is in a corridor, and hence this algorithms fails if the error in Pico's direction is more than 90 degrees. However, we do not expect such a large error to occur during the maze competition. Another weakness of this algorithm is that when Pico is close to the end of the corridor, this algorithm will make Pico turn towards the corner of the corridor and not straight ahead, as shown in the figure below:
To correct this error, we implemented another algorithm which aligns Pico parallel to the walls by comparing the distances on the left and on the right of Pico. The algorithm is illustrated in the figure shown below:
To check if Pico is at a tilt compared to the walls of the corridor, we calculate the distances (d1-d2) and (d3-d4), and if these distances are of opposite signs, i.e. if (d1-d2)*(d3-d4)<0, Pico can be aligned further. To make the algorithm robust, some tolerance values were used for the comparison. While this method of aligning Pico worked in simulation, we decided not to include it in the software for the final maze competetion as we did not get a chance to test this on the actual hardware setup.
Aligning Pico to the center of the corridor is much simpler; we simply measure the distance to the left and right of Pico, and move Pico laterally until the distances are within a tolerance range of each other.
Before using these alignment techniques, the drive node first checks whether Pico is in a corridor as these algorithms will fail if Pico is in, for example, a corner or a crossing.
Function Description
A brief description of the functions created to perform the modular tasks is given below. The flow of logic in the node will be explained using these function names.
* align_parallel(): This function uses the laser data to align Pico parallel to the two walls of the corridor it is currently in. This function uses an adapted form of a gradient ascent algorithm.
* align_center(): Once Pico has been aligned parallel to the walls, this function is used to bring Pico to the centre of the corridor.
* driveStraight(): This function drives Pico straight ahead.
* turnAround(): This function turns Pico by 180 degrees. It integrates the pose values of the odometry sensor until the result is pi (a sketch of this idea is shown after this list).
* turnLeft(): This function turns Pico by 90 degrees to the left.
* turnRight(): This function turns Pico by 90 degrees to the right.
* reachMiddle_right(): This function allows Pico to reach the middle of a right-hand corner. Here, a distinction has been made between a right-hand corner and a right-straight T-crossing. This function does not make use of corner detection.
* reachMiddle_right_T(): This function allows Pico to reach the middle of a T-crossing with a right exit. It makes use of corner detection.
* reachMiddle_left(): This function allows Pico to reach the middle of a left-hand corner. Here, a distinction has been made between a left-hand corner and a left-straight T-crossing. This function does not make use of corner detection.
* reachMiddle_left_T(): This function allows Pico to reach the middle of a T-crossing with a left exit. It makes use of corner detection.
* stop(): This function stops Pico.
* detectCornersLeft(): This function detects corners on the left side of Pico, i.e. from 0 degrees to 120 degrees.
* detectCornersRight(): This function detects corners on the right side of Pico, i.e. from 0 degrees to -120 degrees.
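As referenced in the turnAround() item above, a minimal sketch of turning 180 degrees on odometry feedback is given below; topic names and the turn speed are assumptions.
<pre>
// Minimal sketch of turnAround(): accumulate the change in yaw reported by
// the odometry until pi radians (180 degrees) have been covered, then stop.
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/Twist.h>
#include <tf/transform_datatypes.h>
#include <cmath>

double turned   = 0.0;    // angle covered since the turn started [rad]
double last_yaw = 0.0;
bool   have_yaw = false;

void odomCallback(const nav_msgs::Odometry::ConstPtr& odom)
{
    double yaw = tf::getYaw(odom->pose.pose.orientation);
    if (have_yaw)
    {
        double d = yaw - last_yaw;
        // unwrap the jump at +/- pi so each increment stays small
        if (d >  M_PI) d -= 2.0 * M_PI;
        if (d < -M_PI) d += 2.0 * M_PI;
        turned += std::fabs(d);
    }
    last_yaw = yaw;
    have_yaw = true;
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "turn_around_example");
    ros::NodeHandle n;
    ros::Publisher cmd_pub = n.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    ros::Subscriber sub    = n.subscribe("odom", 1, odomCallback);

    ros::Rate rate(20);
    while (ros::ok() && turned < M_PI)          // keep turning until 180 degrees
    {
        geometry_msgs::Twist cmd;
        cmd.angular.z = 0.5;                    // assumed turn speed [rad/s]
        cmd_pub.publish(cmd);
        ros::spinOnce();
        rate.sleep();
    }
    cmd_pub.publish(geometry_msgs::Twist());    // stop when done
    return 0;
}
</pre>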
=== Sequence of Actions ===
Based on the situation and the subsequent action received by the Drive node, the sequence of actions performed by the Drive node is shown in the table below.
{| class="wikitable"
!
! Default
! Action 1
! Action 2
! Action 3
! Action 4
! Action 5
|-
| Situation 1 || Align || stop || drive straight || Error || Error || Error
|-
| Situation 2 || - || stop || Error || Error || Error || turn around
|-
| Situation 3 || reach middle of corner || stop || Error || turn right and finish corner || Error || turn around
|-
| Situation 4 || reach middle of corner || stop || Error || Error || turn left and finish corner || turn around
|-
| Situation 5 || reach middle of T-junction || stop || Error || turn right and finish corner || turn left and finish corner || turn around
|-
| Situation 6 || reach middle of T-junction || stop || drive straight || turn right and finish corner || Error || turn around
|-
| Situation 7 || reach middle of T-junction || stop || drive straight || Error || turn left and finish corner || turn around
|-
| Situation 8 || reach middle of crossing || stop || drive straight || turn right and finish corner || turn left and finish corner || turn around
|}
=== Flowchart ===
For better illustration, a flowchart depicting the flow of logic for the first 4 situations (corridor, dead end, right corner and left corner) is shown below:
In an earlier design, the software was integrated in such a manner that the drive node would act as soon as it received a situation from the decision node. This was not a good idea, as the first situation determined was not always the right one. Hence we changed the nodes such that they communicate with each other once just before Pico reaches a corner. At that instant, the Decision node can identify the situation more accurately.
=== Novelty of the Drive node ===
From what we saw at the maze competition, not all teams aligned Pico to the corridors; most of the teams used algorithms in which Pico was repelled away from the walls. We found aligning Pico to the walls of a corridor a more intuitive way of implementing our drive node. As our software structure is inspired by a car driver-navigation system, we chose to design the drive node such that it imitates a car driver. While driving a car, we instinctively tend to align the car to the road, and this is what we tried to replicate with Pico.
Another part of the drive node that we were particularly happy about (until one week before the maze competition) was the turning. We used the odometry to turn Pico through specific angles, and the results were very reliable. Unfortunately, a couple of days before the maze competition, we found out that the turning algorithm almost never worked on the actual hardware. We discussed this with one of the tutors, who suspected a fault in the encoder. However, it was too late to make any changes to the hardware setup. Thus we had to completely change the turning part of the drive node one day before the maze competition.
== Arrow Node ==
The arrow node detects an arrow and its orientation when asked to do so by the decision node. The information this node publishes then helps the decision node decide which turn to take.
=== Design of the Arrow node ===
The main design decision in the arrow node is the use of the Hough transform. This transformation, combined with a correctly tuned line detector, allows us to detect arrows at any distance. This benefits the arrow detection because the exact distance to the arrow is unknown and varies between mazes. In addition, the custom flood fill that is used gives much better results than the standard flood fill of the OpenCV library.
'''Convert image to HSV'''<br>
First, the RGB camera image is converted to an HSV image. This makes the thresholding in the next step easier.
'''Threshold image'''<br>
The image is then thresholded so that only the parts that would be red in an RGB picture become white and the rest becomes black (a binary image).
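The snippet below sketches these first two steps with OpenCV; the exact hue and saturation bounds are assumptions that would have to be tuned for the real camera.
<pre>
// Minimal OpenCV sketch of the HSV conversion and the red threshold that
// turns the camera image into a binary image of the arrow.
#include <opencv2/opencv.hpp>

cv::Mat thresholdRed(const cv::Mat& bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // red wraps around hue 0 in HSV, so two hue ranges are combined
    cv::Mat low, high, binary;
    cv::inRange(hsv, cv::Scalar(0, 100, 100),   cv::Scalar(10, 255, 255),  low);
    cv::inRange(hsv, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), high);
    cv::bitwise_or(low, high, binary);   // white where the image was red
    return binary;
}
</pre>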
'''Custom flood fill'''<br>
A flood fill is used to fill in the small imperfections in the arrow and to make the edges of the arrow smoother. This custom flood fill also removes isolated pixels caused by camera noise.
'''Blur the image'''<br>
The binary image is blurred to make the edges even smoother for the next step.
'''Canny edge detector'''<br>
A Canny edge detector is used to find the outer lines of the arrow in the image.
'''Detect lines'''<br>
The lines are then detected using a Hough transform algorithm.
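The sketch below combines the blur, Canny and Hough steps with OpenCV; all threshold values are assumptions that would need tuning.
<pre>
// Minimal OpenCV sketch: smooth the binary arrow image, detect its edges
// with Canny and extract line segments with a probabilistic Hough transform.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec4i> detectArrowLines(const cv::Mat& binary)
{
    cv::Mat blurred, edges;
    cv::GaussianBlur(binary, blurred, cv::Size(5, 5), 1.5);   // smooth the edges
    cv::Canny(blurred, edges, 50, 150);                       // outer contour lines

    std::vector<cv::Vec4i> lines;   // each element is (x1, y1, x2, y2)
    cv::HoughLinesP(edges, lines,
                    1,              // distance resolution [pixels]
                    CV_PI / 180.0,  // angle resolution [rad]
                    30,             // accumulator threshold (votes)
                    20,             // minimum line length [pixels]
                    10);            // maximum gap within one line [pixels]
    return lines;
}
</pre>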
'''Calculate line gradients and positions'''<br>
The lines are filtered so that only the two left-most and the two right-most lines are kept. For these four lines the gradient is calculated with the following formula:
[math]\displaystyle{ \nabla = \frac{y_{1} - y_{2}}{x_{1} - x_{2}} }[/math]
By looking at the gradients and the positions of these lines, the direction of the arrow can be determined.
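The gradient from the formula above can be computed per detected line segment as sketched below; how the four kept lines and their positions are combined into a left/right decision is not shown here.
<pre>
// Minimal sketch: gradient of one Hough line segment, following the formula
// (y1 - y2) / (x1 - x2) given above.
#include <opencv2/opencv.hpp>
#include <limits>

double lineGradient(const cv::Vec4i& l)
{
    double dx = static_cast<double>(l[0] - l[2]);   // x1 - x2
    double dy = static_cast<double>(l[1] - l[3]);   // y1 - y2
    if (dx == 0.0)
        return std::numeric_limits<double>::infinity();   // vertical line
    return dy / dx;
}
</pre>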
'''Calculate the direction'''<br>
By running the steps above for 2 seconds and counting how often a left arrow, a right arrow or no arrow was detected, it can be decided whether an arrow is present and in which direction it points.
=== Video example ===
This video shows that a left arrow is detected. If you watch closely, you can see that the detected arrow direction is printed on the screen. Video of arrow detection
== Evaluation of the maze competition ==
Unfortunately we were not able to finish the maze competition; in fact, the maze competition was a disaster. When we saw the maze we really thought that Pico would be able to solve it with our software, because he had been able to solve similar mazes in the simulation. However, we knew that the turning might give us some problems. The day before the maze competition we had our last test hour with Pico. We wanted to test some different situations and see if Pico would be able to solve them. Pico drove to the middle of the crossing and started to turn in the right direction, but for some reason he kept overturning: instead of 90 degrees he usually turned about 135 degrees. At this point we used the odometry to turn 90 degrees; we integrated the twist and compared the integrated value with pi/2, which is equal to 90 degrees. This had worked at the corridor competition and in previous test hours, and in the simulation it had always worked perfectly. We asked one of the tutors to take a look at our code and he concluded that our code should work. He thought that there had to be some kind of problem with the communication with Pico, but he was not able to solve it in our scheduled test hour.
After the test hour we decided to turn 90 degrees by looking at the laser data instead of using the odometry. We were able to make this work in the simulation: Pico always rotated between 85 and 90 degrees, which was good enough because we align after each corner. However, to make this more accurate and more robust we decreased the velocity during the turn. Because we had to change this after our last test hour, we were not able to test this way of turning in the real world. What happened at the maze competition is that Pico perfectly detected the first crossing, drove to the middle of the crossing and started turning to the left, because the opening on the right of Pico was considered a dead end. But instead of turning smoothly, Pico really struggled with the turn, and after more than a minute he had only turned about 10 degrees. We think this happened because we decreased the turning velocity; during a test before the corridor competition the same thing had happened to us, when the velocity was so low that Pico did not turn at all.
We think that most of the work went well during this project, but we could have improved it a lot by testing more. It sometimes happened that we ran our software during a test hour and Pico did not do what we expected, while in the simulation he did. When this happened, we were busy for the entire hour trying to understand why it did not work and trying to fix it. Most of the time we did not even record a bag file, and bag files are really important, because they allow you to test your code later with data from the real world. It very often happened that the software worked in the simulation but was not robust in the real world. If we had used bag files to test this, we would have known it before our test hour and could have used our test hours more efficiently.
To prove our code works in simulation, we made a simulation of the maze used in the competition: Simulation of maze
== Time survey ==
Week | Name | Lectures | Group meetings | Mastering ROS and C++ | Preparing midterm assignment | Preparing final assignment | Wiki progress report | Other activities | Total |
Week 1 | Rik | 2 | 1 | 3 | 6 | |||||
Dhruv | 2 | 1 | 1 | 4 | ||||||
Hans | 2 | 1 | 3 | |||||||
Wout | 1 | 1 | 2 | |||||||
0 | ||||||||||
Week 2 | Rik | 2 | 1 | 16 | 19 | |||||
Dhruv | 2 | 1 | 6 | 9 | ||||||
Hans | 2 | 1 | 3 | 6 | ||||||
Wout | 2 | 1 | 4 | 7 | ||||||
0 | ||||||||||
Week 3 | Rik | 1 | 2 | 3 | 6 | |||||
Dhruv | 2 | 1 | 3 | 6 | ||||||
Hans | 2 | 1 | 6 | 9 | ||||||
Wout | 2 | 1 | 3 | 6 | 12 | |||||
0 | ||||||||||
Week 4 | Rik | 2 | 4 | 1 | 3 | 1,5 | 11,5 | |||
Dhruv | 2 | 4 | 2 | 17,5 | 0,5 | 26 | ||||
Hans | 2 | 4 | 15 | 21 | ||||||
Wout | 2 | 4 | 0 | 20 | 26 | |||||
0 | ||||||||||
Week 5 | Rik | 2 | 4 | 7 | 1 | 14 | ||||
Dhruv | 3 | 4 | 3 | 10 | ||||||
Hans | 2 | 4 | 8 | 14 | ||||||
Wout | 2 | 4 | 0 | 0 | 0 | 6 | 12 | |||
0 | ||||||||||
Week 6 | Rik | 2 | 2 | 8 | 12 | |||||
Dhruv | 2 | 2 | 7 | 11 | ||||||
Hans | 2 | 2 | 4 | |||||||
Wout | 2 | 2 | 0 | 0 | 0 | 3 | 7 | |||
0 | ||||||||||
Week 7 | Rik | 2 | 2 | 6 | 10 | |||||
Dhruv | 2 | 2 | 5 | 1 | 10 | |||||
Hans | 2 | 2 | 15 | 19 | ||||||
Wout | 2 | 2 | 0 | 0 | 0 | 3 | 7 | |||
0 | ||||||||||
Week 8 | Rik | 2 | 10 | 2 | 14 | |||||
Dhruv | 2 | 2 | 6 | 10 | ||||||
Hans | 2 | 5 | 7 | |||||||
Wout | 2 | 2 | 0 | 0 | 0 | 6 | 10 | |||
0 | ||||||||||
Week 9 | Rik | 2 | 2 | 10 | 14 | |||||
Dhruv | 2 | 2 | 6 | 10 | ||||||
Hans | 2 | 2 | 5 | 9 | ||||||
Wout | 2 | 0 | 0 | 5 | 7 | |||||
0 | ||||||||||
Week 10 | Rik | 1 | 16 | 17 | ||||||
Dhruv | 2 | 1 | 10 | 2 | 15 | |||||
Hans | 1 | 20 | 3 | 24 | ||||||
Wout | 1 | 0 | 0 | 15 | 16 | |||||
0 | ||||||||||
Total | | 66 | 80 | 48 | 60,5 | 170 | 28 | 4 | 456,5 |