Embedded Motion Control 2015 Group 1
[[Embedded Motion Control]] (EMC) Group 1 is a multidisciplinary and multilingual project group working on the 'A-MAZE-ING PICO'. This wiki page consists of the weekly progress presented in the [[#Journal | journal]], and the final approach followed to design the system, alongside an explanation of the implementation.
The report focuses on the individual components of the software architecture; this includes a description of the basic ideas, mathematical analyses and possible software implementations.
= Introduction =
Technology has grown tremendously over the years, with significant developments in the fields of Robotics, Mechatronics and Electronics all over the world. Initially, robots were wire driven and had few autonomous features. Developments in high speed microprocessors and memory chips have led to very robust robot designs, and the era of autonomous robots started with the development of sensors. Sensors using lasers, digital signals, etc. are widely used nowadays, and autonomous driving has gained the attention of the automotive and robotics fields. This project focuses on programming a robot called PICO. PICO is a programmable autonomous robot that can also be tuned and programmed for use in the health care industry; the concepts are the same for autonomous driving and autonomous robot maneuvering. The main idea of this project is to develop concepts and designs to program PICO and make it drive through an unknown maze. The robot is programmed in C++; the program is executed from the memory of the robot, so that the robot can follow the programmed algorithms and try to solve the maze challenge. The challenge is divided into two parts: first the corridor challenge and then the final maze challenge. In these events PICO has to move and reach the destination autonomously.
= Journal =
The process of developing a software architecture and its implementation requires great coordination and collaboration. This journal is updated to present and track the progress of the group, and includes the weekly progress, encountered problems and divided tasks.
=== Week 1 ===
In the first week the overall process was discussed. Information from old groups was gathered and shared to clarify certain aspects and to get everyone in the group on the same level. The [[#Requirements | requirements]], which were a self study assignment, were set up using the FURPS (Functionality, Usability, Reliability, Performance, and Supportability) model in the meeting for the initial design report. Additionally, the installation of Ubuntu and the relevant software were given as self study assignments, next to a division of tasks for the initial design report.
=== Week 2 ===
In the second week the roles of System Architect, Team Leader and Secretary were defined. The team was split into pairs in order to approach the code development and the initial design document. Furthermore, a deeper system description was defined and the software [[#Functions | functions]] were made clearer using communication and context tasks.
Later in the same week the separately created [[#Design Architecture | system architectures]] (made by the earlier defined pairs) were validated by discussing them in the meetings. Ideas on how to apply the 5C's to high- and low-level functionality were presented. Advantages and disadvantages of the use of potential fields were discussed.
In the first meeting with the tutor, tips were given with regard to the approach on the use of potential fields. Additionally, the tutor oriented the team by sharing tips and lessons learnt from teams that completed last year's EMC course.
For the self study assignments another division was created to separately work on the report, the presentation and the basics of the code, including the potential field and low level functions. Additionally, the installation of Ubuntu, tools, software and simulators was given as a mandatory assignment for everyone.
=== Week 3 ===
After the long weekend, lasting through two-fifths of week 3, the presentation was given on Wednesday, after which a meeting took place. A walkthrough of Git was done collaboratively in the group. It was decided to use only the development branch for the development of new algorithms; the master branch is used only for final versions, supervised by the software architect. From that meeting until Friday's meeting, code was developed for the corridor challenge. Coordination of the tasks was maintained by the software architect.
=== Week 4 ===
Over the weekend from week 3 to 4, the majority of the coding related to the potential field implementation was done. On Monday morning the first test run on PICO was done. The system did not perform as expected: it was not able to turn when it arrived at the junction. Afterwards, possible solutions for the cornering were discussed, including a proposal for a hybrid solution between Potential Field and States-Decision. The State Machine consists of a Corridor Mode and an Intersection Mode.
For the code overview and explanation, the [[#Potential Field | potential field]] and [[#Lower Level Skills | lower level skills]] are discussed separately. The results of the [[#Corridor Challenge | corridor challenge]] are depicted later.
After the corridor challenge, the group met to discuss the overall progress and the corresponding to do list, and to do a quick intermediate evaluation of the group collaboration and the individual contributions thus far.
The following topics were discussed and divided as self study assignments:
* Decision Making - Refine behavior via updating parameters.
* Improve Driving - Fine tuning the parameters for corridors and cornering.
* Clean up code - Remove hard coded sections, save in a new development branch.
* Update Wiki:
** Explanation of Safety Measurements.
** Elaborate on System Architecture.
** Explanation of Potential Field.
** Journal.
** Corridor Challenge results & evaluation.
=== Week 5 ===
Now that the corridor challenge has been successfully completed, we need to think about more advanced decision making, maze mapping, and a quicker turning skill. In week 5, not much coding was done, but several design ideas were discussed in depth, see [[EMC-group1 -Week 5]].
The following components were thought through and coded in pseudocode during week 5:
* Targeting correction: when a target is set in a new corridor, PICO drives towards it. However, driving changes the target's position relative to PICO. A skill was created to cope with this: instead of an absolute position, a relative position is used, with odometry data to correct for PICO's own movement (see the sketch after this list).
* Node saving: to map the maze, an array is used to store important points of interest (POIs), containing the name, location, type of POI and linked POIs.
* Visualization of targets: to debug the program, a visualization tool was created. All the information around PICO can be plotted separately while running the simulation.
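To make the targeting correction concrete, the following is a minimal C++ sketch of how a target stored relative to PICO can be updated with odometry data. All names (''Pose'', ''correctTarget'') are illustrative assumptions, not the group's actual code.
<pre>
#include <cmath>

struct Pose { double x, y, a; };  // odometry pose: position and heading

// Update a target (tx, ty), expressed in PICO's frame, after the robot
// moved from pose 'prev' to pose 'now' (both in the odometry frame).
void correctTarget(double& tx, double& ty, const Pose& prev, const Pose& now)
{
    // Displacement expressed in the previous robot frame.
    double dxw = now.x - prev.x, dyw = now.y - prev.y;
    double c = std::cos(prev.a), s = std::sin(prev.a);
    double dx =  c * dxw + s * dyw;
    double dy = -s * dxw + c * dyw;
    double da = now.a - prev.a;   // heading change
    // Shift the target by the displacement, then rotate it into the new frame.
    double px = tx - dx, py = ty - dy;
    tx =  std::cos(da) * px + std::sin(da) * py;
    ty = -std::sin(da) * px + std::cos(da) * py;
}
</pre>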
The final design used can be found in [[#Maze Challenge | the maze challenge]].
=== Week 6 ===
On Tuesday in the meeting, the team redefined the planning for the next couple of weeks.
Next week, another presentation has to be given containing the design and the new flowchart of the code. The details of the presentation shall be discussed one day before.
Also in week 7, a working prototype with the following specifications is required:
* Basic maze solving algorithm.
* Targeting functionality.
* Basic decision making implementation.
* Robust driving through junctions.
The week after, the team wants a working robot requiring only fine calibration to finish the maze. The algorithms for solving the maze can then be developed further, including how to cope with doors, open spaces, loops, and dead ends.
In the last week, week 9, the maze competition is held. The program has to be finished by then; the days before are used for troubleshooting.
In the meeting the approaches were discussed. We decided to split the work into two parts and assign the group members as equally as possible. The first part is the mid-level program and consists of the following parts:
* World model.
* Potential field.
* Targeting.
* Implementation of the above.
The second part is the Method part, focused on research and more design approaches:
* Maze solving, door handling, and overall logic.
* Task monitor, controller, and feedback loop.
* Decision method and implementation.
* Additional parts of the code used in the corridor challenge.
This division makes sure people work together and that the code, when written, is subjected to higher scrutiny.
=== Week 7 ===
In week 7, only a short meeting was held after the presentations. The task division was clear and only an update on responsibilities was performed. The next day, testing on PICO was done. However, it was concluded while testing that the software was still not achieving its goals. The planning was revised: a working prototype could not be ready this week.
=== Week 8 ===
In the last week, the prototype was not yet finished. Although extensive research into approaches for the world model and decision making had been done, the final decision on which method to use had not been made. The goal of the last meeting before the maze challenge was therefore to decide on which methods to use. All extensions were discarded and the development effort focused solely on the things needed to drive through and solve the maze. At this point, the robot was only able to cross doors in dead ends and create a basic node system. The filling of the nodes, and thus the creation of a map, was not yet finished. Ideas on [[#Decision State Flow | decision making]] were finalized in the meeting.
Over the weekend, up to the days before the maze challenge, the following tasks were created:
* Recognize junction type.
* Store junction node.
* Make decision.
* Set target.
* Store decision.
This was chronologically the order of implementation, but also the priority within the production of [[#Code | the code]]. In the weekend a lot of 'code sessions' were held, and in the last four days before the maze challenge all the remaining parts (the items above) were programmed.
A second test was done with PICO to find bugs and defects in the code. It was found that PICO moved well, but it was not able to handle loops: it considered the loops as turns that occur continuously. The reason for this was that the mapping feature had not been added to the code.
= Design Architecture =
In this section, the initial idea of an embedded software design is discussed. The main goal of the design is to make the PICO robot able to solve a maze. Throughout this section, the requirements, functions, component specifications and the interfaces of the system are discussed.
=== Requirements ===
The requirements of the system were divided according to the FURPS model, where the abbreviation stands for Functionality, Usability, Reliability, Performance, and Supportability. Functionality in this case denotes the features and capabilities of the system. Usability includes the human factors, consistency, and aesthetics. Reliability stands for the failure frequency, accuracy and predictability. Performance means the efficiency, speed and response, i.e. measures of system performance. Finally, supportability incorporates testability, maintainability and configurability.
The main requirement of the system is to navigate the maze. The maze must be completed according to the challenge specification, which forms the main functional aspect of the requirements.
Furthermore, the system design must be robust. The layout of the system must cover all functions accounted for in the system architecture, which enables the usability of the design. To achieve this, possible fault modes can be taken into account.
Also, the provided system solution must be robust enough to cope with environmental uncertainties.
Next, the doors in the maze should be handled correctly. The behavior of a door should be analysed to help differentiate a door from a wall: the robot should first be able to recognize a door and then cross it.
Furthermore, the generated design should be tested for its reliability, and the major functional requirement of the robot is the ability to solve the maze on its own.
Next, the given hardware (PICO robot) must be used for the maze challenge without any modification or additional components mounted on it.
Next, the performance requirement of quick decision making is essential for the PICO robot to complete the challenge within 7 minutes.
And finally, the need for smart decision making is crucial due to the fact that the environment changes.
=== Functions ===
* Actuate the wheels according to the input.
* System initialization for the working of the sensors and actuators.
=== Specifications ===
[[File:Design architecture group1.jpg|right|top|500px]]
The specifications of the system are formed by the available hardware and software, and by some of the requirements for the challenges.
* Limited time to complete the maze challenge (7 min).
* Limited access to the PICO robot.
* Robot hardware is not to be modified.
* Hardware access is facilitated via ROS.
* Translational/rotational speed limit of the robot actuators.
* Use of C++ and SVN.
* Available sensors: odometer and laser.
* Use of controller feedback.
* Unknown location of entrance and exit on the boundary of the maze.
=== Interfaces ===
Within the system, two types of interfaces are distinguished. First, there is the communication between the different components of the system, shown in the figure above as arrows. The second type of interface is the interface through which the robot interacts with the environment. This includes a user interface to program the robot, and sensors to sense the environment. There is no user interface to communicate with the robot while it is solving the maze, for it is autonomous. The sensors used by the robot are discussed in the specifications.
=== Implementation ===
So, how was the software implemented? This is indicated in the figure on the right. As can be seen, there are different contexts with different tasks. The world model consists of a potential field, which is used to build up the mapping and scanning. This context communicates with the task context and, of course, with the robot context, which supplies the laser data. The potential field and the world map are passed on to the task context, which uses this information to make decisions and configure skills. These skills, for example, contain lower level driving skills such as turning. Of course, these skills communicate with the robot as well, for example by sending a driving speed. Finally, the abstraction layer converts low level skills into real driving. Luckily, this particular part of the system was already provided by the course's lecturing team.
= Composition Pattern =
Besides describing the behavior of the model in a task-skill-motion model, an overview of the model structure can be made using composition patterns. A composition pattern describes the structure of the robot model focusing on a certain role. The composition pattern can be decoupled using the 5C's to develop functionality. The 5C's are the Computation, Communication, Coordination, Configuration and Composition of a certain part of the model. As an example of decoupling part of the composition pattern to develop functionality using the 5C's, the decoupling of the [[#Potential Field | potential field]] functionality will be explained.
[[File: COMPOSITIONHIERCHEY.jpg|center|top|500px]]
=== Trigger ===
To start the potential field, it should be triggered. The first time this is done by constructing the world model, later on by updating it.
''when triggered % in world model''
''do {''
=== Configuration ===
The parameters of the potential field, such as the radius of the scanning circle, are set.
''rangeCircle = 1.2'' // Radius of the circle used for the potential field
=== Communication ===
Next, the potential field skill receives the laser data from PICO by updating the class containing the laser data.
''emc::LaserData * scan = pico->getScan(); // Get laser data''
=== Coordination ===
Determine which of the gathered laser data is within the configured range and make a selection. This is done by iterating through the scan data:
''if (*it > scan->range_min && *it < scan->range_max && *it < rangeCircle)
{''
=== Computation ===
For every data point within the range, the distance to PICO is determined and a force vector is calculated, as will be discussed in the potential field method.
''r = it->getRadius()
F = 1/pow(rMult*(r-0.15),5)
wallForce.addPoint(-F*cos(it->getAngle()),-F*sin(it->getAngle()));''
=== Communication ===
The potential field is uploaded to the world model.
''pot->update(); // Update potential field''
=== Coordination ===
Based on the potential field in the world model, gaps are found. These gaps are used to make decisions, and the resulting decision is set as a target.
=== Computation ===
From the set target a target vector is calculated, similarly to the wall forces but with an attracting instead of a repulsive force. A new resulting force vector is then calculated from the wall vectors and the target vector.
''pot->setTarget(atGap); // Calculate a target vector at the chosen gap''
=== Communication ===
The calculated force vector is saved publicly so it can be used for the driving of PICO.
''pot->update();''
=== Logging ===
The computed forces are shown in a visualization, so they can be evaluated and used for debugging. An example of the visualization is shown in the picture below.
''visualise::pot''
[[File: Nflqv.gif |center|top|350px]]
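The fragments above can be condensed into one update cycle. The following C++ sketch mirrors the calls and constants from the fragments; the ''emc::LaserData'' stand-in, the ''PotentialField'' wrapper and the value of ''rMult'' are assumptions for illustration, not the actual framework or group code.
<pre>
#include <cmath>
#include <vector>

namespace emc { struct LaserData {            // minimal stand-in for the framework type
    double range_min = 0.1, range_max = 10.0, angle_min = -2.0, angle_increment = 0.004;
    std::vector<float> ranges;
}; }

struct PotentialField {
    double fx = 0.0, fy = 0.0;                // summed wall force
    void addWallForce(double x, double y) { fx += x; fy += y; }
    void update() { /* publish the field to the world model */ }
};

void updatePotentialField(emc::LaserData* scan, PotentialField* pot)
{
    const double rangeCircle = 1.2;           // Configuration: scanning circle radius
    const double rMult = 6.0;                 // Computation: tuning constant (assumed value)
    double angle = scan->angle_min;
    for (float r : scan->ranges) {            // Coordination: select points within range
        if (r > scan->range_min && r < scan->range_max && r < rangeCircle) {
            double F = 1.0 / std::pow(rMult * (r - 0.15), 5.0);   // Computation: wall force
            pot->addWallForce(-F * std::cos(angle), -F * std::sin(angle));
        }
        angle += scan->angle_increment;
    }
    pot->update();                            // Communication: publish to the world model
}
</pre>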
= Methods =
This section contains various algorithms and skills that, in the opinion of the team, require a more detailed explanation. First, the potential field, which is used for driving and collision prevention. Then, an explanation of how this algorithm is used in the drive skill, which is fairly simple. Finally, the safety layer and the turn skill are explained.
== Potential Field ==
In order to solve the maze successfully, the robot needs to be able to drive autonomously. This involves collision prevention and the ability to reach a certain target. While the target itself is provided by a higher level of control, a skill to actually reach it is needed. PICO's driving skill is able to do both by using a so-called resulting force vector. In principle, this system is simple: objects and locations which are to be avoided exert a repelling force, while desirable ones are given an attracting force. The resulting force then gives the most desirable direction to drive in, which only needs to be normalized and sent to the lowest level of drive control. For this to work, the function to calculate the force should mathematically be of a different order for undesirable and desirable targets, otherwise the robot could find itself not knowing what to do.
In practice, this is implemented as follows. For all wall points provided by the laser range finder, the following 'force' is calculated:
<math>
F = \frac{1}{\left( r_m \, (r - 0.05) \right)^{5}}
</math>
Of course, this gives numerical problems when the distance to the wall <math>r=0.05</math>; then, a very high value is assigned. <math>r_m</math> is a scaling constant, which can be tuned to reach the desirable behavior later on. The contribution of each wall point in both plane directions (x,y) is summed to obtain one force vector. Then, the target point is given an attracting force of lower order, for instance of the form
<math>
F_t = c_t \, r_t
</math>
with <math>r_t</math> the Euclidean distance to the target and <math>c_t</math> a tunable gain. The resulting force will be pointing in the direction which is most desirable to drive in. Note how this method is self regulatory, i.e. when the robot for some reason does not actually drive in the direction of the resulting force, the new resulting force of the subsequent time step will automatically correct for this. There is no need to store forces of all wall points, simply summing them vector-wise will suffice. Also, there is no need for additional collision avoidance.
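A minimal C++ sketch of this computation, combining the repulsive wall term with an attracting target term and normalizing the result, is given below. The ''Vec2'' type, the clipping value and the linear target force are assumptions for illustration, not the group's actual code.
<pre>
#include <cmath>
#include <vector>

struct Vec2 { double x = 0.0, y = 0.0; };

// wallPoints: wall locations in the robot frame; target: target location;
// rm: wall scaling constant; ct: target gain.
Vec2 resultingForce(const std::vector<Vec2>& wallPoints, Vec2 target,
                    double rm, double ct)
{
    Vec2 f;
    for (const Vec2& p : wallPoints) {
        double r = std::hypot(p.x, p.y);              // distance to the wall point
        if (r < 1e-6) continue;
        double d = rm * (r - 0.05);
        double F = (d > 1e-6) ? 1.0 / std::pow(d, 5.0) : 1e9;  // very high value near the wall
        f.x -= F * p.x / r;                           // push away from the wall
        f.y -= F * p.y / r;
    }
    f.x += ct * target.x;                             // attracting term, magnitude ct * r_t
    f.y += ct * target.y;
    double n = std::hypot(f.x, f.y);                  // normalize before sending the
    if (n > 1e-9) { f.x /= n; f.y /= n; }             // direction to the drive control
    return f;
}
</pre>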
The GIF shown in the previous section shows the wall force vector, the target (white dot), and the target vector in action.
== Lower Level Skills ==
In this section, the lower level skills are elaborated on, and the implementation and design decisions are explained.
=== Drive Skill ===
The Driving Skills are based on the information provided by the Potential Field and the higher level decision making. Once a target is set, the sum of the attracting force of the target and the repelling force of the walls results in a net driving direction. This drive direction is scaled differently in the x and y direction with respect to PICO:
<math>v_x = \frac{1}{2}\cos{(\theta)} v_{max}</math>,
for the speed in x direction, where <math>\theta</math> denotes the angle of the drive vector, and
<math>v_y = \frac{1}{5}\sin{(\theta)} v_{max}</math>
for the speed in the y direction.
The rotational speed is simply 1.5 times the angle of the net driving direction vector. However, when the drive angle becomes larger than 40 degrees, PICO only rotates towards it to prevent collisions. Then, ''pico->setSpeed'' is updated. In PICO's low level safety layer, a safety routine prevents the robot from moving backwards; when the drive vector points that way, the robot is only allowed to rotate and move in the y direction.
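A sketch of this mapping in C++ is shown below. The ''Robot'' type stands in for the pico object; only the scaling factors and the 40-degree threshold mirror the text.
<pre>
#include <cmath>

struct Robot { void setSpeed(double vx, double vy, double va) { /* send to base */ } };

void drive(Robot& pico, double theta, double vMax)   // theta: net drive direction angle
{
    const double maxAngle = 0.698;           // 40 degrees in radians
    double va = 1.5 * theta;                 // rotational speed: 1.5 x drive angle
    if (std::fabs(theta) > maxAngle) {
        pico.setSpeed(0.0, 0.0, va);         // only rotate towards the drive direction
    } else {
        double vx = 0.5 * std::cos(theta) * vMax;    // v_x = 1/2 cos(theta) v_max
        double vy = 0.2 * std::sin(theta) * vMax;    // v_y = 1/5 sin(theta) v_max
        pico.setSpeed(vx, vy, va);
    }
}
</pre>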
=== Safety Layer ===
The first safety measure is the potential field algorithm itself: it keeps the robot from getting too close to the walls. However, for completeness and to have safety redundancy in the code, another safety routine was implemented. The basic idea is to prevent the robot from hitting the walls (head-on or otherwise), so the distance to the walls is continuously compared to a safety margin. If the distance is smaller, the robot stops and enters a routine that maneuvers it away from the wall and places it in a location from which it can keep driving safely.
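The distance check itself can be as simple as the following sketch (the names and the lower bound used to skip invalid readings are illustrative):
<pre>
#include <algorithm>
#include <vector>

// True when any valid laser range is closer than the safety margin.
bool tooClose(const std::vector<double>& ranges, double margin)
{
    return std::any_of(ranges.begin(), ranges.end(),
                       [margin](double r) { return r > 0.01 && r < margin; });
}
// When tooClose(...) returns true, driving is overridden by the routine
// that maneuvers PICO away from the wall.
</pre>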
A second safety measure is the connection state skill. A true state of this skill is required to run the main loop, activating all the other skills and tasks of PICO. This is a very basic function; nevertheless, it is required for robust operation.
=== Sensor Data Acquisition ===
One of the low level skills is the interface with the hardware. PICO is equipped with an odometry sensor measuring the relative base movement and with a laser range finder. Both of these components can be individually activated as a low level skill. Other functions, skills, and tasks can use this globally available data.
=== Turn Skill ===
In order to have a modular approach to handling the side turns, the operations were separated using a simple state machine. When advancing through the corridor, the robot checks for large magnitude differences between the laser's measurement vectors. These differences can represent an "opening" in the wall which, if large enough, indicates a new corridor. If the robot detects that the corridor has two possible paths, the state is changed to "Junction" mode, and the location of the junction is stored immediately. This information is not used for the turning itself, but could prove useful when designing the higher decision skills to solve the maze. Once the data is stored, the current state becomes "Drive Junction", which takes the robot to an estimate of the junction's center. At this point, the robot turns to whichever side the new corridor is on. These states are represented graphically in the following figures.
[[File:TurnSkill_1.png|center|350px]]
Once the robot has taken a turn, it must enter the new corridor and detect the walls so that it can continue navigating in the new corridor. These operations are carried out by a "Drive Out" state. If the laser signal indicates that the robot is again in a corridor, it moves back to the "Navigate Corridor" state and continues. When the robot detects that there are no more walls in front of it or on the sides, and there are no large differences between the laser vectors (implying there are no new corridors), its interpretation is that the corridor challenge has been completed, and a "Victory" state is reached that stops the robot's motion. The next figure shows these states.
[[File:TurnSkill_2.png|center|350px]]
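The states above can be captured in a small enumeration with a transition function. The following C++ sketch is illustrative, not the actual implementation; the three flags summarize the laser-based checks described in the text.
<pre>
enum class TurnState { NavigateCorridor, Junction, DriveJunction, DriveOut, Victory };

// One update of the turn-skill state machine.
TurnState step(TurnState s, bool sideOpening, bool inCorridorAgain, bool noWallsLeft)
{
    switch (s) {
        case TurnState::NavigateCorridor:
            if (noWallsLeft) return TurnState::Victory;      // challenge completed
            if (sideOpening) return TurnState::Junction;     // large jump in laser ranges
            return s;
        case TurnState::Junction:                            // store the junction location,
            return TurnState::DriveJunction;                 // then head for its center
        case TurnState::DriveJunction:
            return TurnState::DriveOut;                      // turn into the new corridor
        case TurnState::DriveOut:
            return inCorridorAgain ? TurnState::NavigateCorridor : s;
        case TurnState::Victory:
            return s;                                        // stop the robot
    }
    return s;
}
</pre>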
== Decision State Flow ==
=== Recognizing Maze Features ===
The detection of dead ends is done by comparing the number of targets detected in the potential field part of the code. As was already explained, the potential field environment automatically detects gap vectors and stores them. In principle, when no gaps are detected, the robot has reached a dead end or an open space. To make the distinction more robust, an extra condition was added, based on the laser measurement in front of the robot: if it detects an object nearby, the robot is in a dead end, otherwise it is in an open space. In an open space the robot may also detect no gaps, since it does not sense any walls; however, it then still sets a target away from the robot.
Open spaces are detected using the range circle. This circle is made larger or smaller depending on the number of gap vectors detected during the last 5 iterations of the main loop. The minimum number of gaps detected during these 5 iterations (called minGaps), i.e. the robustly detected gaps, is determined and stored. This information is used to find out what kind of environment the robot is in and, for example, to switch the state from corridor to junction.
The differentiation of the junction types (left junction, right junction, T-junction, and cross junction) is also done with the gap vectors. However, since we need to know exactly what kind of junction it is, we cannot use the min/max gap vectors. Therefore, the size of the last gap vector list is used, which contains only information from the last measurement given by the potential field. The number of gap vectors by itself only allows us to differentiate the 3-way junctions from the 4-way (cross) junction.
Next, to differentiate the directions within a 3-way junction, the directions of the individual gap vectors are used. Since a 3-way junction only yields 2 gap vectors (the gap in the entrance direction is not recognized), we only have to calculate the directions of these 2 gap vectors with respect to PICO. The gap vector with the smallest angle with respect to PICO is the most straight one. Since the gap vectors are calculated counterclockwise, we know where the corridors lie with respect to PICO. This information is also used to drive to the middle, as explained in the previous section on the [[#Turn Skill | turn skill]].
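A simplified sketch of this classification, using only the number of gap vectors and their angles with respect to PICO, could look as follows (''GapVector'' and the 30-degree threshold are assumptions for illustration):
<pre>
#include <cmath>
#include <string>
#include <vector>

struct GapVector { double angle; };   // gap direction w.r.t. PICO, rad, CCW positive

std::string classifyJunction(const std::vector<GapVector>& gaps)
{
    const double straight = 0.52;     // gaps within ~30 degrees count as straight on
    if (gaps.size() >= 3) return "cross-junction";
    if (gaps.size() == 2) {
        const GapVector& a = gaps[0];
        const GapVector& b = gaps[1];
        // The gap with the smallest angle w.r.t. PICO is the most straight one.
        bool aIsFwd = std::fabs(a.angle) < std::fabs(b.angle);
        const GapVector& fwd  = aIsFwd ? a : b;
        const GapVector& side = aIsFwd ? b : a;
        if (std::fabs(fwd.angle) > straight)      // neither gap is straight on:
            return "T-junction";                  // corridors to the left and right
        return (side.angle > 0.0) ? "left-junction" : "right-junction";
    }
    return "corridor, dead end or open space";
}
</pre>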
For a clear overview of how the different states are linked, see the state machine figure in the following section.
=== State Machine ===
The high level navigation algorithm was constructed as a state machine. One of the main assumptions is that the robot always starts in a corridor; as such, the initial state of the diagram is "Junction". For a quick overview the following figure is presented:
[[File:Flow_Diagram1.png|middle|center|850px]]
Here it is possible to see that the information required for most of the state transitions is the number of gaps, and in a few cases a new check on the laser data.
The group of states comprising "Junction", "Drive Junction", "Store Junction" and "Drive out Junction" is used for the junction navigation. In terms of physical meaning inside the maze, these states are all located at the different kinds of crossings.
= Challenges =
== Corridor Challenge ==
This section covers the preparation for the [[Embedded Motion Control #Corridor Competition]]; the results are shown in a short movie.
=== Available Approaches ===
Based on the results from the test run with the real robot, the following solutions were proposed:
'''Robust Turning''': Safe or Fast modes.
'''Simple Turns''': Hard-coded turns.
A relevant concept in the development of the solutions was to be able to identify that the robot was turning, and not merely turn as an extension of "regular corridor driving". As a result, a simple state machine was implemented. It modifies the decision making of the robot as long as the robot recognizes that it is in a junction.
{|
|}
=== Results ===
The driving through the straight part of the corridor was smooth and no abrupt reactions were evident. Just as in the simulation shown previously, the robot detected the side exit appropriately. The following video shows the result of the corridor competition.
{|
|}
The separation of the turning action into states is meant to facilitate the implementation of higher level navigation afterwards. After the challenge, some new scenarios were tested to gain some insight into the robustness of the code: first, a small separation between the wall blocks; second, a narrower corridor; and finally, removing the side exit walls to check if the robot could still recognize the exit, turn and stop. From these new scenarios, the results were as follows: 1) the code is able to navigate well even if some discontinuities exist in the walls; 2) the robot can drive through the corridor regardless of its width, however, if the width is smaller than the safety margin established in the code, the robot will not move; and 3) even if the side corridor has no walls, the robot is able to recognize it, turn and stop. These tests show that the robust approach can deal with an uncertain environment up to this level.
=== Evaluation of the Challenge ===
After having completed the challenge and running extra simulations, the results were compared with the initial requirements.
* '''Navigate the maze''': Achieved. PICO was able to navigate through the corridor and take the first available exit in the corridor.
* '''Subsystem based design''': Partially achieved. The implementation of the system included a rudimentary but explicit separation of PICO's functionality.
* '''Adaptable to environment''': Partially achieved. The decision making at this stage of the project was not robust enough to handle a dynamic environment. However, it was still able to solve the challenge successfully.
* '''Door Handling''': Not applicable.
* '''Testing and Simulation''': Achieved. The robot's functionality was tested successfully in both simulation and the laboratory.
* '''Autonomous Operation''': Achieved.
* '''Use given hardware''': Achieved.
* '''Fast decision making''': Achieved. It is worth noting that for this challenge two approaches were available, Safe and Fast modes. In both of them the decision on taking a turn was rather fast because it did not demand a high level of complexity.
* '''Smart Decision Making''': Not applicable.
== Maze Challenge ==
After a discussion within the team on the approach for the maze challenge, a high-level logic was defined. The basic idea is presented in the following Figure:
These three steps have several implications at a lower level. For example, one of the main difficulties is how the robot will gain knowledge about the maze: being able to know that it has been in a given section of the maze before, and also knowing whether there is a possibility that has not been tried yet at a given junction.
Before the maze competition the system was tested in the simulator and in the laboratory. The simulator was mainly used to check the high level logic, such as recognizing intersections, turning, recognizing dead ends and following the door routine. The following video shows a simulation with the robot navigating a cross junction map with a door in one of the corridors.
{|
|[[File:EMC01_CrossJunctionSim_Name.png|center|250px|link=https://youtu.be/tqv0Sq8mdXE]]
|}
=== Results ===
In this section the maze challenge attempt videos are available; they have been analysed to infer the causes of the robot's behavior.
Maze Challenge First Attempt video:
{|
|[[File:EMC01_MazeChallenge_Attempt01.PNG|center|250px|link=https://youtu.be/Xkqel6uaBBE]]
|}
It was noted that the robot did not go into the dead ends; therefore, it could not be evaluated whether there was a door or not.
The navigation did not appear to correct its path abruptly. Regarding the decision making, it can also be seen in the video that the robot selects both the left and right corridor options. This, however, was not enough to enter the new corridor completely.
Maze Challenge Second Attempt video:
{|
|[[File:EMC01_MazeChallenge_Attempt02.PNG|center|250px|link=https://youtu.be/-oSrP8NrOiQ]]
|}
It was noted that the robot again did not choose to enter a new corridor and thus was not able to find the door.
The system was not robust enough to avoid hitting the walls. One possibility is that PICO was in a state where the potential field did not have a high priority. The other possibility is that the magnitude of the resultant repulsive force vector was not large enough to repel PICO from the wall.
=== Post-Challenge Simulation ===
Unfortunately the robot was not able to finish the Maze Challenge. Therefore, the team decided to reproduce the design of the maze in the simulator to test the system used on the day of the competition. The resulting maze can be seen in this illustration:
[[File:New_Maze_Challenge.png|middle|center|500px]]
The robot was then able to navigate through the first section of the maze, find and cross the door, cross the open space after the door, and approach the zone where the exit was located. Before actually going to the exit, a problem with the open space identification emerged; the system did not know what to do and started switching between states very fast. This can all be seen in the next video:
{|
|[[File:EMC01_MazeSim_Name.png|center|250px|link=https://youtu.be/GkLQ3vlY4Ek]]
|}
=== Evaluation ===
* '''Navigate the maze''': Partially achieved. PICO was able to navigate the corridors; however, it could not recognize the dead ends. It is worth noting that it was able to prevent going in loops.
* '''Sub-system based design''': Achieved. The implementation of the system's functionality was explicitly separated in the code, so in principle the parts can be modified independently and still interact with each other.
* '''Adaptable to environment''': Partially achieved. The system was able to differentiate between several states (e.g. junctions, dead ends, open spaces) and to make decisions based on that. However, the final challenge showed that the way in which states were defined was not as robust as expected.
* '''Door handling''': Not achieved. During the challenge the robot could not find the door, thus it is not possible to know if the door handling routine actually worked.
* '''Testing and simulation''': Achieved. The system was intensively tested using simulations for the high level decision skills. Because of implementation problems, the laboratory time available was not fully used. Nevertheless, before the challenge a real world test was performed in a simple maze and it performed satisfactorily.
* '''Autonomous Operation''': Achieved.
* '''Use given hardware''': Achieved.
* '''Fast decision making''': Achieved. All decisions were taken in real time.
* '''Smart decision making''': Partially achieved. Because of problems with the implementation of a smart global navigation algorithm, the final system used a random decision making routine. Even though this is not the best approach, the team agreed that given the project constraints it was the most feasible option.
=== Uniqueness of the Design ===
* The implementation of the potential field method to find the optimal path for the robot is one of the strongest parts of the design. With it, the robot can detect walls very easily and maneuver along the path provided; it is a very robust algorithm for obstacle identification. The potential field is built from the laser range finder (LRF) data and assigns low potentials to free paths and high potentials to walls, so the robot knows where the obstacles are.
* The state machine approach allowed PICO to identify the maze features and navigate accordingly. The potential field was particularly helpful in letting PICO make decisions based on the targets set in the maze, while the state machine approach let PICO identify the maze and take decisions based on this information.
* A simple decision making algorithm was developed where PICO selects an option at the junctions randomly. This increases the probability of turning in different directions, thus preventing loops. The team agreed that this was the best method to use given the difficulties found in the node approach.
= Code =
In this section we explain which parts of the code are of high quality and reflect the lessons learned. The code presented in the snippet link is the potential field. We chose this piece of code because it consists of very efficient computation algorithms, and it is sophisticated because it is so simple. Moreover, the journey to get there was pragmatic and efficient: in the beginning, the potential field really consisted of a grid of points, each having a potential, and the desired drive direction was calculated using a gradient approach. In the end, however, that method still only yields a vector containing the drive direction. The revised method does the same, but without the huge 3D data matrix that was the potential field. The last reason why we chose to upload this code is that it is very robust, and it is therefore also used in later developed functions.
As an additional remark, the potential field also contains a function called ''isGapGreat()'', which is found very amusing within our multilingual group.
The code for the potential field can be found in the following GitHub gist:
https://gist.github.com/anonymous/c3ecae8a053ac96616be
= Reflection and Outlook =
The code design in general provides a lot of flexibility to change and interchange skills, which proved valuable during programming and debugging. The decision to keep the design modular and clearly divided allowed for simultaneous programming of different parts of the code.
Some implementation choices proved to be good, such as the potential field, which provided a robust way of driving when well tuned. Others still need work, such as the node saving and mapping, which could be greatly improved using feature detection methods, such as corner detection and length scanning.
The code was kept relatively modular with respect to the RHAL. This means that the changes required to allow the use of a different robot are bound to specific parts of the code, while most methods would stay the same.
As a further outlook, multiple robots could even be run at the same time using separate potential fields but a common map. The lower level skills would need to be duplicated and tuned individually: each robot would have its own set of skills, completely independent of the others, including the driving, the potential field and the configuring skills. Skills that would have knowledge of multiple robots would include the map and the decision making, and a global planner could be added to coordinate robot behavior. In this way, minimal changes to the code would allow multiple robots to have separate robust driving, but a shared map and coordination. The maze could then be explored by multiple robots simultaneously.
Decisions that would need reevaluation if the code were developed further include the range circle that was used. Because of the range circle, a lot of information is thrown away, and skills need to access the raw data for more information. A more sophisticated filter, providing all the functionality needed, is recommended.
== Extra feature designs ==
Due to time shortage, two essential features that were thought of were not fully implemented. The higher level ideas are summarized below: first recognizing the maze features, and second building a world map.
=== Recognizing Maze Features ===
In order to be able to store maze features in some sort of map, they need to be identified using the laser data, preferably in the least number of steps, to increase the performance of the algorithm. To achieve this, a robust and quick way to distinguish between the maze features has to be conceived. For the corridor challenge, gap vectors were already built from the laser data, as explained before. Now, these features are extended with wall vectors.
The main principle is as follows:
From the laser data, the so-called corner points are deduced (indicated as black circles in the figure below). Note that every maze feature has a unique number of corner points: for instance, a simple corner has only one, and a full junction four. Then, by counting both the wall vectors and the corner points, every feature can be robustly identified. This is indicated in the figure below.
[[File:emc01_pico_corridor.png|center|200px]]
Before really implementing this solution, we noticed that the handling of open spaces by our default potential field method was already sufficient. Also, junctions are already detected by the gap vector method; then, only a door inside a corridor wall would be overlooked. Due to time shortage, we decided to drop the maze feature recognition, since it is a lot of implementation work for only a small gain in robustness. Detection of dead ends and open spaces was, however, elaborated and made more robust, as explained in the [[#Decision State Flow | decision state flow]] section.
=== Maze Mapping ===
The question then becomes: how should the world model be built? Starting from the premise of building a simplified model having only the information necessary to solve the maze, the proposed approach is to use nodes, which represent relevant features of the maze, e.g. start and end points, different kinds of junctions, corners and open areas. Relevant characteristics to be saved are the approximate global position of each node and how it is connected to the nodes around it. The following Figure shows a rough representation of a simple maze loop.
[[File:Maze_3_Nodes_EMC_01.png|center|400px]]
Among the characteristics stored for each node are:
* Movement direction referenced to initial robot orientation.
* Connected nodes.
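An illustrative C++ structure for such a node, following the characteristics listed above, could look like this (the field types and the ''NodeType'' values are assumptions, not the group's code):
<pre>
#include <string>
#include <vector>

enum class NodeType { Start, End, Corner, TJunction, CrossJunction, DeadEnd, OpenArea, Door };

struct MazeNode {
    std::string name;              // identifier of the point of interest
    double x, y;                   // approximate global position (odometry frame)
    NodeType type;                 // kind of maze feature
    double heading;                // movement direction w.r.t. initial robot orientation
    std::vector<int> connected;    // indices of linked nodes
};

std::vector<MazeNode> worldMap;    // the array of points of interest
</pre>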
Again, due to time shortage and an implementation that was not robust enough, we decided to drop the mapping and make random decisions instead. This proved to solve the problem of looping, and is actually very robust, since no mapping mistakes can be made. However, it is of course not the fastest way to solve the maze.
== Lessons Learnt ==
* Working with a version control system facilitates decentralized development.
* Merging and updating of “stable code” versions works better when carried out by a single person.
* How to implement a simple supervisory controller.
* Functions are developed better when written in pairs.
* Overall solutions for problems should be brainstormed.
* Structures from the composition patterns are easier to understand when explicitly separated in the code (different files).
* Testing on the real robot is paramount.
* Simplified solutions tend to work better (e.g. state machines for navigation).
* It’s helpful if the functionality has a visual representation (e.g. potential field vectors and targets).
= Appendix =
=== Group members ===
Group 1 consists of the following members:
{| class="wikitable"
Latest revision as of 22:20, 26 June 2015
Later in the same week, the separately created system architectures (made by the pairs defined earlier) were validated by discussing them in the meetings. Ideas on how to apply the 5C's to high- and low-level functionality were presented, and the advantages and disadvantages of using potential fields were discussed. In the first meeting with the tutor, tips were given regarding the potential field approach. Additionally, the tutor oriented the team with lessons learnt from teams that completed last year's EMC course.
For the self study assignments another division was created to separately work on the report, presentation and the basics of the code, including potential field and low level functions. Additionally, the installation of Ubuntu, tools, software and simulators was given as mandatory assignments for everyone.
Week 3
After the long weekend, which lasted through two-fifths of week 3, the presentation was given on Wednesday, after which a meeting took place. A walk-through of Git was done collaboratively in the group. It was decided to use only the development branch for developing new algorithms; the master branch is used only for final versions, supervised by the software architect. Between that meeting and Friday's meeting, code was developed for the corridor challenge, with the software architect coordinating the tasks.
Week 4
Over the weekend from week 3 to 4, the majority of the coding related to the potential field implementation was done. On Monday morning the first test run on PICO took place. The system did not perform as expected: it was not able to turn when it arrived at the junction. Afterwards, possible solutions for the cornering were discussed, including a proposal for a hybrid solution between the potential field and state-based decisions. The state machine consists of a Corridor mode and an Intersection mode.
For the code overview and explanation, the potential field and the lower level skills are discussed separately. The results of the #Corridor Challenge are depicted later. After the corridor challenge, the group met to discuss the overall progress and the corresponding to-do list, and to do a quick intermediate evaluation of the group collaboration and the individual contributions thus far.
The following topics were discussed and divided as self study assignments:
- Mapping strategy - To be defined during next meetings.
- Decision Making - Refine behavior via updating parameters.
- Improve Driving - Fine tuning the parameters for corridors and cornering.
- Clean up code - Remove hard-coded sections, save in a new development branch.
- Update Wiki:
- Explanation of Safety Measurements.
- Elaborate on System Architecture.
- Explanation of Potential Field.
- Journal
- Corridor Challenge results & evaluation
Week 5
With the corridor challenge successfully completed, the team needed to think about more advanced decision making, maze mapping, and a quicker turning skill. In week 5, not much coding was done, but several design ideas were discussed in depth. The following components were thought through and written in pseudocode during week 5:
- Targeting correction: when a target is set in a new corridor, PICO drives toward it. However, driving changes the target's position relative to the robot. A skill was created to cope with this: instead of an absolute position, a relative position is used, with odometry data correcting for PICO's own motion (a sketch is given after this list).
- Node saving: to map the maze, an array is used to store important points of interest, containing: name, location, type of POI, and linked POIs.
- Visualization of targets: to debug the program, a visualization tool was created. All the information around PICO can be plotted separately while running the simulation.
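A minimal sketch of the targeting correction (names and frame conventions are our own illustration, not the exact code): given a target stored in the odometry frame at the moment it was set, and the current odometry pose, the target is re-expressed relative to PICO:

#include <cmath>

struct Pose { double x, y, a; };   // odometry pose: position and heading [rad]

// Re-express a target, stored in the odometry frame when it was set,
// relative to PICO's current odometry pose.
Pose relativeTarget(const Pose& target, const Pose& current) {
    double dx = target.x - current.x;
    double dy = target.y - current.y;
    Pose rel;
    rel.x =  std::cos(current.a) * dx + std::sin(current.a) * dy;  // rotate the offset
    rel.y = -std::sin(current.a) * dx + std::cos(current.a) * dy;  // into the robot frame
    rel.a = std::atan2(rel.y, rel.x);                              // bearing to the target
    return rel;
}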
The final design used can be found in the maze challenge.
Week 6
On Tuesday, in the meeting, the team redefined the planning for the next couple of weeks. Next week, another presentation has to be given containing the design and the new flowchart of the code; its details will be discussed the day before. Also, in week 7, a working prototype with the following specifications is required:
- Basic maze solving algorithm.
- Targeting functionality.
- Basic decision making implementation.
- Robust driving through junctions.
The week after, the team wants a working robot requiring only fine calibration to finish the maze. The algorithms for solving the maze can then be developed further, including how to cope with doors, open spaces, loops, and dead ends. In the last week, week 9, the maze competition is held; the program has to be finished by then, and the days before it are used for troubleshooting.
In the meeting the approaches were discussed. We decided to split the work in two parts and to assign the group members as equally as possible. The first part is the mid-level program and consists of the following parts:
- World model.
- Potential field.
- Targeting.
- Implementation of the above
The second part is the Method part, focused on research and more design approaches:
- Maze solving, door handling, and overall logic.
- Task monitor, controller, and feedback loop.
- Decision method and implementation.
- Additional parts of the code used in the corridor challenge.
This division makes sure people work together and that the code, when written, is subjected to higher scrutiny.
Week 7
In week 7, only a short meeting was held after the presentations. The task division was clear and only an update on responsibilities was performed. The next day, testing on PICO was done; however, it was concluded during testing that the software was still not achieving its goals. The planning was revised: a working prototype could not be ready this week.
Week 8
In the last week, the prototype was not yet finished. Though extensive research into approaches for the world model and decision making was done, the final decision on which method to use had not been made. The goal of the last meeting before the maze challenge was to decide on which methods to use. All extensions were discarded and the development effort focused solely on the things needed to drive through and solve the maze. At this point, the robot was only able to cross doors in dead ends and create a basic node system; the filling of the nodes, and thus the creation of a map, was not yet finished. Ideas on decision making were finalized in the meeting. Over the weekend, up to the days before the maze challenge, the following tasks were defined:
- Recognize junction type.
- Store junction node.
- Make decision.
- Set target.
- Store decision.
This was chronologically the order of implementation, but also the priority within the production of the code. In the weekend a lot of 'code sessions' were held, and in the last four days before the maze challenge all the remaining parts (the items listed above) were programmed.
A second test was done with PICO to find bugs and defects in the code. It was found that PICO moved well, but it was not able to handle loops: it considered the loops as turns that occur continuously. The reason for this was that the mapping feature had not been added to the code.
Design Architecture
In this section, the initial idea of an embedded software design will be discussed. The main goal of the design is to make the PICO robot able to solve a maze. Throughout this report, the requirements, functions, components specifications and the interfaces of the system will be discussed.
Requirements
The requirements of the system were divided according to the FURPS model. The abbreviation of this model stands for Functionality, Usability, Reliability, Performance, and Supportability. Functionality in this case denotes the features and capabilities of the system. Usability includes the human factors, consistency, and aesthetics. Reliability stands for the failure frequency, accuracy and predictability. Performance here means the efficiency, speed, response, i.e. measures of system performance. Finally, the supportability incorporates testability, maintainability and configurability.
The main requirement of the system is to navigate the maze. It must be completed according to the challenge specification, which forms the main functional aspect of the requirements.
Furthermore, the system design must be robust. The layout of the system must constitute all possible functions to account for the system architecture, which enables the usability of that design. To achieve this, possible fault modes can be taken into account.
Also, the system design should pose a robust solution. This means that the provided system solution must be robust enough to cope with environmental uncertainties.
Next, the doors in the maze should be handled correctly. The behavior of the door should be analysed to help differentiate the door from the wall. Firstly, the robot should be able to recognize a door from a wall and, secondly, cross the door.
Next, the generated design should be tested for its reliability. Furthermore, a major functional requirement is the robot's ability to solve the maze on its own.
Next, the given hardware (PICO robot) must be used for the maze challenge without any modification and additional components mounted on it.
Next, the performance requirement of making quick decisions is essential for the PICO robot to complete the challenge within 7 minutes.
Finally, smart decision making is a crucial necessity, because the environment changes.
In the table below, the above requirements are summarized according to the FURPS structure.
Requirement | F | U | R | P | S |
---|---|---|---|---|---|
Navigate the maze | x | | | | |
Subsystem based design | | x | | | |
Adaptable to environment | | | x | | |
Door handling | x | | | | |
Testing and simulation | | | | | x |
Autonomous operation | x | | | | |
Use given hardware | | | | | x |
Fast decision making | | | | x | |
Smart decision making | x | | | | |
Functions
The functions of the system are classified according to three levels: high-level, mid-level, and low-level functionalities.
The high-level functionality incorporates the following three aspects:
- Obtain relative position with respect to the global frame of reference in the maze.
- Identification of the open or closed door.
- Maze solving algorithm or strategy.
The mid-level functionality consists of:
- Straight corridor navigation.
- Collision avoidance algorithm.
- Intersection mapping.
The low-level functionality contains the following (a minimal interface sketch is given after this list):
- The ability to read the laser and camera input.
- Actuate the wheels according to the input.
- System initialization for the working of the sensors and actuators.
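As an illustration, a minimal low-level loop could look as follows. This is only a sketch assuming the course's emc framework API (emc::IO with readLaserData, readOdometryData and sendBaseReference); the exact wrapper used in our code differs, since the data is made globally available through skills.

#include <emc/io.h>
#include <emc/rate.h>

int main() {
    emc::IO io;                  // hardware abstraction: sensors and actuators
    emc::Rate r(20);             // run the sense-act loop at 20 Hz

    while (io.ok()) {            // true while the connection to PICO is alive
        emc::LaserData scan;
        emc::OdometryData odom;
        if (io.readLaserData(scan) && io.readOdometryData(odom)) {
            // ... feed the potential field and the skills here ...
            io.sendBaseReference(0.0, 0.0, 0.0);  // vx [m/s], vy [m/s], va [rad/s]
        }
        r.sleep();               // keep the loop frequency fixed
    }
    return 0;
}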
Specifications
The specifications of the system are formed by the available hardware and software, and by some of the requirements for the challenges.
- Limited time to complete the maze challenge (7 min).
- Limited access to the PICO robot.
- Robot hardware is not to be modified.
- Hardware access is facilitated via ROS.
- Translational/rotational speed limit of the robot actuators.
- Use of C++ and SVN.
- Available sensors: odometer and laser.
- Use of controller feedback.
- Unknown location of entrance and exit on the boundary of the maze.
Interfaces
Within the system, two types of interfaces are distinguished. First, there is the communication between the different components of the system, shown as arrows in the figure above. The second type is the interface through which the robot interacts with its environment. This includes a user interface to program the robot, and sensors to sense the environment. There is no user interface to communicate with the robot while it is solving the maze, as it is autonomous. The sensors used by the robot are discussed in the specifications.
Implementation
So, how was the software implemented? This is indicated in the figure on the right. As can be seen, there are different contexts with different tasks. The world model consists of a potential field, which is used to build up the mapping and scanning. This context communicates with the task context and, of course, with the robot context, which supplies the laser data. The potential field and the world map are passed on to the task context, which uses this information to make decisions and configure skills. These skills contain, for example, lower level driving skills such as turning. Of course, these skills communicate with the robot as well, for example by sending a driving speed. Finally, the abstraction layer converts low level skills into real driving. Luckily, this particular part of the system was already provided by the course's lecturing team.
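To make this flow of information concrete, the following skeleton (all names hypothetical, not our actual classes) shows how the three contexts could be wired together in one loop:

struct Robot {
    void readSensors() { /* fetch laser and odometry data */ }
    void setSpeed(double /*vx*/, double /*vy*/, double /*va*/) { /* to the wheels */ }
};
struct WorldModel {
    void update(Robot&) { /* rebuild the potential field from the latest scan */ }
};
struct Task {
    void decide(WorldModel&) { /* choose a target, switch states */ }
    void drive(Robot& robot) { robot.setSpeed(0.0, 0.0, 0.0); /* from the force vector */ }
};

int main() {
    Robot robot; WorldModel world; Task task;
    while (true) {               // main control loop
        robot.readSensors();     // robot context supplies the laser data
        world.update(robot);     // world model: mapping and scanning
        task.decide(world);      // task context: decisions and skill configuration
        task.drive(robot);       // skills send a driving speed to the robot
    }
}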
Composition Pattern
Besides describing the behavior of the model in a task-skill-motion model, an overview of the model structure can be made using composition patterns. A composition pattern describes the structure of the robot model focusing on a certain role. The composition pattern can be decoupled using the 5C's to develop functionality. The 5C's are the Computation, Communication, Coordination, Configuration and Composition of a certain part of the model. As an example of decoupling part of the composition pattern to develop functionality using the 5C's, the decoupling of the potential field functionality will be explained.
Trigger
To start the potential field, it should be triggered. The first time this is done by constructing the world model; later on, by updating the world model.
when triggered // in the world model: on construction, later on each update
Configuration
The parameters of the potential field, such as the radius of the scanning circle, are set.
double rangeCircle = 1.2; // Radius of the circle used for the potential field
Communication
Next, the potential field skill receives the laser data from PICO, by updating the class containing the laser data.
emc::LaserData * scan = pico->getScan(); // Get laser data
Coordination
Determine which of the gathered laser data is within the configured range and make a selection. This is done by iterating through the scan data:
if (*it > scan->range_min && *it < scan->range_max && *it < rangeCircle) { /* keep this data point */ }
Computation
For every data point within the range, the distance to PICO is calculated and a force vector is computed, as will be discussed in the potential field method.
r = it->getRadius();                                                    // Distance from PICO to the data point
F = 1 / pow(rMult * (r - 0.15), 5);                                     // Repulsive force magnitude
wallForce.addPoint(-F * cos(it->getAngle()), -F * sin(it->getAngle())); // Add the force components
Communication
The potential field is uploaded to the world model.
pot->update(); // Update potential field
Coordination
Based on the potential field in the world model, gaps are found. These gaps are used to make decisions, and the chosen decision is set as the target.
Computation
From the set target a target vector is calculated, in the same way as the wall forces but with an attracting instead of a repulsive force. A new net force vector is then calculated from the wall vectors and the target vector.
pot->setTarget(atGap); // Calculate a target vector at the chosen gap
Communication
The calculated force vector is stored publicly so it can be used for driving PICO.
pot->update();
Logging
The computed forces are shown in a visualization, so they can be evaluated and used for debugging. An example of the visualization is shown in the picture below.
(Figure: visualise::pot, the visualization of the computed forces)
Methods
This section contains various algorithms and skills that, in the opinion of the team, require a more detailed explanation. First, the potential field, which is used for driving and collision prevention. Then, an explanation of how this algorithm is used in the drive skill, which is fairly simple. Finally, the safety layer and the turn skill are explained.
Potential Field
In order to solve the maze successfully, the robot needs to be able to drive autonomously. This involves collision prevention and the ability to reach a certain target. While the target itself is provided by a higher level of control, a skill to actually reach it is needed. PICO's driving skill does both by using a so-called resulting force vector. In principle, this system is simple: objects and locations that are to be avoided exert a repelling force, while desirable ones are given an attracting force. The resulting force then gives the most desirable direction to drive in, which only needs to be normalized and sent to the lowest level of drive control. For this to work, the function calculating the force should mathematically be of a different order for undesirable and desirable targets; otherwise the attracting and repelling contributions could cancel out and leave the robot not knowing what to do.
In practice, this is implemented as follows. For all wall points provided by the laser range finder, the following 'force' is calculated:
[math]\displaystyle{ F_{w,i} = \frac{1}{r_m (r-0.05)^5}. }[/math]
Of course, this gives numerical problems when the distance to the wall [math]\displaystyle{ r=0.05 }[/math]; in that case a very high value is assigned instead. [math]\displaystyle{ r_m }[/math] is a scaling constant, which can be tuned later to reach the desired behavior. The contributions of all wall points in both plane directions (x, y) are summed to obtain one force vector. Then, the target point is given force
[math]\displaystyle{ F_t = r_t^2, }[/math]
with [math]\displaystyle{ r_t }[/math] the Euclidean distance to the target. The resulting force will be pointing in the direction which is most desirable to drive in. Note how this method is self regulatory, i.e. when the robot for some reason does not actually drive in the direction of the resulting force, the new resulting force of the subsequent time step will automatically correct for this. There is no need to store forces of all wall points, simply summing them vector-wise will suffice. Also, there is no need for additional collision avoidance.
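As a minimal sketch of this computation (types and helper names are hypothetical; the constants follow the formulas above):

#include <cmath>
#include <vector>

struct Vec   { double x = 0.0, y = 0.0; };
struct Polar { double r, angle; };   // laser point in robot-centered polar coordinates

// Resultant force: summed repulsive wall terms plus one attractive target term.
Vec resultingForce(const std::vector<Polar>& walls, const Polar& target, double rm) {
    Vec f;
    for (const Polar& w : walls) {
        double d = w.r - 0.05;
        double F = (d > 1e-3) ? 1.0 / (rm * std::pow(d, 5)) : 1e6; // cap at the wall
        f.x -= F * std::cos(w.angle);   // repel: push away from the wall point
        f.y -= F * std::sin(w.angle);
    }
    double Ft = target.r * target.r;    // attract: F_t = r_t^2
    f.x += Ft * std::cos(target.angle);
    f.y += Ft * std::sin(target.angle);
    return f;                           // normalize before sending to the base
}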
In the GIF shown in the previous section, the wall force vector, the target (white dot), and the target vector can be seen in action.
Lower Level Skills
In this section, the lower level skills are elaborated on. Also, the implementation and the design decisions are explained.
Drive Skill
The Driving Skills are based on the information provided by the Potential Field and the higher level decision making. Once a target is set, the sum of the attracting force of the target and the repelling force of the walls will result in a net driving direction. This drive direction is scaled differently in the x and y direction with respect to PICO:
[math]\displaystyle{ v_x = \frac{1}{2}\cos{(\theta)} v_{max} }[/math],
for the speed in x direction, where [math]\displaystyle{ \theta }[/math] denotes the angle of the drive vector, and
[math]\displaystyle{ v_y = \frac{1}{5}\sin{(\theta)} v_{max} }[/math]
for the speed in the y direction. The rotational speed is simply 1.5 times the angle of the net driving direction vector. However, when the drive angle becomes larger than 40 degrees, PICO only rotates towards it, to prevent collisions. Then, pico->setSpeed is updated. In PICO's low level safety layer, a safety routine prevents the robot from moving backwards: when the drive vector points backwards, PICO is only allowed to rotate and move in the y direction.
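A sketch of this scaling, directly following the formulas above; the function and type names are illustrative, and the backward clamp reflects the safety behavior just described:

#include <cmath>

struct Cmd { double vx, vy, va; };   // base speed setpoints

// Scale the net drive direction (angle theta, in radians) into base speeds.
Cmd driveCommand(double theta, double vmax) {
    Cmd c;
    c.va = 1.5 * theta;                         // rotate toward the drive vector
    if (std::fabs(theta) > 40.0 * M_PI / 180.0) {
        c.vx = c.vy = 0.0;                      // large angle: only rotate
    } else {
        c.vx = 0.5 * std::cos(theta) * vmax;    // v_x = (1/2) cos(theta) v_max
        c.vy = 0.2 * std::sin(theta) * vmax;    // v_y = (1/5) sin(theta) v_max
    }
    if (c.vx < 0.0) c.vx = 0.0;                 // never drive backwards
    return c;
}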
Safety Layer
The first safety measure is the potential field algorithm itself, which by design keeps the robot from getting too close to the walls. However, for completeness and to have safety redundancy in the code, another safety routine was implemented. The basic idea is to prevent the robot from hitting the walls (head-on or otherwise), so the distance to the walls is continuously compared to a safety margin. If the distance is smaller, the robot stops and enters a routine that maneuvers it away from the wall, placing it in a location from which it can keep driving safely.
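A minimal sketch of the distance check (names illustrative; the maneuvering routine itself is omitted):

#include <vector>

// True when any valid laser beam is closer than the safety margin, in which
// case the caller stops PICO and starts the move-away-from-the-wall routine.
bool tooClose(const std::vector<float>& ranges, float margin) {
    for (float r : ranges)
        if (r > 0.01f && r < margin)   // r near zero is an invalid reading
            return true;
    return false;
}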
A second safety measure is the connection state skill. A true state of this skill is required to run the main loop, activating all the other skills and tasks of PICO. This is a very basic function, nevertheless, required for robust operation.
Sensor data acquisition
One of the low level skills is the interface with the hardware. PICO is equipped with an odometry sensor measuring the relative base movement, and with a laser range finder. Both of these components can be individually activated as a low level skill. Other functions, skills, and tasks can use this globally available data.
Turn Skill
In order to have a modular approach to handling side turns, the operations were separated using a simple state machine. When advancing through the corridor, the robot checks for large magnitude differences between the laser's measurement vectors. Such differences can represent an "opening" in the wall which, if large enough, indicates a new corridor. If the robot detects that the corridor has two possible paths, the state is changed to "Junction" mode. Immediately after entering "Junction" mode, the junction's location in the maze is stored. This information is not used for the turning itself, but could prove useful when designing the higher decision skills to solve the maze. Once the data is stored, the current state becomes "Drive Junction", which takes the robot to an estimate of the junction's center. At this point, the robot turns to whichever side the new corridor is on. These states are represented graphically in the following figures.
Once the robot has taken a turn, it must enter the new corridor and detect the walls so that it can continue navigating. These operations are carried out by a "Drive Out" state. If the laser signal indicates that the robot is again in a corridor, it moves back to the "Navigate Corridor" state and continues. When the robot detects that there are no more walls in front of it or on the sides, and there are no large differences between the laser vectors (implying there are no new corridors), it interprets this as the corridor challenge being completed, and a "Victory" state is reached that stops the robot's motion. The next figure shows these states.
Decision State Flow
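As an illustration, this state flow can be written down compactly as follows. The state names mirror the ones above; the transition flags are our own shorthand, not the actual code:

enum class State { NavigateCorridor, Junction, DriveJunction, DriveOut, Victory };

// One transition step; the boolean flags summarize the laser-based checks.
State next(State s, bool junctionSeen, bool atCenter, bool inCorridor, bool noWalls) {
    switch (s) {
        case State::NavigateCorridor:
            if (noWalls)      return State::Victory;   // nothing ahead or aside
            if (junctionSeen) return State::Junction;  // opening in the wall
            return s;
        case State::Junction:      return State::DriveJunction;  // node stored here
        case State::DriveJunction: return atCenter   ? State::DriveOut         : s;
        case State::DriveOut:      return inCorridor ? State::NavigateCorridor : s;
        case State::Victory:       return s;           // stay stopped
    }
    return s;
}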
Recognizing maze Features
The detection of dead ends is based on the number of targets detected in the potential field part of the code. As explained before, the potential field environment automatically detects gap vectors and stores them. In principle, when no gaps are detected, the robot has reached either a dead end or an open space. To make the distinction more robust, an extra condition was added, based on the laser measurement in front of the robot: if it detects an object nearby, the robot is in a dead end; otherwise, it is in an open space. In an open space the robot may also detect no gaps, since it does not sense any walls; in that case it still sets a target away from the robot.
Open spaces are detected using the range circle. This circle is made wider depending on the number of gap vectors that were detected during the last 5 iterations of the main loop. The minimum number of gaps detected during these 5 iterations (called minGaps), i.e. the robustly detected gaps, is determined and stored. This information is used to find out what kind of environment the robot is in, and to switch the state from corridor to junction, for example. A sketch of this bookkeeping is given below.
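A minimal sketch of this bookkeeping, assuming a simple history container (names hypothetical):

#include <algorithm>
#include <deque>

// Track the gap counts of the last 5 iterations of the main loop; the
// minimum over the window is the robustly detected gap count (minGaps).
int updateMinGaps(std::deque<int>& history, int gapsNow) {
    history.push_back(gapsNow);
    if (history.size() > 5) history.pop_front();
    return *std::min_element(history.begin(), history.end());
}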
The differentiation of the junction types (left junction, right junction, T-junction, and cross junction) is also done with the gap vectors. However, since the exact junction type must be known, the min/max gap vectors cannot be used. Therefore, the size of the last list of gap vectors is used, which contains only information from the last measurement given by the potential field. The gap count on its own only distinguishes the 3-way junctions from the 4-way (cross) junction.
Next, to differentiate the directions within a 3-way junction, the directions of the individual gap vectors are used. Since a 3-way junction only yields 2 gap vectors (the gap in the entrance direction is not recognized), only the directions of these 2 gap vectors with respect to PICO have to be calculated. The gap vector with the smallest angle with respect to PICO is the most straight one. Since the gap vectors are calculated counterclockwise, it is known where the corridors lie with respect to PICO. This information is also used to drive to the middle, as explained in the previous section on the turn skill. A sketch of this classification is given below.
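A sketch of this classification; the 0.5 rad threshold for "roughly straight ahead" is an assumed tuning value, not taken from the real code:

#include <cmath>
#include <vector>

enum class Junction { Cross, LeftStraight, RightStraight, LeftRight };

// Classify a junction from its gap-vector angles (radians, robot frame,
// positive = counterclockwise = to PICO's left). The entrance gap is not
// recognized, so a 3-way junction yields 2 gaps and a cross junction 3.
Junction classify(const std::vector<double>& gapAngles) {
    if (gapAngles.size() > 2) return Junction::Cross;
    bool left = false, right = false;
    for (double a : gapAngles) {
        if (std::fabs(a) < 0.5) continue;  // roughly ahead: the straight corridor
        if (a > 0.0) left  = true;         // corridor on the left
        else         right = true;         // corridor on the right
    }
    if (left && right) return Junction::LeftRight;
    return left ? Junction::LeftStraight : Junction::RightStraight;
}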
For a clear overview on how the different states are linked, see the state machine figure in the following section.
State Machine
The high level navigation algorithm was constructed as a state machine. One of the main assumptions is that the robot always starts in a corridor; as such, the initial state of the diagram is "Junction". For a quick overview the following figure is presented:
Here it can be seen that the information required for most of the state transitions is the number of gaps, and in a few cases an additional check on the laser data.
The group of states comprising "Junction", "Drive Junction", "Store Junction" and "Drive out Junction" is used for junction navigation. In terms of physical location inside the maze, these states are all active in the different kinds of crossings.
Challenges
Corridor Challenge
Preparation for the Embedded Motion Control corridor competition, with the results shown in a short movie.
Available Approaches
Based on the results from the test run with the real robot, the following solutions were proposed:
- Robust Turning: Safe or Fast modes.
- Simple Turns: hard-coded turns.
A relevant concept in the development of these solutions was being able to identify that the robot was turning, rather than treating a turn as a mere extension of "regular corridor driving". As a result, a simple state machine was implemented, which modifies the decision making of the robot for as long as it recognizes that it is in a junction.
Results
The driving through the straight part of the corridor was smooth, and no abrupt corrections were evident. Just as in the simulations shown previously, the robot detected the side exit appropriately. The following video shows the result of the corridor competition.
The separation of the turning action into states is meant to facilitate the implementation of higher level navigation afterwards. After the challenge, some new scenarios were tested to gain insight into the robustness of the code: first, a small separation between the wall blocks; second, a narrower corridor; and finally, removal of the side exit walls, to check whether the robot could still recognize the exit, turn, and stop. The results were as follows: 1) the code is able to navigate well even if some discontinuities exist in the walls; 2) the robot can drive through a corridor of a different width, although if the width is smaller than the safety margin established in the code, the robot will not move; and 3) even if the side corridor has no walls, the robot is able to recognize it, turn, and stop. These tests show that the robust approach can deal with an uncertain environment up to this level.
Evaluation of the Challenge
After having completed the challenge and running extra simulations the comparison of the results and the initial requirements was performed.
- Navigate the maze: Achieved. PICO was able to navigate through the corridor and take the first available exit in the corridor.
- Subsystem based design: Partially achieved. The implementation of the system included a rudimentary but explicit separation of PICO's functionality.
- Adaptable to environment: Partially achieved. The decision making at this stage of the project was not robust enough to handle a dynamic environment. However, it was still able to solve the challenge successfully.
- Door Handling: Not applicable.
- Testing and Simulation: Achieved. The robot's functionality was tested successfully in both simulation and the laboratory.
- Autonomous Operation: Achieved.
- Use given hardware: Achieved.
- Fast decision making: Achieved. It is worth noting that for this challenge two approaches were available, Safe and Fast modes. In both of them the decision on taking a turn was rather fast because it did not demand a high level of complexity.
- Smart Decision Making: Not applicable.
Maze Challenge
After a discussion within the team on the approach for the maze challenge, a high-level logic was defined. The basic idea is presented in the following Figure:
These three steps have several implications at a lower level. For example, one of the main difficulties is how the robot will gain knowledge about the maze: knowing whether it has been in a given section of the maze before, and whether there is a possibility that has not been tried yet at a given junction.
Before the maze competition the system was tested in the simulator and in the laboratory. The simulator was mainly used to check high level logic, such as recognizing intersections, turning, recognizing dead ends and following the door routine. The following video shows a simulation with the robot navigating a cross junction map with a door in one of the corridors.
Results
In this section the maze challenge attempt videos are available; analyses of them have been done to infer the causes of the robot's behavior. Maze Challenge First Attempt video:
It was noted that the robot did not go into the dead ends; therefore, it could not be evaluated whether there was a door or not.
The navigation did not appear to correct its path abruptly. Regarding the decision making, it can also be seen in the video that the robot selects both left and right corridor options.
This, however, was not enough to enter a new corridor completely.
Maze Challenge Second Attempt video:
It was noted that the robot again did not choose to enter a new corridor and thus was not able to find the door.
The system was not robust enough to avoid hitting the walls. One possibility is that PICO was in a state where the potential field did not have high priority.
The other possibility is that the magnitude of the resultant repulsive force vector was not large enough to repel PICO from the wall.
Post-Challenge Simulation
Unfortunately the robot was not able to finish the Maze Challenge. Therefore, the team decided to reproduce the design of the maze in the simulator to test the system used on the day of the competition. The resulting maze can be seen in this illustration:
The robot was then able to navigate through the first section of the maze, find and cross the door, cross the open space after the door, and approach the zone where the exit was located. Before actually going to the exit, a problem with the open space identification emerged; the system did not know what to do and started switching between states very quickly. This can all be seen in the next video:
Evaluation
- Navigate the maze: Partially achieved. PICO was able to navigate the corridors; however, it could not recognize the dead ends. It is worth noting that it was able to avoid going in loops.
- Sub-system based design: Achieved. The implementation of the system's functionality was explicitly separated in the code, so in theory the parts can be modified independently while still interacting with each other.
- Adaptable to environment: Partially achieved. The system was able to differentiate between several states (e.g. junctions, dead ends, open spaces) and to make decisions based on that. However, the final challenge showed that the way in which states were defined was not as robust as expected.
- Door handling: Not achieved. During the challenge the robot could not find the door; thus, it is not possible to know whether the door handling routine actually worked.
- Testing and simulation: Achieved. The system was intensively tested using simulations for the high level decision skills. Because of implementation problems, the laboratory time available was not fully used. Nevertheless, before the challenge a real world test was performed in a simple maze and it performed satisfactorily.
- Autonomous Operation: Achieved.
- Use given hardware: Achieved.
- Fast decision making: Achieved. All decisions were taken in real time.
- Smart decision making: Partially achieved. Because of problems with the implementation of a smart global navigation algorithm, the final system used a random decision making routine. Even though this is not the best approach, the team agreed that, given the project constraints, it was the most feasible option.
Uniqueness of the Design
- The implementation of the potential field method to find a suitable path for the robot (PICO) is one of the strongest parts of the group's design. With it, the robot can detect walls easily and maneuver along the provided path; it is a very robust algorithm for obstacle detection. The potential field is built from the laser range finder (LRF) scans and assigns low potential to free paths and high potential to walls, so the robot knows where the obstacles are.
- The state machine approach allowed PICO to identify the maze features and navigate accordingly. The Potential Field was particularly helpful in letting PICO make decisions based on the targets set in the maze, while the state machine approach let PICO identify the maze and take decisions based on this information.
- A simple decision making algorithm was developed where PICO selects an option at junctions randomly. This increases the probability of turning in different directions, thus preventing loops. The team agreed that this was the best method to use, given the difficulties found in the node approach; a sketch of this routine is given below.
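A minimal sketch of such a routine (names hypothetical); the junction's open corridors are passed in as options:

#include <random>
#include <vector>

// Pick one of the open corridor options uniformly at random, so repeated
// visits to the same junction eventually take different exits.
int randomChoice(const std::vector<int>& options) {
    static std::mt19937 rng(std::random_device{}());      // seeded once
    std::uniform_int_distribution<std::size_t> pick(0, options.size() - 1);
    return options[pick(rng)];                             // options must be non-empty
}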
Code
In this section we explain which parts of the code are of high quality and reflect the lessons learned. The code presented in the snippet link is the potential field. We chose this piece of code because it consists of very efficient computation algorithms; moreover, the code is sophisticated precisely because it is so simple. The journey to get there was also efficient and pragmatic: in the beginning, the potential field really consisted of a grid of points, each having a potential, and the desired drive direction was calculated using a gradient approach. In the end, however, that method still only yields a vector containing the drive direction. The revised method does the same, but without the huge 3D data matrix that the potential field used to be. The last reason we chose to upload this code is that it is very robust, and it is therefore also used in functions developed later.
As an additional remark, the potential field also contains a function called isGapGreat(), whose name is found very amusing within our multilingual group.
The code for the potential field can be found in the following GitHub gist:
https://gist.github.com/anonymous/c3ecae8a053ac96616be
Reflection and Outlook
The code design in general provides a lot of flexibility to change and interchange skills, which proved valuable in the programming and debugging. The decision to keep the design modular and clearly divided allowed for simultaneous programming of different parts of the code.
Some implementation choices proved to be good, such as the potential field, which provided a robust way of driving when well tuned. Others still need work, such as the nodes and the mapping, which could be greatly improved using feature detection methods such as corner detection and length scanning.
The code was kept relatively modular with respect to the RHAL. This means that the changes required to allow the use of a different robot are bound to specific parts of the code, while most methods would stay the same.
As a further outlook, even multiple robots could be run at the same time using separate potential fields, but a common map. The lower level skills would need to be duplicated and tuned individually. Each robot has its own set of skills associated with it, which are completely independent of each other. These skills would include the driving, the potential field and the configuring skills. Skills that would have knowledge of multiple robots would include the map and the decision making. A global planner could be added to coordinate robot behavior. In this way, minimal changes to the code would allow multiple robots to have separate robust driving, but a shared map and coordination. The maze could then be explored by multiple robots simultaneously.
Decisions that would need reevaluation if the code were developed further include the range circle. Because of the range circle, a lot of information is thrown away, and skills need to access the raw data for more information. A more sophisticated filter, providing all the functionality needed, is recommended.
Extra feature designs
Due to a shortage of time, two essential features that were conceived were never fully implemented. The higher level ideas are summarized below: first recognizing the maze features, and second building a world map.
Recognizing Maze Features
In order to store maze features in some sort of map, they need to be identified from the laser data, preferably in the least number of steps to keep the algorithm fast. To achieve this, a robust and quick way to distinguish between the maze features has to be conceived. For the corridor challenge, gap vectors were already built from the laser data, as explained before. Now, these features are extended with wall vectors.
The main principle is as follows: from the laser data, the so-called corner points are deduced (indicated in the referred figure as black circles). Note that every maze feature has a unique number of corner points: for instance, a simple corner has only one, and a full junction has four. Then, by counting both the wall vectors and the corner points, every feature can be robustly identified. This is indicated in the figure below.
Before really implementing this solution, we noticed that the handling of open spaces by our default potential field method was already sufficient. Also, junctions are already detected by the gap vector method; only a door inside a corridor wall would then be overlooked. Due to the time shortage, we decided to drop the maze feature recognition, since it is a lot of implementation work for only a small gain in robustness. Detection of dead ends and open spaces was, however, elaborated and made more robust, as explained in the section on recognizing maze features.
Maze Mapping
The question then becomes: how to build the world model? The premise is to build a simplified model containing only the information necessary to solve the maze. The proposed approach is to use nodes, which represent relevant features of the maze, e.g. start and end points, different kinds of junctions, corners, and open areas. The relevant characteristics to be saved are the approximate global position of each node and how it is connected to the nodes around it. The following figure shows a rough representation of a simple maze loop.
A very relevant problem to be solved is that of "loop closure", so that the robot can navigate smartly through the maze without repeating corridors unnecessarily. After discussing these problems within the team, the following characteristics were selected as the most relevant (a sketch of such a node record follows the list):
- Node identification number.
- Node type, e.g. start, T/X junction, dead end, open area, corner, end.
- Movement direction referenced to initial robot orientation.
- Connected nodes.
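A sketch of a node record holding these characteristics (all names illustrative):

#include <vector>

enum class NodeType { Start, Junction, DeadEnd, OpenArea, Corner, End };

struct Node {
    int id;                       // node identification number
    NodeType type;                // kind of maze feature
    double heading;               // movement direction w.r.t. initial orientation
    std::vector<int> neighbours;  // ids of the connected nodes
};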
Again, due to the time shortage and an implementation that was not robust enough, we decided to drop the mapping and make random decisions instead. This solved the problem of looping and is actually very robust, since no mapping mistakes can be made. However, it is of course not the fastest way to solve the maze.
Lessons Learnt
- Working with a version control system facilitates decentralized development.
- Merging and updating of "stable code" versions works better when carried out by a single person.
- How to implement a simple supervisory controller.
- Functions are developed better when the work is done in pairs.
- Overall solutions for problems should be brainstormed together.
- Structures from the composition patterns are easier to understand when explicitly separated in the code (e.g. in different files).
- Testing in the real robot is paramount.
- Simplified solutions tend to work better (e.g. state machines for navigation).
- It’s helpful if the functionality has a visual representation (e.g. potential field vectors and targets).
Appendix
Group members
Group 1 consists of the following members:
Name | Student nr | Email |
---|---|---|
Maurice Poot | 0782270 | m.m.poot@student.tue.nl |
Timo Ravensbergen | 0782767 | t.ravensbergen@student.tue.nl |
Bart Linssen | 0786201 | a.j.m.linssen@student.tue.nl |
Rinus Hoogesteger | 0757249 | m.m.hoogesteger@student.tue.nl |
Nitish Rao | 0927795 | n.rao@student.tue.nl |
Aakash Amul | 0923321 | a.v.h.amul@student.tue.nl |
Ignacio Vazquez | 0871475 | i.s.vazquez.rodarte@student.tue.nl |
Qt from terminal
If Qt Creator is not run from a terminal, the programs inside it cannot connect to the ROS master. To be able to run Qt Creator from the desktop without any problems, follow a procedure similar to the one used in the Customizing Ubuntu tutorial for the Terminator file:
Edit
~/.local/share/applications/QtProject-qtcreator.desktop
and change the third line to:
Exec=bash -i -c /home/rinus/qtcreator-3.3.2/bin/qtcreator
This will run QtCreator from a terminal even if the desktop icon is used.