Embedded Motion Control 2019 Group 2

From Control Systems Technology Group
<div style="width:calc(80vw)">
== Group members ==


{| class="TablePager" style="width: 450px; min-width: 460px; margin-left: 2em; float:center; color: black;"
|-
! scope="col" | '''Name'''
! scope="col" | '''Student number'''
|-
| Bob Clephas || 1271431
|-
| Tom van de Laar || 1265938
|-
| Job Meijer || 1268155
|-
| Marcel van Wensveen || 1253085
|-
| Anish Kumar Govada || 1348701
|}


= Introduction =
[[File:challenges.png|right|thumb|600px|Figure 1: The two challenges during the course. On the left the escape room challenge where PICO must drive out of the room autonomously. On the right the hospital challenge where PICO must visit multiple cabinets autonomously]]
Welcome to the wiki of group 2 of the 2019 Embedded Motion Control course. During this course the group designed and implemented software that allows a PICO robot to complete two different challenges autonomously. The first challenge is the "Escape room challenge", in which the PICO robot must drive out of a room from a given initial position inside that room. In the second challenge, the "Hospital challenge", the goal is to visit an unknown number of cabinets, placed in different rooms, in a specific order. For both challenges the group designed one generic software structure that is capable of completing both challenges without changing the overall structure of the program. Example maps of both challenges are shown in Figure 1 to give a general idea of the challenges. The full details of both challenges are given on the general wiki page of the 2019 Embedded Motion Control course ([http://cstwiki.wtb.tue.nl/index.php?title=Embedded_Motion_Control_2019#Escape_Room_Competition link]).
 
=== Escape room challenge ===
The main goal of the challenge was to exit the room as fast as possible, given an arbitrary initial position and orientation of PICO. The robot had to deal with several constraints, such as the distance of the finish line from the door, the width of the hallway, the time limit and the required accuracy.
The PICO robot was first placed in a rectangular room with unknown dimensions and one door/opening towards the corridor. The corridor was perpendicular to the room, with an opening at its far end as well. The initial position and orientation of the PICO robot were completely arbitrary. PICO had to cross the finish line, placed at a distance of at least 3 meters from the door into the corridor. The walls of the room were also not perfectly straight, which posed a challenge when mapping the room from the laser data. The algorithms used, their implementation, the program flow and the results of this challenge are discussed in detail in the following sections.
=== Hospital challenge ===
The main goal of this challenge was to visit a number of cabinets, in a particular order given by the user, as fast as possible. The global map, consisting of the rooms and cabinets, and the starting room of PICO were provided beforehand.
The hospital challenge contained multiple rooms with doors that could be open or closed. It tested the ability of PICO to avoid static and dynamic objects and to plan an optimal path dynamically. The static objects included clutter objects and doors, and the dynamic objects included people moving through the rooms while PICO was performing its task; neither was specified on the given global map. Lastly, PICO had to visit the list of cabinets in the order specified by the user before the start of the challenge. The algorithms used, their implementation, the program flow and the results of this challenge are discussed in detail in the following sections.
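The high-level task of the hospital challenge, visiting a user-given list of cabinets in order, can be represented as a simple queue of targets. The following is a minimal sketch under our own naming (`CabinetTaskList` is a hypothetical helper, not the actual course code):

```cpp
#include <deque>
#include <optional>
#include <utility>

// Hypothetical task list: cabinet indices to visit, in the order given by
// the user. The front of the queue is always the current target; it is
// popped once the cabinet action has been completed.
class CabinetTaskList {
public:
    explicit CabinetTaskList(std::deque<int> cabinets)
        : cabinets_(std::move(cabinets)) {}

    // Current target cabinet, or empty when all cabinets have been visited.
    std::optional<int> currentTarget() const {
        if (cabinets_.empty()) return std::nullopt;
        return cabinets_.front();
    }

    void markVisited() { if (!cabinets_.empty()) cabinets_.pop_front(); }
    bool done() const { return cabinets_.empty(); }

private:
    std::deque<int> cabinets_;
};
```

The order is fixed before the start of the challenge, so a plain FIFO queue suffices; no re-prioritization is needed at runtime.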




= Design document =
The group started by creating a design document to arrive at a well-thought-out design of the software. This document describes the starting point of the project: the given constraints and hardware are listed in an overview, together with the requirements and specifications of the software. The last part of the design document illustrates the overall software architecture the group set out to design. The design document provided a solid basis upon which the remaining part of the project was built. The full document can be found [[Media:4SC020_DesignDocument_Group2.pdf|here]]. Important specifications and requirements are listed in Table 1, and the general software architecture is explained in full detail in the chapter [[#General software architecture and interface | General software architecture and interface]].


''' Table 1: Requirements and specifications '''
{| class="TablePager" style="width: 650px; min-width: 800px; margin-left: 2em; float:center; color: black;"
|-
! scope="col" | '''Requirements'''
! scope="col" | '''Specifications'''
|-
| Accomplish predefined high-level tasks || 1. Find the exit during the "Escape room challenge"
2. Reach a predefined cabinet during the "Hospital challenge"
|-
| Knowledge of the environment || 1. The robot should be able to identify the following objects:
* Location of walls
* Location of the doors
* Location of the cabinets
* Location of static objects
* Location of dynamic objects
2. The map is at 2D level
3. Overall accuracy of <0.1 meter
|-
| Knowing where the robot is in the environment || 1. Know the location at 2D level
2. XY with <0.1 meter accuracy
3. Orientation (alpha) with <10 degree accuracy
|-
| Being able to move || 1. Max. 0.5 [m/s] translational speed
2. Max. 1.2 [rad/s] rotational speed
3. Able to reach the desired position with <0.1 meter accuracy
4. Able to reach the desired orientation with <0.1 radian accuracy
|-
| Avoid obstacles || 1. Never bump into an object
2. Being able to drive around an object when it is partially blocking the desired path
|-
| Standing still || 1. Never stand still for longer than 30 seconds
|-
| Finish as fast as possible || 1. Within 5 minutes (Escape room challenge)
2. Within 10 minutes (Hospital challenge)
|-
| Coding language || 1. Only allowed to write code in the C++ coding language
2. GIT version control must be used
|}


== Components and functions ==
The components and their functions are split into software components and hardware components.


'''Software components'''
{| class="TablePager" style="width: 650px; min-width: 800px; margin-left: 2em; float:center; color: black;"
|-
! scope="col" | '''Software block'''
! scope="col" | '''General functionality'''
|-
| World model || 1. Storing all the relevant data (map / tasks / position / etc.)
2. Data communication between the other components (all data goes through the world model)
|-
| Task manager || 1. Set the operation modes of the other blocks, depending on the current statuses of the blocks and the high-level tasks
|-
| Perceptor || 1. Reading sensor data
2. Identifying walls and objects based on laser data and creating a local map
3. Fit the local map onto the global map (Hospital challenge)
4. Locate the robot on the local map (Escape room challenge)
5. Locate the robot on the global map (Hospital challenge)
6. Keep the local map aligned with the global map (Hospital challenge)
|-
| Path planner || 1. Create a path from the combined map, the current position and the desired position
2. Check if the path is free and plan a new path if the current path is blocked
3. Keep track of open and closed doors
|-
| Drive controller || 1. Actuate the robot such that it arrives at the desired location (keeping speed and acceleration in mind)
2. Check if the desired direction is safe to drive to (based on the laser data obtained from the perceptor, not on the map)
|}


'''Hardware components and their general functionalities'''
{| class="TablePager" style="width: 650px; min-width: 800px; margin-left: 2em; float:center; color: black;"
|-
! scope="col" | '''Hardware module'''
! scope="col" | '''General functionality'''
|-
| PICO robotic platform
- Jazz telepresence robot
| General framework with all the hardware. This framework allows execution of the assignments.
|-
| Sensors:
- Laser range finder
- Wheel encoders
- 170° wide-angle camera
| - Scan the environment and detect objects
- Determine the distance traveled by the wheels
- Can be used for a vision system (for example, object detection)
|-
| Actuators:
- Holonomic base (omni-wheels)
- Pan-tilt unit for the head
| - Allows the robot to move on the ground
- Can be used to move the head with the display and camera
|-
| Computer:
- Intel i7 processor
- OS: Ubuntu 16.04 (64-bit)
- ROS with own software layer
| - Performs computations
- Software that allows execution of programs
- Allows easy connections between software and hardware
|}


== Environment ==
The environments for both challenges meet the following specifications:  <br />

'''Escape room challenge'''  <br />
- Rectangular room, unknown dimensions. One opening with a corridor. <br />
- Starting point and orientation are random, but equal for all groups. <br />
- The opening will be perpendicular to the room. <br />
- The far end of the corridor will be open. <br />
- Walls will not be perfectly straight; the walls of the corridor will not be perfectly parallel. <br />
- The finish line is at least 3 meters into the corridor; the walls of the corridor will be a little bit longer.  <br />
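The obstacle-avoidance requirement can be checked directly on the raw laser range data: the robot keeps a circle around itself free of obstacles. A minimal sketch, with a hypothetical helper name; the 0.3 meter safety radius comes from the design document, and in the real software the ranges are read from the emc framework and invalid readings are filtered out first:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical helper (not the actual course code): true when every laser
// range is larger than the safety radius, i.e. the circle around the robot
// is free of obstacles.
bool safetyCircleFree(const std::vector<double>& laserRanges,
                      double safetyRadius = 0.3)
{
    return std::all_of(laserRanges.begin(), laserRanges.end(),
                       [safetyRadius](double range) { return range > safetyRadius; });
}
```

A check like this runs every iteration on the close-proximity region, independent of the map, so it still holds when the map is wrong or outdated.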


'''Hospital challenge''' <br />
- Walls will be perpendicular to each other. <br />
- A global map is provided before the competition. <br />
- The locations of the cabinets are provided in the global map. <br />
- Static elements, not shown in the global map, will be in the area. <br />
- Dynamic (moving) elements will be in the area. <br />
- Objects can have a random orientation. <br />
- Multiple rooms with doors. <br />
- Doors open and close over time. <br />
- A list of "to-be-visited" cabinets is provided just before the competition. <br />


= General software architecture and interface =
<div id="General software architecture and interface"></div>
A generic software architecture was implemented, as discussed in the lectures during the course. The software is split into several building blocks: the Task manager, World model, Perceptor, Path planner and Drive controller. The data flow between these building blocks is shown in Figure 2. Splitting the software into these blocks keeps it structured and divides the functionalities of the robot.

[[File:GeneralOverview.png|center|thumb|600px|Figure 2: Global overview of the software architecture used]]
The [[#Task manager | task manager]] helps structure the [[#program flow | program flow]] by setting the mode of each software block (block mode). Based on the statuses received from the software blocks (block status), a particular task is performed. It mainly governs the behavior of the program and segments the challenge into a step-by-step process/task division, taking fallback scenarios into account. It manages the high-level tasks, which are discussed in more detail later. The [[#Perceptor | perceptor]] is primarily used to convert the input sensor data into information usable by PICO. It creates a local map, fits it onto the global map and then keeps the two aligned when required. The [[#Path planner | path planner]] plans an optimal path through nodes, such that PICO can reach the required destination. The [[#Drive controller | drive controller]] drives PICO to the required destination by giving a motor set point in terms of translational and rotational velocity. The [[#World model | world model]] stores all the information and acts as the medium of communication between the other blocks. Detailed explanations of each block can be found in the [[#Software blocks | software blocks]] section.


'''Data flow'''
{| border="2" cellpadding="5" cellspacing="0" align="center" style="margin-left: 3em;"
|-
| '''Block'''
| '''Data'''
| '''Transition from'''
| '''Transition to'''
| '''Block'''
| '''Data'''
| '''Transition from'''
| '''Transition to'''
|-
| '''Perceptor'''
| Laser range finder data
| LRF Sensor input
| Perceptor
| '''Path planner'''
| Desired destination
| World model
| Path planner
|-
|
| Odometry data
| ODO Sensor input
| Perceptor
|
| Current position
| World model
| Path planner
|-
|
| Json file - Global Map and location of cabinets
| File read
| Perceptor
|
| Combined map
| World model
| Path planner
|-
|
| Perceptor block modes
| World model
| Perceptor
|
| Path planner block mode
| World model
| Path planner
|-
|
| Local map
| World model
| Perceptor
|
| Path planner block status
| Path planner
| World model
|-
|
| Global map
| World model
| Perceptor
|
| Next node
| Path planner
| World model
|-
|
| Current position
| World model
| Perceptor
|
| Path from Dijkstra algorithm
| Path planner
| World model
|-
|
| Zero position
| World model
| Perceptor
| '''Drive controller'''
| Desired location
| World model
| Drive controller
|-
|
| Global map
| Perceptor
| World model
|
| Current location
| World model
| Drive controller
|-
|
| Perceptor block status
| Perceptor
| World model
|
| Close proximity region
| World model
| Drive controller
|-
|
| Local map
| Perceptor
| World model
|
| Drive controller block mode
| World model
| Drive controller
|-
|
| Combined map
| Perceptor
| World model
|
| Drive controller block status
| Drive controller
| World model
|-
|
| Zero position
| Perceptor
| World model
|
| Motor set point
| Drive controller
| World model
|-
|
| Current position
| Perceptor
| World model
|-
|
| Close proximity region
| Perceptor
| World model
|-
| '''Task manager'''
| User input cabinet list
| User
| Task manager
|-
|
| Block status
| World model
| Task manager
|-
|
| High level tasks
| World model
| Task manager
|-
|
| Block modes
| Task manager
| World model
|-
|
| Updated high level tasks
| Task manager
| World model
|}
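The table above lists a path computed with Dijkstra's algorithm flowing from the path planner to the world model. A minimal, self-contained sketch of such a shortest-path computation over a node graph (the adjacency-list layout is illustrative, not the actual course implementation):

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's algorithm on a weighted graph given as adjacency lists:
// adj[u] holds pairs (v, cost). Returns the node sequence from `start`
// to `goal`, or an empty vector when no path exists.
std::vector<int> dijkstra(const std::vector<std::vector<std::pair<int, double>>>& adj,
                          int start, int goal)
{
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    std::vector<int> prev(adj.size(), -1);
    using Entry = std::pair<double, int>;  // (distance, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;

    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // stale queue entry, already improved
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
        }
    }

    if (dist[goal] == inf) return {};  // goal unreachable
    std::vector<int> path;
    for (int v = goal; v != -1; v = prev[v]) path.push_back(v);
    return {path.rbegin(), path.rend()};
}
```

Breaking and resetting links between nodes (as the PP_MODE_BREAKLINKS / PP_MODE_RESETLINKS modes do) then simply amounts to removing and restoring edges in `adj` before re-running the search.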


== Overall program flow ==
Block statuses and modes are primarily used for communication between the task manager and the other blocks. It also helps gain easy understanding of the segmentation of each task/phase. In each iteration of the software the Perceptor, Path planner and Drive controller receive a block mode from the task manager, describing the required functionality of that during that iteration. After executing the task, the software block returns a block status describing the outcome of the task. i.e. if it was successful or not. The block diagram shown below explains the transmission of block modes and statuses between the Task manager and the other software blocks. Also, the exact block modes and statuses are listed below.
[[File:Flowchart_phases.png|right|thumb|400px|Figure 2: Overall program structure with phases]]


The overall program is divided in 5 phases. During a phase all actions lead to one specific goal. When the goal is reached, a transition is made towards a new phase.
[[File:blockdiag.jpg]]
Within a phase it is possible to have multiple cases. Each iteration of the program it is evaluated to what phase and case the program should move. It is also possible to stay in a specific phase and case for longer than one program iteration, however, in most situations this is not needed.
Every case has a specific block mode for each software block and the transitions between cases and phases is based on the blockstatusses of all blocks.


== Phases flowchart ==
The block status that were defined are as follows :
In figure 2 the overall structure with the phases is shown. The different phases have the following goals:


<table style="border-collapse:collapse;border-spacing:0" class="tg"><tr><th style="font-family:Arial, sans-serif;font-size:14px;font-weight:bold;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Phase</th><th style="font-family:Arial, sans-serif;font-size:14px;font-weight:bold;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Goal</th></tr><tr><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Init</td><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Initialise all blocks</td></tr><tr><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Fitting</td><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Fit the local map onto the global map and determine the location of the robot onto the global map</td></tr><tr><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Relocate</td><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Move the robot to a desired cabinet. 
This includes:<br>- Planning a path<br>- Driving the robot along the path<br>- Check if path is blocked<br>- Measure environment and update map<br>- Update location of the robot</td></tr><tr><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Cabinet_action</td><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Perform required interactions with the cabinet</td></tr><tr><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Error</td><td style="font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;text-align:left;vertical-align:top">Bring the robot to a safe state, output an error to the user and exit the program</td></tr></table>
*STATE_BUSY - Set for each block in case it is still performing the task
*STATE_DONE - Set for each block if the task is completed
*STATE_ERROR - Set for each block if it results in an error


The block modes that were defined are as follows :


== Init phase ==
*MODE_EXECUTE - defined for all the blocks to execute its main functionality
The init phase focuses on the initialisation of the program. This phase has one case and the flowchart of this phase is shown in figure 3.
*MODE_INIT - defined for all the blocks to initialize
*MODE_IDLE - defined for all the blocks to do temporary disable the block
*PP_MODE_ROTATE - defined for the path planner which directs the drive controller to change orientation of PICO
*PP_MODE_PLAN - defined for the path planner to plan an optimal path
*PP_MODE_NEXTNODE - defined for the path planner to send the next node in the path
*PP_MODE_FOLLOW - defined for the path planner follows the path
*PP_MODE_BREAKLINKS - defined for the path planner breaks the links between a node when
*PP_MODE_RESETLINKS - defined for the path planner to rest the broken links between nodes which were broken due to an object between the two nodes.
*PC_MODE_FIT - defined for the perceptor to fit the local map onto the global map


These statuses and modes are used in the execution of appropriate phases and cases discussed in the [[# Overall program flow | overall program flow]] section.


[[File:Init_phase.png|right|thumb|400px|Figure 3: Init phase]]
<div id="program flow"></div>
= Overall program flow =
[[File:Flowchart_phases.png|right|thumb|400px|Figure 3: Overall program structure with phases]]


The overall software behavior is divided into five clearly distinguished phases. During each phase all actions lead to one specific goal, and when that goal is reached, a transition is made towards the next phase.
The following five phases are identified and shown in Figure 3.


''' Initialization phase '''


The software always starts in this phase. All the inputs and outputs of the robot are initialized and checked during this phase. Moreover, all the required variables in the software are set to their correct values. At the end of the initialization phase the software is ready to perform the desired tasks, and it switches to the fitting phase.


''' Fitting phase '''


During the fitting phase PICO tries to determine its initial position relative to the given global map. It determines the location with the help of the laser range finder data and tries to fit the environment around the robot to the given map. In case the obtained laser data is insufficient to get a good, unique fit, the robot starts to rotate. If still no unique fit is obtained after the rotation, the robot drives towards a location inside the room and rotates again at this location. The full details on how the fitting algorithm works are described in the [[#Perceptor | perceptor]] section of this wiki. As soon as there is a unique and good fit, the location of PICO is known and the software switches to the relocate phase.


''' Relocate phase '''


During the relocate phase the goal is to move the PICO robot to the desired cabinet. To do this, a path is calculated from the current location towards the desired cabinet in the [[#Path planner | path planner]]. The [[#Drive controller | drive controller]] follows this path and avoids obstacles on its way. When the path is found to be blocked, a new path is calculated around the blockage. As soon as the PICO robot has arrived at the desired cabinet, the software switches to the cabinet action phase.


''' Cabinet action phase '''


During the cabinet action phase the PICO robot executes the required actions at the cabinet. This includes saying 'I arrived at cabinet #' and taking a snapshot of the current laser data to prove that the robot has arrived at the correct location. After performing the required actions the software determines if the PICO robot should visit another cabinet, and if so, it switches back to the relocate phase. If all the cabinets are visited the software is stopped and the challenge is successfully completed.


''' Error phase '''


The error phase is different from the other phases in the sense that it is never desired to end up in this phase. The software only switches to the error phase when something is different than expected. This can for example happen when a required file is missing during the initialization phase, or when the fitting phase has gone through all its fallback mechanisms and still has not succeeded in finding a unique fit. In all cases something unforeseen has happened and the software is out of options to recover itself. If that happens, the PICO robot is switched to a safe state, e.g. all motors are disabled, and all information useful for debugging is displayed to the operator. Finally the software is terminated, so that the possible cause can be found and fixed.




''' Detailed descriptions of the different phases '''


For all the phases detailed flowcharts and descriptions were made. In those flowcharts all the fallback mechanisms are shown and the exact behavior is explained in full detail. During the writing of the software those flowcharts were used to create the software and to find bugs. They also helped the group come to a clear agreement on what the software architecture should look like. Finally, they helped the group discuss how to handle unforeseen situations and how a found solution can fit into the existing software.




The flowcharts with all the required details are grouped in [http://cstwiki.wtb.tue.nl/images/Phase_description.pdf this] document.
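The node-based planning described for the relocate phase (a shortest path over linked nodes, with blocked links removed) can be sketched with Dijkstra's algorithm. This is an illustrative implementation; the data layout and names are assumptions, not the group's code:

```cpp
#include <cassert>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// A link connects two nodes with a travel cost; breaking a link simply
// removes it from the adjacency list, as described for the relocate phase.
struct Link { int to; double cost; };
using Graph = std::vector<std::vector<Link>>;

// Returns the node sequence from start to goal, or an empty vector when no
// path exists (the situation in which the task manager resets broken links).
std::vector<int> planPath(const Graph& g, int start, int goal) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), INF);
    std::vector<int> prev(g.size(), -1);
    using QItem = std::pair<double, int>;
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> q;
    dist[start] = 0.0;
    q.push({0.0, start});
    while (!q.empty()) {
        auto [d, u] = q.top(); q.pop();
        if (d > dist[u]) continue;  // stale queue entry
        for (const Link& l : g[u])
            if (d + l.cost < dist[l.to]) {
                dist[l.to] = d + l.cost;
                prev[l.to] = u;
                q.push({dist[l.to], l.to});
            }
    }
    std::vector<int> path;
    if (dist[goal] == INF) return path;  // no path found
    for (int v = goal; v != -1; v = prev[v]) path.insert(path.begin(), v);
    return path;
}
```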


= Software blocks =
== World model ==
The world model is the block that stores all the data that needs to be transferred between the different blocks. It acts as the medium of communication between the perceptor, task manager, path planner and drive controller. It contains all the get and set functions that the various blocks use to perform their respective tasks.
The detailed overview of the input and output data from the software blocks to the world model is shown in Figure 4. A description of what each data line represents is stated in Table 3.
[[File:Worldmodeloverview.png|center|thumb|800px|Figure 4: Detailed overview of software structure with data flow between the blocks]]
''' Table 3: Description of data that is transmitted'''
{| class="TablePager" style="width: 650px; min-width: 800px; margin-left: 2em; float:center; color: black;"
|-
! scope="col" | '''Data'''
! scope="col" | '''Description'''
|-
| LRF Sensor input ||  Laser range finder data, which is used to create local map objects like walls and to detect corners
|-
|  ODO Sensor input || Odometry data, which gives the current position and orientation of PICO
|-
| Local map|| This is the map created from the LRF data
|-
| Global map|| This is the given map with the position of cabinets and rooms specified
|-
| Close proximity region|| This is the region defined around PICO in order to avoid obstacles
|-
| Current position|| This data stores the current position of PICO, which is continuously updated
|-
| Next node|| Next node to visit in the optimal planned path
|-
| Optimal path|| Shortest path created by Dijkstra algorithm
|-
| Desired destination/ next cabinet node|| This stores the number of the cabinet that PICO currently needs to visit
|-
| Block modes|| Used by every block to determine which action to perform
|-
| Block status|| The status of each block (done/busy/error), based on which the block modes are set and the required task is performed
|-
|}
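As a minimal sketch (not the group's actual class) of how the world model can expose get and set functions for data shared between the blocks, consider the snippet below. The mutex is an added assumption, only needed if blocks access the world model concurrently; all names are illustrative:

```cpp
#include <cassert>
#include <mutex>

// Minimal world-model sketch: one shared datum ("next node") with a
// thread-safe getter and setter, as a stand-in for the full set of
// get/set functions described in this section.
class WorldModel {
public:
    void setNextNode(int n) {
        std::lock_guard<std::mutex> lk(m_);
        nextNode_ = n;
    }
    int getNextNode() const {
        std::lock_guard<std::mutex> lk(m_);
        return nextNode_;
    }
private:
    mutable std::mutex m_;
    int nextNode_ = -1;  // -1: no node set yet
};
```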
== Task manager ==
The task manager functions as a finite state machine which switches between different tasks/states. It focuses mainly on the behavior of the whole program rather than the execution. Based on the current phase, current case, current block statuses and counters, it determines the next operation phase and case, in which the corresponding block modes of the perceptor, path planner and drive controller are set for the upcoming execution. It communicates with the other blocks via the world model.
[[File:Escaperoomtask.png|right|thumb|800px|Figure 5: Task manager state machine for the escape room challenge]]
 
Since the "Escape room challenge" and the "Hospital challenge" require a completely different approach in terms of cooperation between the blocks, the task manager was rewritten for the "Hospital challenge" with a different program structure while maintaining the same framework.
 
'''ESCAPE ROOM CHALLENGE''' <br>
In this challenge, the task manager initially sets the modes of the different blocks for PICO to find the door and drive to it. It either chooses to stay in the same state or drives to the exit, based on the received statuses of the blocks. If PICO is driving to a possible door or is searching for a door, the task manager stays in the state of driving to the door. If PICO drives to the found door, the task manager sends a command to drive PICO to the finish. If PICO is driving to a possible exit or is searching for an exit, the task manager still commands PICO to drive to the finish. Lastly, if PICO crosses the finish line, the task manager sets the modes of all the blocks to idle. The functions of the task manager for the escape room challenge are described in the state machine diagram in Figure 5.


'''HOSPITAL CHALLENGE''' <br>
In this challenge, the task manager maintains the same framework with a different program structure. It functions as a state machine and handles the behavior of the program in a well-structured manner, considering various fallback scenarios which can be edited whenever needed. The list of cabinets to visit is initially read by the task manager and sent to the world model. It works primarily on setting appropriate block modes, setting counter variables and changing from a particular phase and case of the program to another in order to perform a required task, based on the block statuses it receives. The various phases and cases are discussed further in the [[#Overall program flow | overall program flow]] section. The functions of the task manager for the hospital challenge are described in the flowchart in Figure 6 and in the description in the [[#Overall program flow | overall program flow]] section.


The function description can be found here: [[Media:Task_manager_document.pdf|Task manager description]]
[[File:Taskhospital.png|center|thumb|800px|Figure 6: Task manager general state machine for the Hospital challenge]]
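The phase switching described above can be sketched as a small state machine. The enum values and the reduction of all block statuses to a single combined status are simplifications for illustration, not the group's actual code:

```cpp
#include <cassert>

// Simplified phases from the overall program flow and block statuses.
enum Phase { PHASE_INIT, PHASE_FIT, PHASE_RELOCATE, PHASE_CABINET,
             PHASE_DONE, PHASE_ERROR };
enum Status { STATE_BUSY, STATE_DONE, STATE_ERROR };

// Advance to the next phase only when the blocks report done; any error
// falls through to the error phase, as described for the error handling.
Phase nextPhase(Phase current, Status blocks, bool cabinetsLeft) {
    if (blocks == STATE_ERROR) return PHASE_ERROR;  // unrecoverable situation
    if (blocks == STATE_BUSY)  return current;      // goal not reached yet
    switch (current) {
        case PHASE_INIT:     return PHASE_FIT;
        case PHASE_FIT:      return PHASE_RELOCATE;
        case PHASE_RELOCATE: return PHASE_CABINET;
        case PHASE_CABINET:  return cabinetsLeft ? PHASE_RELOCATE : PHASE_DONE;
        default:             return current;        // DONE and ERROR are terminal
    }
}
```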


== Visualization ==


To make debugging of the program faster and more effective, a visualization class is created. This visualization class draws an empty canvas of a given size. With one line of code, defined objects can be added to this canvas. Each loop iteration the canvas is cleared and can be filled again with updated or new objects. The visualization class is excluded from the software architecture, because it has no real functionality except for debugging. Any software block can make use of the visualization to quickly start debugging. However, the visualization is mainly used in the main loop to keep the program structured.


The visualization makes use of the OpenCV library and a few of its basic functionalities such as imshow, newcanvas, Point2d, circle, line and putText. The struct "object", which is further explained in [[Media:Perceptor_used_data_structures.pdf‎|Perceptor data structures]] and defined in the [[#code snippets | code snippet]] "class definition", can be added to the visualization. For example, the visualization shows whether points are connected (blue) or not (white). Moreover, darker blue points indicate that a corner is concave, while light blue points indicate convex corners with respect to the robot. The maps as defined in the world model can also be visualized. The map struct contains a vector of objects, such as walls, cabinets, nodes, doors, paths etc. If needed, the object struct can easily be extended with new objects, which can then be visualized in the same manner.


In figure 7 example objects are visualized. The global map is created by the perceptor from the JSON file; the walls of the global map are visualized with purple lines. The walls of the room where PICO starts are drawn with black lines. The laser range finder data can be visualized as red dots. The close proximity region is visualized using red and green dots: a green dot indicates a direction which contains no objects, and a red dot indicates an object within the defined close proximity region. Furthermore, arrows are used to visualize how close the object is to the robot. This region is used by the drive controller for the potential field algorithm. Using the LRF data the current scan is created: the data points from the LRF are converted into line points and visualized as red lines. The local map is stored in the world model and contains multiple current scan lines merged over time; it is visualized as green lines and corner points which are colored based on their properties, such as convex, concave or not connected. Once a path is planned by the path planner block, it can be added to the global map; therefore, the planned path can also be visualized by visualizing the global map. The node set as destination for PICO is visualized dark blue. The final destination of a path, in this case always a cabinet node, is visualized red. If needed, the nodes and the links between nodes can be visualized as well (normally white). Often only the nodes from the planned path are visualized, to prevent a chaotic canvas. If certain objects are visualized using the same color, the colors can easily be changed.
 
[[File:Visualisation.gif|frame|border|center|600px|Figure 7:The simulation objects]]
 
== Perceptor ==


<div id="Perceptor"></div>




The perceptor receives all of the incoming data from the PICO robot and converts it to useful data for the worldmodel. The incoming data consists of odometry data obtained from the wheel encoders of the PICO robot and the laser data obtained by the laser scanner. Also the JSON file containing the global map and the location of the cabinets, which is provided a week before the hospital challenge, is read and processed. The output of the perceptor to the world model consists of the global map, a local map, a current map, the current/zero position and a close proximity region. The incoming data is handled within the perceptor by the functions described below.


The inputs, functions and outputs of the perceptor are shown in the following Figure:
 


[[File:Perceptor.png]]


The data structures used in the perceptor can be viewed in the following document: [[Media:Perceptor_used_data_structures.pdf‎|Perceptor data structures]].


[[File:Fitting 2.gif|right|frame|550px|Figure 8:Visualization of the local map fitted onto the global map (fitting)]]
[[File:LineFitAlgorithm.gif|right|frame|550px|Figure 9: Visualisation of the line fit algorithm]]


Each function from the perceptor is described in more detail in the upcoming section.


''' Init'''<br>
The PICO robot accumulates the absolute driven distance from the moment it starts up. Therefore, when the software starts it needs to reinitialize the current position of the robot. When the robot receives its first odometry dataset, it saves this position and sets it as the zero position in the worldmodel. Later, this zero position can be subtracted from new odometry data to obtain the position of PICO with respect to the position where it was initialized. The initialization of the odometry data happens only once, when the software is started.


''' Create global map '''<br>
A JSON file containing information of the global map is read once by the perceptor when the program is started, and the information is directly transformed into the data structure used by the program. The global map contains all the walls of the hospital challenge, as well as the nodes and the links between them. Furthermore, the starting area of the robot is defined on the given map. The definition of this area is needed to make a more accurate and faster fit of the global map onto the local map.


''' Close proximity '''<br>
Dynamic objects are not captured in the local map. To prevent collisions between the robot and any object (dynamic or static), a close proximity region is determined. This region is described as a circle with a configured radius around the robot. The function outputs a vector describing the measured distances of a filtered set of laser range finder data points to the worldmodel, from where the drive controller can request it when needed.
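A minimal sketch of such a close proximity check, assuming the laser data is available as (angle, range) pairs; the radius value, the invalid-reading cutoff and all names are illustrative:

```cpp
#include <cassert>
#include <vector>

// One laser measurement: the beam angle and the measured distance.
struct Hit { double angle; double range; };

// Keep only the measurements that fall inside the configured safety radius
// around the robot; these are the directions the drive controller must avoid.
std::vector<Hit> closeProximity(const std::vector<Hit>& scan, double radius) {
    std::vector<Hit> close;
    for (const Hit& h : scan)
        if (h.range > 0.01 && h.range < radius)  // skip invalid (near-zero) readings
            close.push_back(h);
    return close;
}
```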


''' Locate '''<br>
This function makes use of the zero-frame which is determined by the init function. Once the odometry data is read, a transformation is applied to determine the position of the robot with respect to the initialized position. Other functions need the position of the robot to navigate through the maps; therefore, this current position is outputted to the worldmodel.
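The zero-frame transformation can be sketched as a 2D pose subtraction with a rotation into the zero frame; the struct and function names are assumptions, not the group's code:

```cpp
#include <cassert>
#include <cmath>

// A planar pose: position (x, y) and orientation a (radians).
struct Pose { double x, y, a; };

// Express a raw odometry pose in the zero frame saved by the init function:
// subtract the zero position and rotate the offset into its orientation.
Pose locate(const Pose& odo, const Pose& zero) {
    const double dx = odo.x - zero.x;
    const double dy = odo.y - zero.y;
    Pose p;
    p.x =  std::cos(zero.a) * dx + std::sin(zero.a) * dy;
    p.y = -std::sin(zero.a) * dx + std::cos(zero.a) * dy;
    p.a = odo.a - zero.a;
    return p;
}
```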


''' Fit map '''<br>
For the PICO robot to know where it is located on the given global map, a fit function is created. This fit function is used when the perceptor receives the 'PC_MODE_FIT' command from the task manager, which only happens in the beginning of the program as described in the task manager section. The function works by picking one object (with 2 points, thus basically a line) from the global map and one from the local map. Next it rotates the global map such that the two lines are parallel (this is done twice per pair of objects, for the two possible orientations) and thereafter translates the global map such that the middle points of these lines are on top of each other. When this is done, the number of lines and points 'on top' of each other is determined. Two lines are on top of each other when the angle difference is smaller than a defined bound and when the distance between their middle points is smaller than a defined bound. To make the function more robust, point properties such as convex and concave are also taken into account and should match. A fit score is calculated based on the combined angle and distance errors of the matched walls. If there are static objects placed inside the initial room, only parts of the walls can be observed in the local map. Therefore, the shorter walls of the local map are extended to the length of the corresponding global map walls, so it can be tested whether they match. Furthermore, the fit score is only based on walls that are matched; in combination with the extending of the shorter walls of the local map, this ensures robustness to static objects in the initial room. When the number of matching walls is higher than or equal to that of another iteration, the transformation, the number of matching walls and the fit score are saved. The global map is transformed back to how it was, and another pair of objects is tested. Once all the combinations of objects from the local map and global map are tested, the best transform is used to transform the global map.
This transformation only happens if the global map matches correctly with the local map. In Figure 8 the fitting of the map can be seen. Initially the robot cannot match enough walls with a good fit score; therefore, the robot starts rotating. Once the robot has added enough walls to the local map to match the global map, the transformation is saved. Lastly, this transformation is executed and the global map is transformed onto the local map, even with the two static objects in the initial room.
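The "on top of each other" test used when counting matching lines can be sketched as follows. The bound values (0.1 rad, 0.2 m) are illustrative, not the group's configuration, and the names are assumptions:

```cpp
#include <cassert>
#include <cmath>

// A line segment between two endpoints.
struct Seg { double x1, y1, x2, y2; };

static double segAngle(const Seg& s) {
    return std::atan2(s.y2 - s.y1, s.x2 - s.x1);
}

// Two lines are "on top of each other" when their angle difference and the
// distance between their middle points are both below configured bounds.
bool onTop(const Seg& a, const Seg& b,
           double maxAngle = 0.1, double maxDist = 0.2) {
    const double PI = std::acos(-1.0);
    double da = std::fabs(segAngle(a) - segAngle(b));
    // walls have no direction, so a difference of pi is also a match
    da = std::min(da, std::fabs(da - PI));
    const double mx = (a.x1 + a.x2) / 2 - (b.x1 + b.x2) / 2;
    const double my = (a.y1 + a.y2) / 2 - (b.y1 + b.y2) / 2;
    return da < maxAngle && std::hypot(mx, my) < maxDist;
}
```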


''' Align '''<br>
Once the global map is fitted onto the local map, the align function is used to keep the two maps on top of each other. It works by checking the average angle difference and average offset between the current scan map and the global map. To do this, it is first checked which of the lines from the current scan map are 'on top' of the lines from the global map. Here, 'on top' is slightly different from the definition in the fit function: two lines should be roughly parallel, their parallel distance should be small and there should be no gap between them. The exact definition of these terms is explained in Figure 10. When two lines are on top of each other, the angle error and parallel distance are added to the total angle and total distance error. After all the objects are checked against each other, this total error is divided by the number of matching walls and used to translate the global map to match the current scan.


The inputs and outputs of the perceptor are shown in the following figure:

[[File:Perceptor.png]]
''' Scan '''<br>
To create usable information from the laser rangefinder data, a scan function is created. The raw data is in polar coordinates, so it is first transformed to cartesian coordinates. These transformed datapoints are then used in a fit algorithm whose aim is to create wall objects from the datapoints. The first step of the fit algorithm is creating a new wall object with the first and second datapoint as corner points. Secondly, a fit error is determined by measuring the projection error of all datapoints between the corner points of the wall object (initially none). The fit error is split into a maximum and an average fit error; for this explanation only the maximum fit error is used. Thirdly, this fit error is compared with a threshold value. If the maximum fit error does not exceed the threshold, the second corner of the wall object is updated to the next datapoint. When the maximum fit error exceeds the threshold, the wall object is added to the current map if it is fitted through enough datapoints; the minimum number of datapoints is configurable. Before the wall object is added, its last corner point is set to the previously evaluated datapoint. If unchecked datapoints remain, a new wall object is created. Depending on the distance between the previous and currently evaluated datapoint, the new wall object starts from the previous or the currently evaluated datapoint. Figure 9 explains the algorithm graphically and a code snippet of this algorithm can be found in [[#code snippets |code snippets]].
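A minimal sketch of this scan step is given below, assuming the laser message provides a range array plus a start angle and angle increment. The names are ours, and the restart rule is simplified to always start the next wall at the last fitted point, whereas the actual code chooses based on the gap distance.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Convert the polar laser rangefinder data to cartesian datapoints.
std::vector<Pt> toCartesian(const std::vector<double>& ranges,
                            double angleMin, double angleInc) {
    std::vector<Pt> pts;
    pts.reserve(ranges.size());
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double a = angleMin + i * angleInc;
        pts.push_back({ranges[i] * std::cos(a), ranges[i] * std::sin(a)});
    }
    return pts;
}

// Perpendicular projection error of point p onto the line through a and b.
double projectionError(const Pt& a, const Pt& b, const Pt& p) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::hypot(dx, dy);
    if (len < 1e-9) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dx * (a.y - p.y) - dy * (a.x - p.x)) / len;
}

// Grow a wall by moving its second corner to the next datapoint until the
// maximum projection error of the in-between points exceeds maxErr; then
// close the wall (ending at the previously evaluated datapoint) if it was
// fitted through at least minPoints datapoints, and start a new one.
std::vector<std::pair<Pt, Pt>> fitWalls(const std::vector<Pt>& pts,
                                        double maxErr, std::size_t minPoints) {
    std::vector<std::pair<Pt, Pt>> walls;
    std::size_t start = 0;
    for (std::size_t end = start + 1; end < pts.size(); ++end) {
        double worst = 0.0;
        for (std::size_t k = start + 1; k < end; ++k)
            worst = std::max(worst, projectionError(pts[start], pts[end], pts[k]));
        if (worst > maxErr) {
            if (end - start >= minPoints)
                walls.push_back({pts[start], pts[end - 1]});
            start = end - 1;  // simplified: next wall starts at the last point
        }
    }
    if (pts.size() >= start + minPoints && start + 1 < pts.size())
        walls.push_back({pts[start], pts.back()});
    return walls;
}
```

Running `fitWalls` on an L-shaped point set splits it into two walls at the corner, which is exactly the behavior Figure 9 illustrates.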


The data used in the perceptor:


[[File:Perceptor data.png]]
''' Merge '''<br>
In this function the new laser data is merged with an existing map to form a more robust and complete map of the environment. The laser data, in the form of the current map created by the scan function, is imported together with the previous output of the merge function, called the local map. Firstly, the previously created map is transformed to the current position of the robot. Secondly, similar walls are merged. Walls are considered similar if they are parallel to each other, have a small difference in angle or are split into two pieces; two walls that are close to each other but of different lengths are merged as well. The merge settings are stored in the configuration file and the different merge cases are shown in Figure 10. Once similar walls are merged, the endpoints of walls are connected to form the corner points of the room. Each wall point has a given radius, and if another point lies within that radius the two points are connected. To improve the robustness of the local map, the location of the connected corner point is based on the relative weights of the points, where the weight is defined by the number of datapoints the wall is fit through. Finally, wall objects that are not connected at the end of the function are removed, so the local map only consists of walls that are connected to each other.
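The weighted connection of two nearby endpoints can be sketched as follows; the types and names are illustrative and the weights stand for the number of datapoints each wall was fit through.

```cpp
#include <cassert>
#include <cmath>

struct Corner { double x, y; };

// Weighted merge of two nearby wall endpoints: the merged corner ends up
// closer to the endpoint whose wall was fit through more laser datapoints,
// so well-supported walls dominate the corner location.
Corner mergeCorners(const Corner& a, int weightA,
                    const Corner& b, int weightB) {
    const double wa = weightA, wb = weightB, sum = wa + wb;
    return {(a.x * wa + b.x * wb) / sum, (a.y * wa + b.y * wb) / sum};
}
```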


''' Identify '''<br>
This function identifies the properties of the points in the local map. For instance, corner points can be convex or concave; this property is later used to help identify doors and to improve the fit function. The position of the robot determines whether a corner point is convex or concave. With this property information the map is scanned for doors. For the escape room challenge a door is identified as two convex points close to each other. A door is defined between two walls, and these walls should be approximately in line. Also, the two corner points cannot belong to the same wall, which further increases the robustness of the map. It is unlikely that the local map immediately contains two convex points that can form a door. Therefore, a possible door is defined, so the robot can drive to its location and check whether there is a real door at this position. There are multiple scenarios in which a possible door can be formed, such as one convex point and one loose end, two loose ends, or a loose end facing a wall. Concave points can never form a door and are therefore excluded. When forming a possible door, the length of the door and the orientation of the walls are important as well.
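One common way to implement the convex/concave decision is with two cross products: one for the turn direction of the walls at the corner and one for the side of the wall the robot is on. This is our own sketch of that idea, not the actual implementation.

```cpp
#include <cassert>

struct V { double x, y; };

// prev and next are the wall endpoints adjacent to the corner, in the order
// they appear in the laser scan. 'turn' is the cross product of the two
// corner arms (the turn direction of the walls); 'side' is the cross product
// telling on which side of the first arm the robot stands. A corner sticking
// out towards the robot (convex) turns away from the robot's side.
bool isConvex(const V& prev, const V& corner, const V& next, const V& robot) {
    double turn = (corner.x - prev.x) * (next.y - corner.y)
                - (corner.y - prev.y) * (next.x - corner.x);
    double side = (corner.x - prev.x) * (robot.y - corner.y)
                - (corner.y - prev.y) * (robot.x - corner.x);
    return turn * side < 0.0;
}
```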


[[File:Lines on top.gif|center|frame|550px|Figure 10: Line matching]]


== Path planner ==


<div id="Path planner"></div>
[[File:Link broken2.gif|right|frame|550px|Figure 11: Visualization of an obstacle creating broken links]]


For planning paths the choice is made to use nodes that are placed at the following locations: <br>
- In front of a door (one at each side) <br>
- In front of a cabinet <br>
- In the starting area <br>
- Distributed over each room in order to plan around objects, e.g. in the middle <br>


A detailed description can be found [[Media:PathPlanner.pdf|here]].
Gridding the map was also considered but not chosen because of its higher complexity; with separate nodes, debugging is easier as well. The path planner determines the path for the PICO robot based on the list of available nodes and the links between these nodes. For planning the optimal path, [https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm Dijkstra's] algorithm is used with the distance between the nodes as cost. The choice for Dijkstra is based on the need for an algorithm that can plan the shortest route from one node to another; it is sufficient for this application, so the extra complexity of the [https://en.wikipedia.org/wiki/A*_search_algorithm A*] algorithm was not needed. The planned path is a set of positions that the PICO robot is going to drive towards, called the 'next set of positions'. This set is saved in the world model and used by the task manager to send destination points to the drive controller. Figure 11 shows how the path planner behaves in case of a closed door: the link is broken and a new path around the closed door is planned.
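A compact version of Dijkstra's algorithm over such a node graph could look as follows. The adjacency-list representation, with broken links simply left out of the list, is our own assumption for the sketch.

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Node graph as an adjacency list of (neighbour, distance) pairs; broken
// links are simply absent. Returns the node sequence from start to goal,
// or an empty vector when no path exists (e.g. all links to the goal broken).
std::vector<int> shortestPath(
        const std::vector<std::vector<std::pair<int, double>>>& adj,
        int start, int goal) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), INF);
    std::vector<int> prev(adj.size(), -1);
    using QE = std::pair<double, int>;  // (distance from start, node)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        QE top = pq.top(); pq.pop();
        double d = top.first;
        int u = top.second;
        if (d > dist[u]) continue;  // stale queue entry
        if (u == goal) break;       // goal distance is now final
        for (const std::pair<int, double>& e : adj[u]) {
            if (dist[u] + e.second < dist[e.first]) {
                dist[e.first] = dist[u] + e.second;
                prev[e.first] = u;
                pq.push({dist[e.first], e.first});
            }
        }
    }
    if (dist[goal] == INF) return {};
    std::vector<int> path;
    for (int v = goal; v != -1; v = prev[v]) path.push_back(v);
    return std::vector<int>(path.rbegin(), path.rend());
}
```

Breaking a link then amounts to removing one (neighbour, distance) entry and re-running the search; when the result is empty, the reset-broken-links behavior described below restores the entries and plans again.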


The path planner takes the desired destination, current position, list of all available nodes and the path planner block mode as inputs. The functions of the path planner are idling, planning a path, setting the next node of the planned path as current destination position, breaking a link between nodes, resetting all broken links and rotating the PICO robot. This results in the planned path (next set of positions), the destination position and the path planner block status. The inputs, functions and outputs of the path planner are visualized in the following Figure:
[[File:PathPlanner3.png]]


All functions of the path planner are explained below. <br>


'''Idle''' <br>
When idling, all variables are set to their initial value and the path planner waits for the task manager to set the block mode.


'''Plan paths''' <br>
In this function all possible paths to the desired destination are planned based on the list of nodes and the links between these nodes. Links between nodes can be open or broken, where broken means unavailable. The nodes and the links between them are determined by hand before the challenge and placed on the global map. To plan a path, Dijkstra's algorithm is used with the distance between the nodes as cost. This results in the shortest possible path from the robot to the location of the desired cabinet. The optimal path consists of a list of nodes, which are positions that the PICO robot has to drive towards in order to reach the desired cabinet. This optimal path is stored in the world model as the 'next set of positions', which is used during the following phase of the program.


'''Set next node''' <br>
Once the optimal path is planned, PICO has to follow it. To do this, the list of nodes of the created path is traversed. The next node is set when the task manager changes the mode of the path planner to set next node, which happens when the current destination node is reached.


'''Break link''' <br>
If PICO is not able to reach a node, the link between the current destination node and the last node must be broken. This is done when the task manager sets the block mode to break link.


'''Reset broken links''' <br>
When there is no possible path because all links are broken, the broken links are set to open again and a new path is planned. This is implemented in case a dynamic obstacle was blocking the path. This function is called when the task manager changes the block mode to reset broken links or when the plan path function cannot create a path towards the given destination.

'''Rotate''' <br>
When PICO has to scan the room, it must be able to rotate. Using the rotate function the desired destination of PICO is changed, where the angle is adjusted by the value passed to this function.


== Drive controller ==


<div id="Drive controller"></div>
The drive controller software block ensures that the PICO robot drives to the desired location. It receives the current and desired location and automatically determines the shortest path towards the desired location. While driving it also avoids small obstacles on its path. If an object is too large, or the path is blocked by a closed door for example, the drive controller signals this to the path planner, which calculates an alternative path. To drive towards the destination, three PI controllers, one for each axis (X, Y and rotational), calculate the desired speed per axis, which is sent to the PICO robot with the built-in base functions.


To avoid objects and ensure the PICO robot does not hit a wall or object, two algorithms were considered. The first algorithm takes a fixed radius around the robot and checks for each laser data point on that radius whether the direction is blocked or not. The algorithm then finds, among all free directions, the one that faces the actual goal as closely as possible. The second algorithm considered is the potential field algorithm, which also takes the distance towards nearby objects into account instead of only blocked/not blocked. This second algorithm is more complex but ensures smoother driving of the PICO robot. Eventually the potential field algorithm was implemented because of this smoother driving behavior.
The potential field algorithm uses two types of forces which are added and used to find a free direction. The first is an attractive force towards the desired location. Secondly, all objects (walls, static and dynamic objects) that are close to the robot exert a repellent force away from them: the closer the object, the larger the repellent force. All forces are added together and the resulting direction vector is determined. This new direction is used as the desired direction at that time instance. In Figure 12 the repellent forces are visualized and it can be seen that PICO uses them to drive around an object in its path. The green points show the free directions and the red points the directions towards an object, together with an arrow visualizing the repellent force.
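The force summation can be sketched like this. The gains, the influence radius and the repulsion profile are illustrative choices, not the tuned values from the configuration file.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec { double x, y; };

// One attractive force towards the goal plus a repellent force away from
// every nearby laser point; the normalized sum is the desired direction at
// this time instance. kAtt, kRep and influence are illustrative values.
Vec potentialFieldDirection(const Vec& robot, const Vec& goal,
                            const std::vector<Vec>& obstacles,
                            double kAtt = 1.0, double kRep = 0.05,
                            double influence = 0.5) {
    double gx = goal.x - robot.x, gy = goal.y - robot.y;
    double gLen = std::hypot(gx, gy);
    Vec f{kAtt * gx / gLen, kAtt * gy / gLen};  // assumes goal != robot

    for (const Vec& o : obstacles) {
        double dx = robot.x - o.x, dy = robot.y - o.y;
        double d = std::hypot(dx, dy);
        if (d < influence && d > 1e-6) {
            // Repulsion grows quickly as the obstacle gets closer and is
            // zero outside the influence radius.
            double mag = kRep * (1.0 / d - 1.0 / influence) / (d * d);
            f.x += mag * dx;
            f.y += mag * dy;
        }
    }
    double n = std::hypot(f.x, f.y);
    return {f.x / n, f.y / n};
}
```

Evaluated every control cycle on the fresh laser points, this yields the smooth curving around obstacles visible in Figure 12.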


== Visualisation ==
More details of the Drive Controller, including function descriptions and a detailed flowchart can be found in the Drive Controller functionality description document [http://cstwiki.wtb.tue.nl/images/DriveController_functionality.pdf here]. In the [[#code snippets | code snippet]] section of this wiki the actual potential field algorithm is found.


[[File:Close_prx.gif|frame|border|center|600px|Figure 12: visualization of the PICO driving around an object with the potential field algorithm.]]


= Challenges =


== Escape Room Challenge ==
[[File:escaperoom3.gif|right|frame|550px|Figure 13:Simulation of the escape room]]
Our strategy for the escape room challenge was to reuse the software structure for the hospital challenge as much as possible. The room is scanned from the initial position, and from this location a local map of the room is created by the perceptor, including convex and concave corner points, doors and possible doors (when it is not fully certain the door is real). Based on this local map the task manager gives commands to the drive controller and path planner to position PICO in front of the door. Once in front of the possible door and after it is verified as a real door, the path planner sends the next position to the world model, which in this case is the end of the finish line, detected by two loose ends of the walls. The robot is also able to detect objects in front of it and avoid them.






===Simulation and testing===
Multiple possible maps were created and tested. In most cases the robot was able to escape the room. However, in some cases, such as the room in the escape room challenge, the robot could not escape. These cases were analyzed, but there was not enough time to implement solutions for them. Furthermore, the software was only partly tested in the real environment at the time of the escape room challenge. However, each separate function worked, such as driving to destinations, making a local map with walls, doors and corner points, driving through a corridor and avoiding obstacles, and this created a solid basis for the hospital challenge.


===What went well during the escape room challenge:===
The software basis was made robust: it could detect the walls even though a few walls were placed under a small angle and not straight next to each other. Furthermore, graphical feedback in the form of a local map was implemented on the "face" of the PICO robot. The PICO robot even drove to a possible door and later realized this was not a door. Most important is that the software basis was set for the final challenge, and most of the software written and tested for the escape room challenge can be reused for the final challenge.


===Improvements for the escape room challenge:===
Doors can only be detected if they consist of two convex corners or two loose ends facing each other. In the challenge PICO was therefore not able to detect a possible door: the loose ends were not facing each other, as can be seen in Figure 13. Furthermore, there was no backup strategy when no doors were found, other than scanning the map again. PICO should have re-positioned itself somewhere else in the room, or it could have followed a wall. However, we did not intend to use a wall follower in the hospital challenge, so that would not correspond with our chosen strategy. Another point that can be improved is the creation of the walls. For now, walls can only be detected with a minimal number of laser points. Therefore, in the challenge PICO was not able to detect the small wall next to the corridor straight away. This choice was made to create a robust map, but it also excluded some essential parts of the map.




In the simulation environment the map was recreated, including the roughly placed walls. As expected, in this simulation of the escape room PICO did not succeed in finding the exit, for the reasons explained above.


== Hospital Challenge ==


[[File:MapImageWcommentsCut.png|thumb|right|upright=3|Figure 14: Global map during hospital challenge]]
[[File:Finalmap_emc_inkscape 2019.png|thumb|right|500px|upright=3|Figure 15: The given final map]]






During the hospital challenge the PICO robot must visit multiple cabinets in a hospital to pick up and place medicine. One week before the challenge a global map of the hospital was given, containing the coordinates of the cabinets, walls and doors. This map is shown in Figure 15. However, during the actual challenge one door was closed and extra static objects were added to the environment. In addition, one person was walking around in the hospital, and the PICO robot had to deal with these differences compared to the map. At the beginning of the challenge the specific cabinets and the order in which to visit them were announced: C0, C1 and finally C3. During the challenge the door between C0 and C1 was closed, an extra object was added in the room of C1 and another in the hallway between the rooms.
Out of the nine groups competing in the challenge, only two completed it and we were one of them. We visited all the required cabinets in just under 5 minutes and with this finish time we ended up in second place! In the video below the first part of the challenge is shown; the full video [https://www.youtube.com/watch?v=WREYyta6aR4 Hospital challenge - video group 2 - 2019] in high quality can be found on YouTube. This video also explains what happens and shows how the PICO robot with our software handles those situations.


The software for the hospital challenge is an improved version of the software used during the escape room challenge. The same block structure is used and as much software as possible was reused and adjusted where necessary. Major changes were made to the path planner and task manager, since the hospital challenge is much more complex for those two blocks than the escape room challenge.


=== Video ===
[[File:HospitalChallengeGif.gif|border|upright=4|center|600px|Hospital challenge video]]
<br>
A higher quality version of the video can be found here: [https://www.youtube.com/watch?v=WREYyta6aR4 Hospital challenge - video group 2 - 2019]


=== What went well ===
Since we successfully accomplished the challenge, the overall program and approach of the group worked as desired. The software was able to deal with uncertainties such as the walking person and the unknown objects in the hospital. In particular we are very proud of how the PICO robot dealt with the person walking in the hospital: PICO avoided the person when needed, and the localization was robust enough to still correctly locate the robot while the person was walking. We are also proud of how the PICO robot dealt with the closed door and the object in the hallway. It tried to drive through the closed door but quickly realized that it was closed, and immediately planned a new path around it. On that new path the unknown object was positioned, and the PICO robot drove around it as soon as it realized that the path was blocked. Figure 14 shows a snapshot of the map (purple) with the planned path (green). The white lines are the possible paths between the nodes and the red lines are the broken links at the location of the closed door and the unknown object. This snapshot was taken during the hospital challenge when the PICO robot was standing in front of cabinet 1. The last thing we are particularly proud of is the localization on the map during the challenge: it was robust against the several unknown objects, it fitted correctly at the beginning of the challenge and it was able to deal with the person walking in the hospital.


=== Improvements for the hospital challenge ===
During the challenge we needed one restart. In the first trial the fitting of the map at the beginning of the challenge was not fully correct and the software was not able to correct it while driving. In the second try the fitting was correct and the software made small corrections while driving to ensure the localization stayed correct throughout the challenge. During the second try PICO slightly hit the obstacle in the hallway, which was unexpected since a protection mechanism to avoid running into objects is implemented. Although we extensively tested the protection mechanism, it still happened. We are not fully certain about the exact cause, but we suspect a combination of the person walking around and the size of the object. If the software is used in the future, this exact cause should be investigated in new testing sessions. The last improvement is reducing the total time needed to finish the challenge. The driving speed was lowered during the challenge to improve the accuracy of the localization, because the detected walls and doors were slightly off. If the robustness of the localization is improved, the driving speed can be increased and the finish time reduced.


= Code snippets =
<div id="code snippets"></div>


* [https://gitlab.tue.nl/snippets/135 Main]
* [https://gitlab.tue.nl/snippets/133 Class definitions]
* [https://gitlab.tue.nl/snippets/134 Fit lines]
* [https://gitlab.tue.nl/snippets/139 Potential field algorithm]


= Looking back at the project =


== What went well ==
When looking back at the project, several things went well. Firstly, a good structure was set up before we started coding. Once the whole structure was clear to everyone, parts could easily be divided amongst the team members. Secondly, the data flow (inputs and outputs) was defined before coding of a specific part started, which ensured easy coupling of the different parts. Lastly, larger algorithms and challenges were discussed amongst the team members to ensure the algorithms were thought through and worked as specified. The cooperation between the group members was good and this way of structuring the process led to successfully completing the hospital challenge.


== What could be improved ==
In the beginning of the course we mainly focused on setting up a good general structure, which is a good thing. However, it made it harder to finish the escape room challenge, because less time was devoted to making and especially testing the vital functions needed to finish that challenge. Thus, in the beginning the focus could have been shifted towards finishing this challenge first. Next to that, not all functions that were made were needed in the end. Although the structure of the whole program was clear, not every detail of how to solve a specific challenge was discussed upfront, which led to some redundant functions. Finally, all parts of the code were written by ourselves; only the EMC library and a few other standard libraries (opencv, stdio, cmath, fstream, list, vector, string, cassert) were used. This gave us a lot of control over the functionality, but it also took more time. It would have been better to first search for libraries that met our requirements and only write the code ourselves if no matching library could be found.


== Overall conclusion ==
 
Although there are some things that could be improved, the overall conclusion is that the project was successful. We completed the hospital challenge and the overall software is clearly written and easy to improve and adjust. Most improvements can be made in the robustness of the overall software, and in the beginning of the project the focus could have been more on the escape room challenge. Nevertheless, the created software has a good structure and, when implemented on the PICO robot, it is able to finish the hospital challenge.
 
 

Latest revision as of 11:10, 21 June 2019


== Escape room challenge ==

The main goal of the challenge was to exit the room as fast as possible, given an arbitrary initial position and orientation of PICO. The robot had to deal with various constraints, such as the distance of the finish line from the door, the width of the hallway, time constraints and accuracy. The PICO robot was first placed in a rectangular room with unknown dimensions and one door/opening towards the corridor. The corridor was perpendicular to the room, with an opening on its far end as well. The initial position and orientation of the PICO robot was completely arbitrary. PICO was supposed to cross the finish line, placed at a distance greater than or equal to 3 metres from the door of the corridor. The walls of the room were also not straight, which posed a challenge when mapping the room from the laser data. The algorithms used, their implementation, the program flow and the results of the challenge are discussed in detail in the following sections.

Hospital challenge

The main goal of this challenge was to visit a number of cabinets, in a particular order given by the user, as fast as possible. The global map, consisting of rooms and cabinets, and the starting room of PICO were given beforehand. The hospital challenge contained multiple rooms with doors which could be open or closed. It tested the ability of PICO to avoid static and dynamic objects and to plan an optimal path dynamically. The static objects included clutter objects and doors, and the dynamic objects were people moving through the rooms while PICO performed its task; neither was specified on the given global map. The order in which the cabinets had to be visited was given by the user just before the start of the challenge. The following sections discuss the challenge in detail: the algorithms used are explained, their implementation is shown, the program flow is discussed, and the results of the challenge are presented.


Design document

The group started by creating a design document, to arrive at a well-thought-out design of the software. This document describes the starting point of the project: it gives an overview of the given constraints and hardware, and lists the requirements and specifications of the software. The last part of the design document illustrates the overall software architecture the group wanted to design. The design document provided a solid basis to build upon during the remaining part of the project. The full document can be found here. Important specifications and requirements are listed in Table 1 and the general software architecture is explained in full detail in the General software architecture and interface chapter.

Table 1: Requirements and specifications

{| class="TablePager"
|-
! scope="col" | '''Requirements'''
! scope="col" | '''Specifications'''
|-
| Accomplish predefined high-level tasks
| 1. Find the exit during the "Escape room challenge"<br>2. Reach a predefined cabinet during the "Hospital challenge"
|-
| Knowledge of the environment
| 1. The robot should be able to identify the location of walls, doors, cabinets, static objects and dynamic objects<br>2. The map is at 2D level<br>3. Overall accuracy of <0.1 meter
|-
| Knowing where the robot is in the environment
| 1. Know the location at 2D level<br>2. XY position with <0.1 meter accuracy<br>3. Orientation (alpha) with <10 degree accuracy
|-
| Being able to move
| 1. Max. 0.5 [m/s] translational speed<br>2. Max. 1.2 [rad/s] rotational speed<br>3. Able to reach the desired position with <0.1 meter accuracy<br>4. Able to reach the desired orientation with <0.1 radian accuracy
|-
| Avoid obstacles
| 1. Never bump into an object<br>2. Being able to drive around an object when it is partially blocking the desired path
|-
| Standing still
| 1. Never stand still for longer than 30 seconds
|-
| Finish as fast as possible
| 1. Within 5 minutes (Escape room challenge)<br>2. Within 10 minutes (Hospital challenge)
|-
| Coding language
| 1. Only allowed to write code in the C++ coding language<br>2. GIT version control must be used
|}

General software architecture and interface

A generic software architecture was implemented, as discussed in the lectures during the course. The software architecture is split into several building blocks: the task manager, world model, perceptor, path planner and drive controller. The data flow between these building blocks is shown in Figure 2. Splitting the software into these blocks is necessary to keep it structured and to divide the functionalities of the robot.


Figure 2: Global overview of the software architecture used

The task manager structures the program flow by setting the mode of each software block (block mode). Based on the statuses received from the software blocks (block status), it determines the next task to perform. It focuses on the behavior of the program and segments the challenge into a step-by-step process/task division, taking fallback scenarios into account. It manages the high-level tasks, which are discussed in more detail later. The perceptor converts the input sensor data into information usable by PICO: it creates a local map, fits it onto the global map and then aligns the two when required. The path planner plans an optimal path through the nodes, such that PICO can reach the required destination. The drive controller drives PICO to the required destination by sending a motor setpoint in terms of translational and rotational velocity. The world model stores all the information and acts as the medium of communication between the other blocks. Detailed explanations of each block can be found in the software blocks section.

Block modes & Block Statuses

Block statuses and modes are primarily used for communication between the task manager and the other blocks. They also make the segmentation of each task/phase easy to understand. In each iteration of the software, the perceptor, path planner and drive controller receive a block mode from the task manager, describing the required functionality during that iteration. After executing the task, the software block returns a block status describing the outcome of the task, i.e. whether it was successful or not. The block diagram shown below illustrates the transmission of block modes and statuses between the task manager and the other software blocks. The exact block modes and statuses are also listed below.

Blockdiag.jpg

The block statuses that were defined are as follows:

  • STATE_BUSY - set by a block that is still performing its task
  • STATE_DONE - set by a block when its task is completed
  • STATE_ERROR - set by a block when its task results in an error

The block modes that were defined are as follows:

  • MODE_EXECUTE - instructs a block to execute its main functionality
  • MODE_INIT - instructs a block to initialize
  • MODE_IDLE - instructs a block to temporarily disable itself
  • PP_MODE_ROTATE - instructs the path planner to direct the drive controller to change the orientation of PICO
  • PP_MODE_PLAN - instructs the path planner to plan an optimal path
  • PP_MODE_NEXTNODE - instructs the path planner to send the next node in the path
  • PP_MODE_FOLLOW - instructs the path planner to follow the path
  • PP_MODE_BREAKLINKS - instructs the path planner to break the link between two nodes when an object blocks the path between them
  • PP_MODE_RESETLINKS - instructs the path planner to reset the links between nodes which were broken due to an object between the two nodes
  • PC_MODE_FIT - instructs the perceptor to fit the local map onto the global map

These statuses and modes are used in the execution of appropriate phases and cases discussed in the overall program flow section.
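The statuses and modes above translate naturally into two C++ enumerations. The identifiers are taken verbatim from the lists; the enum layout itself is an assumption, since the actual definitions are not shown on this page:

```cpp
// Block statuses returned by a software block after executing its task.
enum BlockStatus {
    STATE_BUSY,   // block is still performing its task
    STATE_DONE,   // task completed successfully
    STATE_ERROR   // task resulted in an error
};

// Block modes set by the task manager for the next iteration.
enum BlockMode {
    MODE_EXECUTE,       // execute the block's main functionality
    MODE_INIT,          // initialize the block
    MODE_IDLE,          // temporarily disable the block
    PP_MODE_ROTATE,     // path planner: have the drive controller rotate PICO
    PP_MODE_PLAN,       // path planner: plan an optimal path
    PP_MODE_NEXTNODE,   // path planner: send the next node in the path
    PP_MODE_FOLLOW,     // path planner: follow the planned path
    PP_MODE_BREAKLINKS, // path planner: break a blocked link
    PP_MODE_RESETLINKS, // path planner: reset all broken links
    PC_MODE_FIT         // perceptor: fit the local map onto the global map
};
```

Every iteration, the task manager writes one `BlockMode` per block to the world model and reads back one `BlockStatus` per block.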

Overall program flow

Figure 3: Overall program structure with phases

The overall software behavior is divided into five clearly distinguished phases. During each phase all actions lead to one specific goal, and when that goal is reached a transition is made to the next phase. The following five phases are identified and shown in Figure 3.

Initialization phase

The software always starts in this phase. All inputs and outputs of the robot are initialized and checked during this phase, and all required variables in the software are set to their correct values. At the end of the initialization phase the software is ready to perform the desired tasks and switches to the fitting phase.

Fitting phase

During the fitting phase PICO tries to determine its initial position relative to the given global map. It determines the location with the help of the laser range finder data, by trying to fit the environment around the robot onto the given map. If the obtained laser data is insufficient to get a good, unique fit, the robot starts to rotate. If still no unique fit is obtained after the rotation, the robot drives towards a location inside the room and rotates again at this location. The full details of the fitting algorithm are described in the perceptor section of this wiki. As soon as there is a unique and good fit, the location of PICO is known and the software switches to the relocate phase.

Relocate phase

During the relocate phase the goal is to move the PICO robot to the desired cabinet. To do this, the path planner calculates a path from the current location towards the desired cabinet. The drive controller follows this path and avoids obstacles on its way. When the path turns out to be blocked, a new path is calculated around the blockage. As soon as the PICO robot has arrived at the desired cabinet, the software switches to the cabinet action phase.

Cabinet action phase

During the cabinet action phase the PICO robot executes the required actions at the cabinet. This includes saying 'I arrived at cabinet #' and taking a snapshot of the current laser data to prove that the robot has arrived at the correct location. After performing the required actions the software determines whether the PICO robot should visit another cabinet, and if so, it switches back to the relocate phase. If all cabinets have been visited the software stops and the challenge is successfully completed.

Error phase

The error phase differs from the other phases in the sense that it is never desired to end up in it. The software only switches to the error phase when something is different than expected. This can for example happen when a required file is missing during the initialization phase, or when the fitting phase has gone through all its fallback mechanisms and still has not found a unique fit. In all cases something unforeseen has happened and the software is out of options to recover by itself. If that happens, the PICO robot is switched to a safe state (e.g. all motors are disabled), all information useful for debugging is displayed to the operator, and finally the software is terminated so the possible cause can be found and fixed.
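The transitions between the phases above can be sketched as a small state machine. This is an illustrative fragment under assumed names (`nextPhase`, the `Phase` and `Status` enums), not the group's actual task manager code:

```cpp
// Illustrative sketch of the phase switching described above. The phase
// names follow the wiki; the function and enums are assumptions.
enum Phase  { INIT, FITTING, RELOCATE, CABINET_ACTION, ERROR_PHASE };
enum Status { STATE_BUSY, STATE_DONE, STATE_ERROR };

// Next phase, given the current phase, the status reported by the active
// block and whether cabinets remain to be visited.
Phase nextPhase(Phase current, Status block_status, bool cabinets_left) {
    if (block_status == STATE_ERROR) return ERROR_PHASE;  // unforeseen problem
    if (block_status == STATE_BUSY)  return current;      // goal not reached yet
    switch (current) {                                    // STATE_DONE: advance
        case INIT:           return FITTING;
        case FITTING:        return RELOCATE;
        case RELOCATE:       return CABINET_ACTION;
        case CABINET_ACTION: // no cabinets left: stay here, software terminates
                             return cabinets_left ? RELOCATE : CABINET_ACTION;
        default:             return ERROR_PHASE;
    }
}
```

A switch like this keeps all phase logic in one place, which matches the flowchart-driven design described below.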


Detailed descriptions of the different phases

For all phases, detailed flowcharts and descriptions were made. These flowcharts show all fallback mechanisms and explain the exact behavior in full detail. While writing the software, the flowcharts were used to structure the code and to find bugs. They also helped the group reach a clear agreement on what the software architecture should look like, and to discuss how unforeseen situations should be handled and how a found solution fits into the existing software.


The flowcharts with all the required details are grouped in this document.

Software blocks

World model

The world model is the block that stores all the data that needs to be transferred between the different blocks. It acts as the medium of communication between the perceptor, task manager, path planner and drive controller, and contains the get and set functions those blocks use to perform their respective tasks.

The detailed overview of the input and output data from the software blocks to the world model is shown in Figure 4. A description of what each data line represents is stated in Table 3.

Figure 4: Detailed overview of software structure with data flow between the blocks


Table 3: Description of data that is transmitted

{| class="TablePager"
|-
! scope="col" | '''Data'''
! scope="col" | '''Description'''
|-
| LRF sensor input || Laser range finder data, used to create the local map objects such as walls and to detect corners
|-
| ODO sensor input || Odometry data, giving the current position and orientation of PICO
|-
| Local map || The map created from the LRF data
|-
| Global map || The given map, with the positions of cabinets and rooms specified
|-
| Close proximity region || The region defined around PICO in order to avoid obstacles
|-
| Current position || The current position of PICO, which is continuously updated
|-
| Next node || The next node to visit in the optimal planned path
|-
| Optimal path || The shortest path created by Dijkstra's algorithm
|-
| Desired destination / next cabinet node || The number of the cabinet that currently needs to be visited by PICO
|-
| Block modes || Set for every block to make PICO perform a certain action
|-
| Block statuses || The status of each block (done/busy/error), based on which the block modes are set and the required task is performed
|}


Task manager

Figure 5: Task manager state machine for the escape room challenge

The task manager functions as a finite state machine which switches between different tasks/states. It focuses on the behavior of the whole program rather than the execution. Based on the current operation mode, the current block statuses and counters, it determines the next operation mode, in which the corresponding block modes of the perceptor, path planner and drive controller are set for the next task execution. It communicates with the other blocks via the world model. Since the "Escape room challenge" and the "Hospital challenge" require a completely different approach in terms of cooperation between the blocks, the task manager was rewritten with a different program structure, but within the same framework, for the "Hospital challenge".

ESCAPE ROOM CHALLENGE
In this challenge, the task manager initially sets the modes of the different blocks such that PICO finds the door and drives to it. Based on the received block statuses it either stays in the same state or continues towards the exit. While PICO is driving to a possible door or searching for a door, the task manager stays in the 'drive to the door' state. Once PICO has driven to the found door, the task manager commands it to drive to the finish. While PICO is driving to a possible exit or searching for the exit, the task manager keeps commanding PICO to drive to the finish. Lastly, when PICO crosses the finish line, the task manager sets the mode of all blocks to idle. The behavior of the task manager for the escape room challenge is described in the state machine diagram in Figure 5.

HOSPITAL CHALLENGE
In this challenge, the task manager keeps the same framework but uses a different program structure. It functions as a state machine and handles the behavior of the program in a well-structured manner, taking into account various fallback scenarios that can be edited whenever needed. The list of cabinets to visit is initially read by the task manager and sent to the world model. The task manager primarily sets the appropriate block modes, sets counter variables and switches between the phases and cases of the program based on the block statuses it receives. The various phases and cases are discussed further in the overall program flow section. The behavior of the task manager for the hospital challenge is described in the flowchart in Figure 6 and in the overall program flow section.

The function description can be found here: Task manager description

Figure 6: Task manager general state machine for the Hospital challenge

Visualization

To make debugging the program faster and more effective, a visualization class was created. This class draws an empty canvas of a given size, and with one line of code the defined objects can be added to this canvas. Each loop iteration the canvas is cleared and filled again with updated or new objects. The visualization class is excluded from the software architecture, because it has no real functionality except for debugging. Any software block can make use of the visualization to quickly start debugging; however, the visualization is mainly used in the main loop to keep the program structured.

The visualization makes use of the OpenCV library and a few of its basic functionalities, such as imshow, newcanvas, Point2d, circle, line and putText. The struct "object", which is further explained in Perceptor data structures and defined in the code snippet "class definition", can be added to the visualization. For example, the visualization shows whether points are connected (blue) or not (white). Moreover, dark blue points indicate that a corner is concave, and light blue points indicate convex corners, with respect to the robot. The maps as defined in the world model can also be visualized. The map struct contains a vector of objects, such as walls, cabinets, nodes, doors, paths etc. If needed, the object struct can easily be extended with new objects, which can then be visualized in the same manner.

In Figure 7 example objects are visualized. The global map is created by the perceptor from the JSON file; the walls of the global map are visualized with purple lines. The walls of the room PICO starts in are drawn as black lines. The laser range finder data can be visualized as red dots. The close proximity region is visualized using red and green dots: a green dot indicates a direction that contains no objects, and a red dot indicates an object within the defined close proximity region. Furthermore, arrows are used to visualize how close an object is to the robot. This region is used by the drive controller for the potential field algorithm. Using the LRF data the current scan is created: the data points from the LRF are converted into line points and visualized as red lines. The local map is stored in the world model and contains multiple current-scan lines merged over time. The local map is visualized as green lines with corner points colored based on their properties, such as convex, concave or not connected. Once a path is planned by the path planner block, it can be added to the global map; the planned path can therefore be visualized by visualizing the global map. The node set as destination for PICO is visualized dark blue. The final destination of a path, in this case always a cabinet node, is visualized red. If needed, the nodes and the links from one node to another can be visualized as well (normally white). Often only the nodes of the planned path are visualized, to prevent a chaotic canvas. If certain objects are visualized using the same color, this can easily be changed.


Figure 7: The simulation objects

Perceptor


The perceptor receives all incoming data from the PICO robot and converts it into useful data for the world model. The incoming data consists of odometry data obtained from the wheel encoders of the PICO robot and laser data obtained from the laser scanner. Also the JSON file containing the global map and the locations of the cabinets, which was provided a week before the hospital challenge, is read and processed. The output of the perceptor to the world model consists of the global map, a local map, a current map, the current/zero position and a close proximity region. The incoming data is handled within the perceptor by the functions described below.

The inputs, functions and outputs of the perceptor are shown in the following Figure:

Perceptor.png

The data structures used in the perceptor can be viewed in the following document: Perceptor data structures.

Figure 8: Visualization of the local map fitted onto the global map (fitting)
Figure 9: Visualization of the line fit algorithm

Each function from the perceptor is described in more detail in the upcoming section.

Init
The PICO robot keeps track of the absolute distance driven since its startup. Therefore, when the software starts, it needs to reinitialize the current position of the robot. When the robot receives its first odometry dataset, it saves this position and sets it as the zero position in the world model. Later this zero position can be subtracted from new odometry data to obtain the position of PICO with respect to the position where it initialized. The initialization of the odometry data happens only once, when the software is started.

Create global map
A JSON file containing the information of the global map is read by the perceptor once, when the program is started. The information is directly transformed into the data structure used by the program. The global map contains all the walls of the hospital challenge, as well as the nodes and the links between them. Furthermore, the starting area of the robot is defined on the given map. The definition of this area is needed to make the fit of the global map onto the local map more accurate and faster.

Close proximity
Dynamic objects are not part of the local map. To prevent collisions between the robot and any object (dynamic or static), a close proximity region is determined. This region is described as a circle with a configured radius around the robot. The function returns a vector describing the measured distances of a filtered set of laser range finder data points to the world model, from where the drive controller can request it when needed.
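As a minimal sketch (assuming a simple blocked/free flag per beam, which matches the red/green dots in the visualization; the function name and radius parameter are illustrative):

```cpp
#include <vector>

// For each (already filtered) laser beam, flag whether an object lies
// within the configured close-proximity radius around the robot.
std::vector<bool> closeProximity(const std::vector<double>& ranges,
                                 double proximity_radius) {
    std::vector<bool> blocked(ranges.size(), false);
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        // a beam shorter than the radius means an object inside the region
        blocked[i] = (ranges[i] < proximity_radius);
    }
    return blocked;
}
```

The actual implementation stores the measured distances themselves, so the drive controller can also weigh how close each object is.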

Locate
This function makes use of the zero frame determined by the init function. Once the odometry data is read, a transformation is used to determine the position of the robot with respect to the initialized position. Other functions need the position of the robot to navigate through the maps; therefore, this current position is written to the world model.
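The subtract-and-rotate idea can be sketched as follows. The `Pose` struct and the `locate` name are assumptions; only the use of the stored zero frame comes from the text:

```cpp
#include <cmath>

struct Pose { double x, y, a; };  // position [m] and orientation [rad]

// Express a raw odometry pose in the zero frame stored by the init function.
Pose locate(const Pose& odo, const Pose& zero) {
    double dx = odo.x - zero.x;
    double dy = odo.y - zero.y;
    Pose out;
    // rotate the translation into the zero frame
    out.x =  std::cos(zero.a) * dx + std::sin(zero.a) * dy;
    out.y = -std::sin(zero.a) * dx + std::cos(zero.a) * dy;
    out.a = odo.a - zero.a;
    return out;
}
```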

Fit map
For the PICO robot to know where it is located on the given global map, a fit function was created. This fit function is used when the perceptor receives the 'PC_MODE_FIT' command from the task manager, which only happens at the beginning of the program, as described in the task manager section. The function works by picking one object (a line defined by two points) from the global map and one from the local map. Next it rotates the global map such that the two lines are parallel (this happens twice for each pair of objects, once for each orientation) and then translates the global map such that the middle points of these lines coincide. When this is done, the number of lines and points 'on top of' each other is determined. Two lines are on top of each other when their angle difference is smaller than a defined bound and the distance between their middle points is smaller than a defined bound. To make the function more robust, point properties such as convex and concave are also taken into account and should match. A fit score is calculated based on the combined angle and distance errors of the matched walls. If static objects are placed inside the initial room, only parts of the walls can be observed in the local map. Therefore, the shorter walls of the local map are extended to the length of their counterparts in the global map, so they can be tested for a match against the global map walls. Furthermore, the fit score is based only on the walls that are matched; in combination with the extension of the shorter local-map walls, this ensures robustness against static objects in the initial room. When the number of matching walls is higher than or equal to that of another iteration, the transformation, the number of matching walls and the fit score are saved. The global map is then transformed back, and another pair of objects is tested. Once all combinations of objects from the local map and the global map have been tested, the best transformation is used to transform the global map.
This transformation is only executed if the global map matches the local map well enough. Figure 8 shows the fitting of the map. Initially the robot cannot match enough walls with a good fit score, so it starts rotating. Once the robot has added enough walls to the local map to match the global map, the transformation is saved. Lastly, this transformation is executed and the global map is fitted onto the local map, even with the two static objects in the initial room.

Align
Once the global map is fitted onto the local map, the align function is used to keep the two maps on top of each other. It works by checking the average angle difference and average offset between the current scan map and the global map. First it is checked which of the lines from the current scan map are 'on top of' the lines from the global map. Here 'on top of' is slightly different from the definition in the fit function: two lines should be roughly parallel, their parallel distance should be small and there should be no gap between them. The exact definitions of these terms are illustrated in Figure 10. When two lines are on top of each other, their angle error and parallel distance are added to the total angle and distance errors. After all objects have been checked against each other, the total error is divided by the number of matching walls and used to translate the global map to match the current scan.

Scan
To create usable information from the laser range finder data, a scan function was created. The raw data is in polar coordinates, so it is first transformed to Cartesian coordinates. The transformed data points are then used in a fit algorithm whose aim is to create wall objects from them. The first step of the fit algorithm is creating a new wall object with the first and second data points as corner points. Secondly, a fit error is determined by measuring the projection error of all data points between the corner points of the wall object (which at first are none). The fit error is split into a maximum fit error and an average fit error; in this explanation only the maximum fit error is used. Thirdly, this fit error is compared with a threshold value. If the maximum fit error does not exceed the threshold, the second corner of the wall object is updated to the next data point. When the maximum fit error exceeds the threshold, the wall object is added to the current map if it is fitted through enough data points; the minimum number of data points can be configured. Before the wall object is added, its last corner point is set back to the previously evaluated data point. If there are unchecked data points left, a new wall object is created. Depending on the distance between the previous and the currently evaluated data point, the new wall object starts from the previous or from the currently evaluated data point. Figure 9 explains the algorithm graphically and a code snippet of this algorithm can be found in code snippets.
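The incremental loop above can be sketched as follows. This is a simplified version using only the maximum projection error; the thresholds, names and the handling of the segment break are illustrative assumptions, not the group's actual snippet:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b.
static double projError(const Pt& a, const Pt& b, const Pt& p) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::hypot(dx, dy);
    return std::fabs(dx * (a.y - p.y) - dy * (a.x - p.x)) / len;
}

// Grow a segment until the maximum projection error of the points in between
// exceeds max_err, then commit it if it spans at least min_pts points.
std::vector<std::pair<Pt, Pt>> fitWalls(const std::vector<Pt>& pts,
                                        double max_err, std::size_t min_pts) {
    std::vector<std::pair<Pt, Pt>> walls;
    std::size_t start = 0;
    for (std::size_t end = start + 1; end < pts.size(); ++end) {
        double worst = 0.0;
        for (std::size_t k = start + 1; k < end; ++k)
            worst = std::max(worst, projError(pts[start], pts[end], pts[k]));
        if (worst > max_err) {              // segment broke: commit previous wall
            if (end - start >= min_pts)
                walls.push_back({pts[start], pts[end - 1]});
            start = end - 1;                // new wall starts at previous point
        }
    }
    if (!pts.empty() && pts.size() - start >= min_pts)  // trailing segment
        walls.push_back({pts[start], pts.back()});
    return walls;
}
```

Feeding an L-shaped point set through this sketch yields two walls, one per leg of the L.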


Merge
In this function the new laser data is merged with an existing map to form a more robust and complete map of the environment. The laser data, in the form of the current map created by the scan function, is imported, together with the previous output of the merge function, which is called the local map. Firstly, the previously created map is transformed to the current position of the robot. Secondly, similar walls are merged. Walls are considered similar if they are parallel to each other, have a small difference in angle, or are split into two pieces. If two walls are close to each other but one is shorter, they are merged as well. The merge settings are stored in the configuration file, and the different merge cases are shown in Figure 10. Once similar walls are merged, the endpoints of the walls are connected to form the corner points of the room. Each wall point has a given radius, and if another point lies within that radius, the two points are connected. To improve the robustness of the local map, the location of a connected corner point is based on the relative weights of the points, where the weight is defined by the number of data points the wall is fitted through. Furthermore, wall objects that are not connected at the end of the function are removed, so the local map only consists of walls that are connected to each other.

Identify
The purpose of this function is to identify the properties of the points in the local map. For instance, corner points can be convex or concave. This property is later used to help identify doors and to improve the fit function. The position of the robot determines whether a corner point is convex or concave. With this property information the map is scanned for doors. For the escape room challenge a door is identified as two convex points close to each other. A door is defined between two walls, and these walls should be approximately in line. Also, the two corner points cannot belong to the same wall, which further increases the robustness of the map. It is unlikely that the local map immediately contains two convex points that can form a door. Therefore, a possible door is defined, so the robot can drive to its location and check whether there is a real door at that position. There are multiple scenarios in which a possible door can be formed, such as one convex point and one loose end, two loose ends, or a loose end facing a wall. Concave points can never form a door and are therefore excluded. When forming a possible door, the length of the door and the orientation of the walls are important as well.


Figure 10: Line matching

Path planner

Figure 11: Visualization of an obstacle creating broken links

For planning paths, the choice was made to use nodes placed at the following locations:
- In front of a door (one at each side)
- In front of a cabinet
- In the starting area
- Distributed over each room (e.g. in the middle), in order to plan around objects

Gridding the map was also considered but not chosen because of its higher complexity; with separate nodes, debugging is easier as well. The path planner determines the path for the PICO robot based on the list of available nodes and the links between these nodes. For planning the optimal path, Dijkstra's algorithm is used with the distance between the nodes as cost. The choice for Dijkstra's algorithm is based on the need for an algorithm that can plan the shortest route from one node to another; it was deemed sufficient for this application, and the extra complexity of the A* algorithm was not needed. The planned path is a set of positions that the PICO robot is going to drive towards, called the 'next set of positions'. This set is saved in the world model and used by the task manager to send destination points to the drive controller. Figure 11 shows how the path planner behaves in case of a closed door: the link is broken and a new path around the closed door is planned.

The path planner takes the desired destination, current position, list of all available nodes and the path planner block mode as inputs. The functions of the path planner are idling, planning a path, setting the next node of the planned path as current destination position, breaking a link between nodes, resetting all broken links and rotating the PICO robot. This results in the planned path (next set of positions), the destination position and the path planner block status. The inputs, functions and outputs of the path planner are visualized in the following Figure: PathPlanner3.png

All functions of the path planner are explained below.

Idle
When idling, all variables are set to their initial value and the path planner waits for the task manager to set the block mode.

Plan paths
In this function a path to the desired destination is planned based on the list of nodes and the links between these nodes. Links between nodes can be open or broken, where broken means unavailable. The nodes and the links between them were determined by hand before the challenge and placed on the global map. To plan a path, Dijkstra's algorithm is used with the distance between the nodes as cost. This results in the shortest possible path from the robot to the location of the desired cabinet. The optimal path consists of a list of nodes, which are positions the PICO robot has to drive towards in order to reach the desired cabinet. This optimal path is stored in the world model as the 'next set of positions', which is used during the following phase of the program.
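A minimal sketch of this planning step, with nodes as indices and broken links simply left out of the adjacency list (the function name and signature are assumptions):

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra over the node/link graph: links[n] holds (neighbour, distance)
// pairs; the distance between nodes is used as the cost, as in the text.
std::vector<int> planPath(const std::vector<std::vector<std::pair<int, double>>>& links,
                          int start, int goal) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(links.size(), inf);
    std::vector<int> prev(links.size(), -1);
    using QItem = std::pair<double, int>;   // (distance so far, node)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> q;
    dist[start] = 0.0;
    q.push({0.0, start});
    while (!q.empty()) {
        auto [d, n] = q.top();
        q.pop();
        if (d > dist[n]) continue;          // stale queue entry
        for (auto [m, cost] : links[n]) {
            if (d + cost < dist[m]) {       // found a shorter route to m
                dist[m] = d + cost;
                prev[m] = n;
                q.push({dist[m], m});
            }
        }
    }
    std::vector<int> path;
    if (dist[goal] == inf) return path;     // no route: caller resets broken links
    for (int n = goal; n != -1; n = prev[n])
        path.insert(path.begin(), n);       // walk back from goal to start
    return path;
}
```

Returning an empty path when the goal is unreachable maps onto the reset-broken-links fallback described below.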

Set next node
Once a path is planned, PICO has to follow it by traversing the list of nodes of that path. The next node is set when the task manager changes the mode of the path planner to 'set next node', which happens when the previously set node is reached.

Break link
If PICO is not able to reach a node, the link between the current destination node and the last node must be broken. This is done when the task manager sets the block mode to break link.

Reset broken links
When there is no possible path because all links are broken, the broken links are set to open again and a path is planned. This is implemented in case a dynamic obstacle was blocking the path. This function is called when the task manager changes the block mode to reset broken links or when the plan path function cannot create a path towards the given destination.

Rotate
When PICO has to scan the room, it must be able to rotate. The rotate function changes the desired destination of PICO by adjusting the desired angle with a value specified in this function.

Drive controller

The drive controller software block ensures that the PICO robot drives to the desired location. It receives the current and desired location and automatically determines the shortest path towards the desired location. While driving it also avoids small obstacles on its path. If an object is too large, or the path is blocked, for example by a door, the drive controller signals this to the path planner, which calculates an alternative path.

To avoid objects and ensure the PICO robot does not hit a wall or object, two algorithms were considered. The first algorithm takes a fixed radius around the robot and checks for each laser data point on that radius whether the corresponding direction is blocked or not. From all free directions, the algorithm then selects the one closest to the direction of the actual goal. The second algorithm that was considered is the potential field algorithm. This algorithm also takes the distance towards nearby objects into account, instead of only whether a direction is blocked or not. The second algorithm is more complex but results in smoother driving of the PICO robot. Eventually the potential field algorithm was implemented because of this smoother driving behavior.

The potential field algorithm adds two types of forces and uses the result to find a free direction. The first force is an attractive force towards the desired location. Secondly, all objects (walls, static and dynamic objects) that are close to the robot exert a repellent force away from them: the closer the object, the larger the repellent force. All forces are added together and the direction of the resulting vector is used as the desired direction at that time instance. In the animation shown in Figure 12 the repellent forces are visualized and it can be seen that PICO uses them to drive around an object in its path. The green points show the free directions and the red points the directions towards an object, together with an arrow visualizing the repellent force.

More details of the drive controller, including function descriptions and a detailed flowchart, can be found in the Drive Controller functionality description document here. The actual potential field algorithm can be found in the code snippet section of this wiki.

Figure 12: Visualization of PICO driving around an object using the potential field algorithm.

Challenges

Escape Room Challenge

Figure 13: Simulation of the escape room

Our strategy for the escape room challenge was to reuse the software structure for the hospital challenge as much as possible. Therefore, the room is scanned from PICO's initial position. From this location a local map of the room is created by the perceptor, including convex and concave corner points, doors and possible doors (used when it is not fully certain that a door is real). Based on this local map the task manager gives commands to the drive controller and path planner to position PICO in front of the door. Once PICO is in front of the possible door and the door is verified as a real door, the path planner sends the next position to the world model. In this case that is the end of the finish line, which is detected by two loose ends of the walls. The robot is also able to detect objects in front of it and avoid them.


Simulation and testing

Multiple possible maps were created and tested. In most cases the robot was able to escape the room. However, in some cases, such as the room used in the escape room challenge, the robot could not escape. These cases were analyzed, but there was not enough time to implement solutions for them. Furthermore, the software was only partly tested in the real environment at the time of the escape room challenge. However, each separate function worked, such as driving to destinations, making a local map with walls, doors and corner points, driving through a corridor and avoiding obstacles, and this created a solid basis for the hospital challenge.

What went well during the escape room challenge:

The software basis was made robust: it could detect the walls even though a few walls were placed under a small angle and not straight next to each other. Furthermore, graphical feedback in the form of a local map was implemented on the "face" of the PICO robot. The PICO robot even drove to a possible door, later realizing that it was not a door. Most importantly, the software basis was set for the final challenge and most of the software written and tested for the escape room challenge could be reused for the final challenge.

Improvements for the escape room challenge:

Doors can only be detected if they consist of convex corners or two loose ends facing each other. In the challenge PICO was therefore not able to detect a possible door: the loose ends were not facing each other, as can be seen in Figure 13. Furthermore, there was no backup strategy when no doors were found, other than scanning the map again. PICO should have re-positioned itself somewhere else in the room, or it could have followed a wall. However, we do not intend to use a wall follower in the hospital challenge, so that option does not correspond with our chosen strategy. Another point that can be improved is the creation of the walls. For now, walls can only be detected with a minimal number of laser points. Therefore, in the challenge PICO was not able to detect the small wall next to the corridor straight away. This requirement was chosen to create a robust map, but it also excluded some essential parts of the map.


In the simulation environment the map was recreated, including the roughly placed walls. As expected, in this simulation of the escape room PICO did not succeed in finding the exit, for the reasons explained above.

Hospital Challenge

Figure 14: Global map during hospital challenge
Figure 15: The given final map


During the hospital challenge the PICO robot must visit multiple cabinets in a hospital to pick up and place medicine. One week before this challenge a global map of the hospital was given, which contained the coordinates of the cabinets, walls and doors. This map is shown in Figure 15. However, during the actual challenge one door was closed and extra static objects were added to the environment. Lastly, one person was walking in the hospital environment, and the PICO robot had to deal with these differences compared to the map. At the beginning of the challenge the specific cabinets and the order in which to visit them were announced: C0, C1 and finally C3. During the challenge the door between C0 and C1 was closed, an extra object was added in the room of C1 and one in the hallway between the rooms. Out of the nine groups competing in the challenge, only two groups completed the challenge and we are one of them. We visited all the required cabinets in just under 5 minutes. With this finish time we ended up in second place! In the video below the first part of the challenge is shown, and the full video Hospital challenge - video group 2 - 2019 can be found in high quality on YouTube. This video also explains what happens and shows how the PICO robot with our software handles those situations.


Hospital challenge video


What went well

Since we successfully accomplished the challenge, the overall program and approach of the group worked as desired. The software was able to deal with uncertainties such as the walking person and the unknown objects in the hospital. In particular we are very proud of how the PICO robot dealt with the person walking in the hospital. The PICO robot avoided the person when needed, and the software was robust in the sense that the localization was still able to correctly locate the PICO robot while the person was walking. We are also proud of how the PICO robot dealt with the closed door and the object in the hallway. It tried to drive through the closed door but quickly realized that it was closed. After that it immediately planned a new path around the closed door. The unknown object was positioned on that new path, and the PICO robot drove around it as soon as it realized that the path was blocked. Figure 14 shows a snapshot of the map in purple with the planned path in green. The white lines are the possible paths between the nodes and the red lines are the broken links at the location of the closed door and the unknown object. This snapshot was taken during the hospital challenge while the PICO robot was standing in front of cabinet 1. The last thing that we are particularly proud of is the localization on the map during the challenge. It was robust against the several unknown objects, it was able to fit correctly at the beginning of the challenge and it was able to deal with the person walking in the hospital.

Improvements for the hospital challenge

During the challenge we needed one restart. In the first trial the fitting of the map at the beginning of the challenge was not fully correct and the software was not able to correct it while driving. In the second try the fitting was correct and the software made small corrections while driving to ensure the localization remained correct during the whole challenge. During the second try PICO slightly hit the obstacle in the hallway, which was unexpected since a protection mechanism to avoid running into objects is implemented. Although we extensively tested this protection mechanism, it still happened. We are not fully certain about the exact cause, but we suspect a combination of the person walking around and the size of the object. If the software is used in the future, this exact cause should be investigated in new testing sessions. The last improvement is reducing the total time needed to finish the challenge. The driving speed was lowered during the challenge to improve the accuracy of the localization, because the detected walls and doors were slightly off. If the robustness of the localization is improved, the driving speed can be increased and the finish time will be reduced.

Code snippets

Looking back at the project

What went well

Looking back at the project, several things went well. Firstly, a good structure was set up before we started coding. Once the whole structure was clear to everyone, parts could easily be divided amongst the team members. Secondly, the data flow (input and output) was defined before coding of a specific part started. This ensured easy coupling of the different parts. Lastly, larger algorithms and challenges were discussed amongst the team members to ensure the algorithms were thought through and worked as specified. The cooperation between the group members was good, and this way of structuring the process led to successfully completing the hospital challenge.

What could be improved

In the beginning of the course we mainly focused on setting up a good general structure, which is a good thing. However, it made it harder to finish the escape room challenge, because less time was devoted to making, and especially testing, the vital functions needed to finish that challenge. Thus, in the beginning the focus could have been shifted towards finishing this challenge first. Next to that, not all functions that were made turned out to be needed in the end. Although the structure of the whole program was clear, not every detail of how to solve a specific challenge was discussed upfront. This resulted in some redundant functions. Finally, all parts of the code were written by ourselves; only the EMC library and a few other standard libraries (opencv, stdio, cmath, fstream, list, vector, string, cassert) were used. This gave us a lot of control over the functionality, but it also took more time. It would have been better to first search for libraries that met our requirements and only write the code ourselves if no matching library could be found.

Overall conclusion

Although there are some things that could be improved, the overall conclusion is that the project was successful. We completed the hospital challenge and the overall software is clearly written and easy to improve and adjust. The largest improvements can be made by increasing the robustness of the overall software. In the beginning of the project the focus could have been more on the escape room challenge. Nevertheless, the created software has a good structure and, when implemented on the PICO robot, it is able to finish the hospital challenge.