PRE2017 3 Groep12

From Control Systems Technology Group


== Introduction ==
Globally there are about 285 million visually impaired people, of whom 39 million are totally blind<ref>Cho, K. B., & Lee, B. H. (2012). Intelligent lead: A novel HRI sensor for guide robots. Sensors (Switzerland), 12(6), 8301–8318. https://doi.org/10.3390/s120608301</ref>. There is a shortage of guide dogs to support all visually impaired persons. For instance, in Korea alone there are about 65 guide dogs in total and about 45,000 visually impaired people. Next to the shortage of guide dogs, there are also some limitations to their use. For instance, some people are allergic to certain dog breeds or have a social aversion to dogs, which makes them unsuited to being guided by a guide dog. Furthermore, guide dogs impose some extra tasks on the user: they need to be fed, walked, etc. Lastly, the training of guide dogs is very difficult and only 70% of the trained dogs will eventually be qualified to guide the visually impaired<ref>Bray, E. E., Sammel, M. D., Seyfarth, R. M., Serpell, J. A., & Cheney, D. L. (2017). Temperament and problem solving in a population of adolescent guide dogs. Animal Cognition, 20(5), 923–939. https://doi.org/10.1007/s10071-017-1112-8</ref>. Due to the shortage of guide dogs and support tools for the visually impaired, there is a need for innovative ways to support them. Since a lot of different robots are already available, we propose to convert an existing robot, in this case a soccer robot, into a guiding robot.
 


== Project Goal ==
Here is a brief explanation of what we intend to do during our project.
Tech United has soccer robots which play in actual leagues. These robots use a camera to view the surrounding environment and build a model of it in their system. Our goal is to convert one of these soccer robots into a robotic guide, which assists in walking around. The robot will be directed in a certain direction; it will drive in that direction and provide resistance if the user should not continue walking that way, for instance when there is an obstacle.


The robot will do this in a restricted environment, as it is only physically able to drive over level ground.
The robot has a pre-programmed map that resembles the static environment and can detect other soccer robots as dynamic objects.
Furthermore, the robot is equipped with a handle such that the person can easily "push" the robot, and one can select whether one uses the left or right hand to operate the robot. This is taken into account because the position of the person determines when resistance needs to be exerted.
At the end of the project we will deliver the open-source code that we implemented in a soccer robot. This code converts the soccer robot into a guiding robot. Furthermore, we built a handle to enable the robot to guide a person. Lastly, we will give demos in which we show the implemented code running on the robot.
== Objectives ==
=== Autonomous ===
We want to accomplish that the robot can guide a user through an environment, possibly filled with moving or stationary obstacles. This process should be fully autonomous; hence the robot has to make all decisions on its own, given the user's inputs.
This goal is important since visually impaired people will not be able to guide the robot through its navigation process; if that were necessary, the complete purpose of the robot would be defeated. Therefore, the only actor involved in this process is the robot itself.
The robot must be able to guide itself and the user over paved, flat terrain. This is also the kind of terrain an average person walks over the most.
The scanning setup will consist of one camera; the robot will not have advanced radar equipment. Also, the robot will not be able to travel on paths with height differences, like stairways.
The robot must be able to guide exactly one person autonomously through an environment matching our environmental constraints. This task is accomplished when a person can be guided over an area safely, without hitting obstacles on the path or entering restricted areas.
This goal can be accomplished by implementing obstacle recognition software for the attached camera. Together with an avoidance algorithm, the robot will be able to navigate around obstacles on its path.
By using constraints such as no advanced radar equipment and a restricted area type, this goal is realizable.
This goal should be realized within 6 weeks. If the robot is not guiding autonomously by that point, there will be no time left to make changes to the software and/or hardware of the robot.
=== Easy to use ===
The guiding robot must be easy to use for the user. This goal is important since the user's capability of seeing the robot is limited and any confusion regarding the functioning of the robot must be prevented at all times, since this could lead to dangerous situations for the user.
This goal involves both the robot and its user.
The interface of the robot consists of an object the user can hold on to. At all times, the feedback of the robot must be clear to the user. Since the user is visually impaired, the feedback cannot consist of any visual elements. Instead, it can consist of physical resistance when the user is about to perform an action that could lead to a dangerous situation. So if the user pushes the robot in a dangerous direction, the robot will resist; otherwise, the robot will assist.
By keeping the user interface simple, it will be realistic to implement.
This user interface must be defined in week 3 and implemented by week 7 at the latest. Once the type of user interface is defined, we can already search for methods to implement it in our robot.
=== Safety guaranteed ===
At all times, we want the user interacting with the robot to be safe. The purpose of the robot is to prevent accidents involving its user; if the robot were not programmed such that most risky situations are prevented, it would have no purpose.
This goal involves both the user and the robot itself. In each situation the user must be in a position that the robot can take into account when making safety decisions. For example, the user can stand on the left or right side of the robot, holding a handle, so that the user will not be anywhere other than on that side of the robot. The position of the user must be known to the robot.
This goal can be measured by simulating several dangerous scenarios and checking whether the robot prevents the user from coming to harm. When the robot passes all scenarios, this goal is reached.


== Target User Group ==
The robot that we want to develop is a robot that will replace the guide dog. This robot will not only affect the visually impaired, but also other people in the environment. There are a few different levels of users, namely primary, secondary and tertiary users:
* The primary users are the visually impaired
* The secondary users are other people in the environment
* The tertiary users are the developers of the robot




The users have different requirements of the robot, which are listed below.


''Visually impaired (primary users)''


''Developers (tertiary users)''
* The software is easy to adapt
* The guiding robot has as little maintenance as possible
* The guiding robot is easy to adapt to other users' needs


== Plan ==
We intend to convert a soccer robot, namely the Tech United robot<ref>Tech United, The Turtle. http://www.techunited.nl/en/turtle</ref>, into a prototype that can guide a visually impaired person in a restricted environment. Our plan to accomplish this is:


* Research into:
* Ask for the specifications of the Tech United robot
* Determine what capabilities we can use that already exist in the robot
* Determine what functionality needs to be added
* Add the necessary basic features
* Test the robot with the additional basic features
''Environment''
* Soccer field at TU/e
* Hard-coded environment boundaries (simulating a sidewalk)


''Robot'' <ref> Alaerds, R. (2010). Generation 2011. Retrieved March 7, from http://www.techunited.nl/wiki/index.php?title=Generation_2011 </ref>
* Floor area of 500 mm x 500 mm
* Height 783 mm
* Mass 36.3 kg


''Handle''
* Attachment height of 783 mm, at the top of the robot
* Handle approximately 40 cm long
* Handle width is 15 cm
* Three screws are used to attach the handle to the robot
* The handle is adjustable to be easy to use for different people


The haptic feedback will be delivered through the resistance that the robot gives back to the person via the handle. The robot will start to give resistance when it would reach its borders within 3 seconds. Furthermore, when it would reach the border within 0.5 seconds, it will stop and thus provide full resistance to the user, giving them a notion that they have reached the border of the specified area. For safety purposes, the robot will also give some resistance when the user tries to go backwards.


== Progress ==


=== Tech United ===
We came into contact with W.J.P. Kuijpers, who is the team leader of Tech United. A meeting was scheduled in which we discussed our plan and what we would like to do with the robot. He was quite enthusiastic about our intentions, and together we came up with the first step towards accomplishing our goal. This was to program a function, in the C language, that given its inputs (the robot's position in x and y coordinates, the robot's orientation as an angle, and the force exerted on the robot by the visually impaired person as a vector) returns the resistance the robot should exert.
Additional information the robot has are the boundaries, represented as lines, which we hard-coded. These boundaries represent the sidewalk that the robot should keep its user in between. We hard-code these boundaries since recognizing the boundaries of where the user is able to walk is extremely difficult in real life. Eventually, when an algorithm is developed for this, it can be incorporated in the robot to make it more usable and advanced.


Once the robot is able to do this, we will extend its functionality such that it sees other Tech United robots as dynamic objects in the world, and treats static objects in the world as hard-coded obstacles, which it does not detect with sensors but which can be represented by any physical object. This means that we can use the robots to simulate other pedestrians, cyclists, cars, etc., and hard-coded objects such as trees, walls, lampposts, etc., which we visualize using objects placed at the correct places in the environment.


=== Functionalities to add ===
The functionalities we would like to add are mostly explained in the Tech United section. However, to give a simple overview:
* Let the robot guide between hard-coded boundaries
* React to dynamic objects in an environment (represented by other Tech United robots)
* React to static objects in an environment (hard-coded, but visualized using objects)


=== The code ===
The algorithm we created to determine robot resistance works as follows:


We define the lines that constitute the borders that the robot has to guide the user in between. We store the coordinates of the endpoints of these lines as doubles in <code>borderCoordinates</code>.
The points we define represent borderlines given two (x,y) coordinates. By defining multiple borderlines, we can create our own environment. For each <code>Borderline</code> we also define the side of the line a user should walk on, when viewed from the perspective of the given <code>bottom</code> and <code>top</code> coordinates.
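For illustration, the border representation described above could look as follows in C. The names <code>borderCoordinates</code>, <code>Borderline</code> and <code>goodSide</code> come from the text; the other field names and the example coordinates are our own assumptions, not the exact Tech United code:

```c
/* Illustrative sketch of the border representation described above.
   Field names besides goodSide are our own assumptions. */
typedef enum { SIDE_LEFT, SIDE_RIGHT } Side;

typedef struct {
    double bottomX, bottomY;  /* "bottom" endpoint of the borderline */
    double topX, topY;        /* "top" endpoint of the borderline */
    Side goodSide;            /* side the user should walk on, viewed
                                 looking from bottom towards top */
} Borderline;

/* Example environment: two parallel borderlines forming a straight,
   2 m wide "sidewalk" (coordinates made up for illustration). */
static const Borderline borderCoordinates[] = {
    { 0.0, 0.0, 0.0, 10.0, SIDE_RIGHT },  /* left edge  */
    { 2.0, 0.0, 2.0, 10.0, SIDE_LEFT  },  /* right edge */
};
```

Defining more <code>Borderline</code> entries in this array is how a larger environment would be built up.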


For calculating the resistance, we take as inputs the coordinates of the robot in the area, the angle it is facing, and the force exerted on the robot expressed in its <code>x</code> and <code>y</code> components.
We modify the force vector to account for the rotation of the robot, such that all calculations happen in a global coordinate system.
Then, for each <code>Borderline</code> we calculate the distance from the robot to the line, and from that we determine the <code>Borderline</code> that is closest to the robot. Then we check which action is desired: no resistance, increased resistance, or full resistance.
The desired action is determined as follows:
We first determine the fraction of the line we are closest to. From this we calculate the <code>x</code> and <code>y</code> coordinates of the corresponding point, which is the point on the borderline that the robot is closest to. Afterwards, the vector of the force that pushes the robot has its length divided by the cosine of the angle between the force vector and the vector to the closest point on the border. This ensures that this vector extends all the way to the border, instead of merely along the force.
The length of the vector is reduced by the radius around the robot. With the direction vector and the force vector exerted on the robot, we determine the final angle in which the robot is pushed. Based on this angle we determine whether the robot is moving left or right. Considering the <code>goodSide</code> of the <code>BorderLine</code>, we now know whether the robot is moving towards or away from the <code>BorderLine</code>.
If the robot is moving away from the <code>BorderLine</code>, the robot will not give any resistance. If it is approaching the <code>BorderLine</code>, it will resist, and if the robot is beyond the <code>BorderLine</code> and still going further, it will give full resistance. In case the robot happens to be beyond the <code>BorderLine</code> but is moving back towards it, no resistance is given.
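The closest-point step above amounts to projecting the robot position onto each borderline segment. A sketch, assuming the bottom/top endpoint representation from earlier (the function name is our own):

```c
/* Closest point on a borderline to the robot at (px, py): project the
   robot position onto the line through the bottom (bx, by) and top
   (tx, ty) endpoints, and clamp the fraction t to [0, 1] so the result
   stays on the segment. This mirrors the "fraction of the line we are
   closest to" step described above. */
void closestPointOnBorder(double bx, double by, double tx, double ty,
                          double px, double py, double *cx, double *cy) {
    double dx = tx - bx, dy = ty - by;
    double t = ((px - bx) * dx + (py - by) * dy) / (dx * dx + dy * dy);
    if (t < 0.0) t = 0.0;   /* clamp: closest point is an endpoint */
    if (t > 1.0) t = 1.0;
    *cx = bx + t * dx;      /* point on the borderline closest to the robot */
    *cy = by + t * dy;
}
```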


The only thing left is calculating the amount of resistance the robot should give when it is approaching a <code>BorderLine</code>.
Otherwise, we calculate the acceleration the robot makes towards the <code>BorderLine</code>. We assume here that the robot starts from a stationary position.
With this acceleration we calculate the amount of time it will take before the robot reaches the <code>BorderLine</code>. The less time it takes to reach the border, the more resistance the robot will give.
The actual amount of resistance is calculated by linearly interpolating the time to reach the border between the so-called <code>RESISTANCE_TIME</code> and <code>STOP_TIME</code>. The value of <code>RESISTANCE_TIME</code> specifies the number of seconds from the border at which the robot should start to resist. <code>STOP_TIME</code> specifies the number of seconds from the border at which the robot should resist fully, so as to prevent moving any closer.
To account for the static resistance of the robot's wheels on the ground, we subtract a value of 0.5 (which was determined by analyzing log files of the robot) and scale the resistance. The resistance is then clamped between 0 and 1, to prevent extreme values.
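The interpolation step can be sketched as follows. The constant values match the 3 s and 0.5 s thresholds from the text; the static-offset subtraction and scaling from the log analysis is left out here, since the text does not specify its exact form:

```c
#define RESISTANCE_TIME 3.0  /* s from border: robot starts to resist */
#define STOP_TIME       0.5  /* s from border: robot resists fully */

/* Resistance from the predicted time-to-border: 0 at RESISTANCE_TIME
   seconds away, 1 at STOP_TIME seconds away, linear in between,
   clamped between 0 and 1. */
double resistanceFromTime(double timeToBorder) {
    double r = (RESISTANCE_TIME - timeToBorder) / (RESISTANCE_TIME - STOP_TIME);
    if (r < 0.0) r = 0.0;  /* far from the border: no resistance */
    if (r > 1.0) r = 1.0;  /* closer than STOP_TIME: full resistance */
    return r;
}
```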
==== Obstacle avoidance ====
Later, the feature to avoid obstacles was added. To make use of this, a list of coordinates of all obstacles has to be provided to the <code>getResistance()</code> function. Using logic very similar to how borders are avoided, the distance along the force vector to the obstacle is calculated. The length of this vector is reduced by the radius of the robot and the predefined radius of an obstacle. Then, the resistance to the closest obstacle is determined, and the final resistance is the maximum of the resistance to the closest <code>Borderline</code> and the resistance to this obstacle.
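The two obstacle steps above can be sketched as small helpers. <code>getResistance()</code> is the function named in the text; these helper names and signatures are our own:

```c
/* Distance left along the force vector before hitting the obstacle,
   reduced by the robot radius and the predefined obstacle radius
   (negative gaps mean contact, so they are clamped to zero). */
double obstacleGap(double distanceAlongForce,
                   double robotRadius, double obstacleRadius) {
    double gap = distanceAlongForce - robotRadius - obstacleRadius;
    return gap < 0.0 ? 0.0 : gap;
}

/* The final resistance is the maximum of the border term and the
   nearest-obstacle term. */
double combineResistance(double borderResistance, double obstacleResistance) {
    return borderResistance > obstacleResistance ? borderResistance
                                                 : obstacleResistance;
}
```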
==== Keeping the user in mind ====
There are two aspects of the code that have been specifically implemented to cater to the safety and comfort of the user. Firstly, the code has a <code>USER_HANDEDNESS</code> constant, which can be either <code>LEFT</code> or <code>RIGHT</code>. Depending on the dominant hand of the user, we need to provide a larger buffer zone on the side where the user is walking. We determine the angle towards the border or obstacle that is considered, and if this is in the bottom-right corner for right-handed users, or in the bottom-left corner for left-handed users, we subtract an additional radius from the vector to the border or obstacle.
Furthermore, to discourage the user from walking backwards with the robot, which has a risk of the robot hitting the user, we implement a minimum resistance if the force is facing backwards. This still allows the user to move backwards in cases where it is absolutely necessary, but the user will experience resistance, to indicate that he or she has to be careful.
=== Extra features ===
During the meeting with our tutors after the second testing phase, we realised that the robot needed some extra features. This is because the robot can walk backwards just as easily as it can walk forwards. However, this can be quite dangerous for the user, because the robot could for instance drive over his/her toes. Therefore, we implemented a safety feature: if the robot is guided by the user to walk backwards, it now gives some resistance. The robot is thus still able to move backwards, because we didn't want to restrict the user, but it will be safer.


=== Handle ===
We computed the length of the handle by taking the original height into account and making sure that the grip height corresponds to the original grip height that is common for guide dogs. The approximate length of an actual guide dog handle is 45 cm <ref name="HandleStats">Mijnhulphond. (n.d.). Flexibele beugel – vernieuwd! Retrieved March 6, 2018, from https://mijnhulphond.nl/product/beugel-voor-tuig/?v=796834e7a283 </ref>. Furthermore, it is attached at an angle of approximately 45° <ref name="HandleStats"/>. From this we calculated the length that our handle should approximately be.
To make the robot more adjustable to the user, we created a handle that is adjustable in two different ways. The length can be adjusted between 32 cm and 47 cm, and the angle between the robot and the handle can be changed. As a result, the robot and its handle can be used more easily by people of different heights. The length is adjustable because two parts are attached to each other with 2 screws on each side. However, this isn't very easy to use, because you have to remove all 4 screws before you can adjust the length. When the handle is produced for real users, a telescopic tube should be used for easier adjustment.


== Testing ==


=== Tuesday 2018-03-13 ===
During the first testing round we discovered an issue in our code. This was the prediction of movement we used, which is based on the current movement of the robot and computing how long it takes for the robot to cross a boundary. A property of the robot is that it requires some force before it is able to move at all, which we did not take into account. Also, there is an unknown issue somewhere, but the robot can create logs that contain what the robot is doing and its internal state, which we are going to use to determine the errors in our software. For testing the rest of the software, we quickly changed movement prediction into distance from the boundary, in order to test at least the procedures that compute the resistance to be provided by the robot. This turned out to work, so there is an issue with movement prediction. The next step is to fix this issue.


=== Monday 2018-03-19 ===
During the second testing round we implemented the new code, in which we fixed the issue with movement prediction. We also built a handle that can be attached to the robot, and we defined two different 'circuits' in the code for the robot to guide someone in.
Furthermore, we were told that the soccer robot was not allowed to rotate around its own axis, as this would disorient the robot, since it used wheel movement to keep track of its location. However, after some changes in the code, the robot now uses its camera to determine its location on the football field, which it was already able to do. This makes it easier to walk with the robot: we originally thought we needed to push it sideways to change direction, but due to the change in how the position is determined, we can now change the orientation of the robot. However, we now need to use the orientation angle in our computation that determines how much resistance the robot needs to provide. The only parameter that we need to change is the direction of the force vector that the user exerts on the robot, since this used to be robot-local. This is something that we still need to implement in the code.


Furthermore, we now have a handle that the user holds to 'push' the robot, so we need to determine how the robot should take the position of the user into account. This is the second thing that needs to be implemented in the code.


Finally, the robot needs to detect the locations of dynamic obstacles in the environment, but this is something that the Tech United team is going to implement, after which they will provide the locations as a parameter to the algorithm we are developing.


To summarize, we need to:
* Use the orientation angle in our algorithm to compute resistance, since the robot can now rotate around its axis.
* Include the user in the resistance computation, taking into account whether the robot is operated by a left-handed or a right-handed person.
* Use the obstacle position coordinates, which Tech United will provide us with as parameters, to avoid obstacles.


=== Monday 2018-03-26 ===
During the final testing round we incorporated the new features: the robot now takes its orientation and moving obstacles into account, and it reacts differently when it is 'pulled' backwards. During the testing round we noticed some issues that were still in the code; we fixed them afterwards and tested the code again later that day. We noticed that when obstacle avoidance is turned on (the robot locates and avoids the ball), the ball has to be located on the football field and the robot needs to 'see' it; otherwise, it assumes that the ball is located at its own position. We fixed this by adding a line stating that the ball is not treated as an obstacle if it is located at the same position as the robot. Furthermore, we noticed and fixed an error in the allocation of the boundaries, as well as a problem with the resistance that the robot gave when it was pulled backwards. Lastly, the robot initially started to give resistance when the ball was at a distance of 0.5 meters; this turned out to be a very large distance while testing, so we decreased it to 10 cm.
 
Demos were filmed for the presentation.
 
== Limitations ==
Since we used an already existing robot that was designed for a totally different purpose, namely playing soccer, there are limitations to what it can do. There are a few differences between the requirements for a soccer robot and those for a guiding robot, so the guiding robot inherits some limitations from the soccer robot it is based on.

Firstly, the robot can only move on flat, level ground. It may be possible to move on a flat surface with a small slope, but the slope cannot be so steep that gravity alone overcomes the robot's own friction and makes it move, since that movement would be interpreted as a force exerted by the user on the robot. A downside is that the user might need to push harder going "uphill", since the static friction is higher because the weight of the robot contributes to it; going "downhill" will be a lot easier, as gravity assists the user. This could be solved by using an actual force sensor that detects pushing while the robot remains stationary, since the force is currently derived from the movement of the robot.

Secondly, the robot is not able to detect lines other than those of the soccer field, so it cannot dynamically "detect" its environment. Because of this, the lines that the robot should not cross are hard-coded in the software. This is mainly because the robot can already detect the lines of the soccer field; if that software is replaced by something more advanced, the robot should work in a more dynamic setting and the lines would no longer have to be hard-coded.

Thirdly, due to the nature of the robot, it is currently only able to detect a single football as an obstacle, and if other soccer robots are in its path, it will just stop and stand still. The robot does recognize other obstacles, like people standing in its way, but it is not yet able to act on this or classify the obstacles.

Furthermore, when another robot drives towards it, or a football is moving in its path, the robot reacts by standing still, just like it would for a stationary football in its path. The robot waits until the moving obstacle has left its path, but it does not actively dodge.

Another limitation is the number of different, specialized sensors the robot needs for the program to work properly. This makes it harder to transform an already existing robot into a guiding robot, because it needs to be equipped with all the different sensors.

== Further research / improvements ==
The current robot is very limited in its use for the visually impaired, and a lot of research is needed, or a lot of technologies need to be implemented, before it can properly guide a visually impaired person. The first improvement the user would benefit a lot from is auditory feedback. The only feedback the robot currently gives is haptic, through the handle. However, it would be useful if the robot could tell the user about an obstacle, or warn them to step down (e.g. when stepping off a sidewalk). This way the user would actually know whether there is an obstacle, whether it is moving, and whether they are reaching the border of the sidewalk.

Secondly, the project could benefit from the implementation of voice recognition, which would allow the robot to act on a person's commands. Eventually, when for instance a GPS is installed, the user could order the robot to take him or her to the closest supermarket.

Thirdly, if software is developed with which the robot can actively 'see' or recognize boundaries, it should be implemented. This would make the software easier to apply and more usable, because the boundaries would no longer have to be hard-coded.
== Demo ==
We have made a demonstration video of the robot guiding someone wearing a blindfold. The Cursor has written an article about this project that includes our video.
Click [https://www.cursor.tue.nl/en/news/2018/april/week-2/football-robot-turns-into-guide-dog-for-the-blind/ here] for the article.


== Milestones ==
During this project, the following milestones have been determined. They may be expanded once we have a better understanding of how we are going to tackle the project.
Especially the decision on whether to use an existing robot or to build one ourselves will heavily influence these milestones and their deadlines.


{| class="wikitable" border="1" style="border-collapse:collapse"
! style="font-weight: bold;" | Milestone
! style="font-weight: bold;" | Deadline
|-
| Research is complete
| Week 1
|-
| Hardware is available (either full robot or necessary parts)
| Week 3
|-
| Robot can give haptic feedback
| Week 3
|-
| Robot can stay between predefined borderlines
| Week 4
|-
| Robot can detect and react to predefined obstacles
| Week 6
|}


== Planning ==
| Discuss initial project
| All
| 2h
| Week 1
|-
| Week 4
|-
| Planning of project, coaching questions and tasks
| Anne
| 2 h
| Thomas
| 2 h
| Week 4
|-
| Create initial code
| Jarno
| 3 h
| Week 4
|-
| Create and design a handle for the robot
| Anne
| 4 h
| Week 5
|-
| Arrange cones ('pionnen')
| Thomas & Renée
| 1 h
| Week 5
|-
| Coaching questions and wiki
| Anne
| 1 h
| Week 5
|-
| 1h
| Week 5
|-
| Meeting
|
|-
| Incorporate feedback on handle
| Anne
| 2 h
| Week 6
|-
| Incorporate feedback on code
| Jarno
| 2 h
| Week 6
|-
| Prepare presentation
| Thomas, Renée & Anne
| 3 h
| Week 6
|-
| Adjust Wiki
| Jarno & Anne
| 2 h
| Week 6
|-
| Elaborate limitations
| Dylan
| 2 h
| Week 6
|-
| Meeting Tech United
| Jarno, Renée, Thomas & Anne
| 3h
| Week 6
|-
| Creating demonstration movie
| Thomas
| 6h
| Week 6
|-
|
|-
| Presentation
| All
| 5 h
| Week 7
|-
| Adjust wiki
| All
| 2 h
| Week 7
|-
| Final Coaching questions
| All
| 1 h
| Week 7
|}
== Deliverables ==
Prototype: the open-source code implemented on a Tech United soccer robot, the handle, and the demonstration videos.


== References ==
<references />

Latest revision as of 08:52, 12 April 2018

Guiding Robot

Group members:

  • Anne Kolmans
  • Dylan ter Veen
  • Jarno Brils
  • Renée van Hijfte
  • Thomas Wiepking


Coaching Questions Group 12

Introduction

Globally there are about 285 million visually impaired people, of which 39 million are totally blind[1]. There is a shortage of guide dogs to support all visually impaired persons. In Korea alone, for instance, there are only about 65 guide dogs in total for about 45,000 visually impaired people. Next to the shortage of guide dogs, there are also some limitations to their use. For instance, some people are allergic to certain dog breeds or have a social aversion to dogs, which makes them unsuited to being guided by a guide dog. Furthermore, guide dogs impose some extra tasks on the user: they need to be fed, walked, etc. Lastly, the training of guide dogs is very difficult and only 70% of the trained dogs will eventually be qualified to guide the visually impaired[2]. Due to the shortage of guide dogs and support tools for the visually impaired, there is a need for innovative ways to support them. Since a lot of different robots are already available, we propose to convert an already available robot, in this case a soccer robot, into a guiding robot.

Project Goal

Here is a brief explanation of what we intend to do during our project. Tech United has soccer robots which play in actual leagues. These robots use a camera to view the surrounding environment and build a model of it in their system. Our goal is to convert one of these soccer robots into a robotic guide, which assists in walking around. When the robot is directed in a certain direction, it will drive in that direction, and it will provide resistance if the user should not continue walking that way, for instance because there is an obstacle.

The robot will do this in a restricted environment, as it is only physically able to drive over level ground. The robot has a pre-programmed map that resembles the static environment and can detect other soccer robots as dynamic objects. Furthermore, the robot is equipped with a handle so that the person can easily "push" the robot, and one can select whether the left or the right hand is used to operate it. This is taken into account, as the position of the person determines when resistance needs to be exerted.

At the end of the project we will deliver the open-source code that we implemented on a soccer robot to convert it into a guiding robot. Furthermore, we built a handle that enables the robot to guide a person. Lastly, we will show demos of the implemented code running on the robot.

Objectives

Autonomous

We want to accomplish that the robot can guide a user through an environment, possibly filled with moving or stationary obstacles. This process should be fully autonomous, hence the robot has to make all decisions on its own given the user's inputs. This goal is important since visually impaired people will not be able to guide the robot through its navigation process; if that were necessary, the complete purpose of the robot would be defeated. Therefore, the only actor involved in this process is the robot itself. The robot must be able to guide itself and the user over paved, flat areas. This type of area is also the kind of terrain an average person walks over the most.

The scanning setup will consist of one camera. The robot will not have advanced radar equipment. Also, the robot will not be able to travel on paths with height differences, like stairways. The robot must be able to guide exactly one person autonomously through an environment matching our environmental constraints. This task will be accomplished when a person can be guided over an area safely, without hitting obstacles on the path or entering restricted areas. This goal can be accomplished by implementing obstacle recognition software for the attached camera. Together with an avoidance algorithm, the robot will be able to navigate around obstacles on its path. By using constraints such as no advanced radar equipment and a restricted area type, this goal is realizable.

This goal should be realized within 6 weeks. If the robot is not guiding autonomously at this point, there will be no time to make changes in the software and/or hardware of the robot.

Easy to use

The guiding robot must be easy to use. This goal is important since the user's capability of seeing the robot is limited, and any confusion regarding the functioning of the robot must be prevented at all times, since confusion could lead to dangerous situations for the user. This goal involves both the robot and its user.

The interface of the robot consists of an object the user can hold on to. At all times, the feedback of the robot must be clear to the user. Since the user is visually impaired, the feedback cannot consist of any visual elements. The feedback can consist of physical resistance when the user is about to do something that could lead to a dangerous situation: if the user pushes the robot in a dangerous direction, the robot will resist; otherwise, the robot will assist. By keeping the user interface simple, it will be realistic to implement.

This easy-to-use user interface must be defined in week 3 and implemented by week 7 at the latest. Once the type of user interface is defined, we can already search for methods to implement it in our robot.

Safety guaranteed

At all times, we want the user interacting with the robot to be safe. The purpose of the robot is to prevent accidents involving its user; if the robot is not programmed in such a way that most risky situations are prevented, it has no purpose. This goal involves both the user and the robot itself. In each situation the user must be in a position that the robot can take into account when making safety decisions. For example, the user can stand on the left or right side of the robot, holding a handle, so that the user will not be anywhere else than on that side of the robot. The position of the user must be known to the robot.

This goal can be measured by simulating several scenarios that contain a dangerous situation and checking whether the robot prevents the user from coming to harm. When the robot passes all scenarios, this goal is reached.

Target User Group

The robot that we want to develop will replace the guide dog. It will not only affect the visually impaired, but also other people in the environment. There are three different levels of users, namely primary, secondary and tertiary users:

  • The primary users are the visually impaired
  • The secondary users are other people in the environment
  • The tertiary users are the developers of the robot


The users have different requirements of the robot, which are listed below.

Visually impaired (primary users)

  • The guiding robot makes it safe to walk
  • The guiding robot increases freedom of movement

The surroundings (secondary users)

  • The guiding robot detects cars, bicycles and pedestrians
  • The guiding robot guides the user around obstacles
  • The guiding robot walks where it is allowed and safe to walk

Developers (tertiary users)

  • The software is easy to adapt
  • The guiding robot has as little maintenance as possible
  • The guiding robot is easy to adapt to other users' needs

Plan

We intend to convert a soccer robot, namely the Tech United robot[3], into a prototype that can guide a visually impaired person in a restricted environment. Our plan is to accomplish:

  • Research into:
    • Guide dogs and how they perform their tasks
    • Different ways of environment perception and object avoidance
    • The Tech United robot
  • Get into contact with Tech United
  • Ask for the specifications of the Tech United robot
  • Determine which existing capabilities of the robot we can use
  • Determine what functionality needs to be added
  • Add the necessary basic features
  • Test the robot with the additional basic features
  • Determine possibilities for extra features (for example voice recognition) and possibly incorporate them

Specifications

Here we will provide specifications of different elements of our project.

Environment

  • Soccer field at TU/e
  • Hard coded environment boundaries (simulating sidewalk)

Robot [4]

  • Operates on a floor area of 12 m x 8 m
  • Height 783 mm
  • Mass 36.3 kg
  • Knows location in environment (x and y coordinates in local axis frame)
  • Knows orientation in environment (in same local axis frame by means of an angle theta)
  • Knows force that is being exerted on robot by user (using a force vector)

Handle

  • Attachment height of 783 mm, at the top of the robot
  • Handle approximately 40 cm long
  • Handle width is 14.5 cm
  • Three screws used to attach to the robot
  • Adjustable to be easy to use for different people

The haptic feedback is delivered through the resistance that the robot gives back to the person via the handle. The robot starts to give resistance when it would reach a border within 3 seconds. Furthermore, when it would reach the border within 0.5 seconds, it stops and thus provides full resistance to the user, giving them the notion that they have reached the border of the specified area. For safety purposes, the robot also gives some resistance when the user tries to go backwards.

Progress

In this section we will explain how the project is coming along.

Research (State of the Art)

The State of the Art can be found here.

Tech United

We came into contact with W.J.P. Kuijpers, the team leader of Tech United. A meeting was scheduled in which we discussed our plan and what we would like to do with the robot. He was quite enthusiastic about our intentions, and together we came up with the first step towards accomplishing our goal: to program a function, in the C language, that given its inputs (the robot's position in x and y coordinates, the robot's orientation as an angle, and the force exerted on the robot by the visually impaired person as a vector) returns the resistance the robot should exert. Additional information available to the robot are the boundaries, represented as lines, which we hard-coded. These boundaries represent the sidewalk that the robot should keep its user in between. We hard-code these boundaries since recognizing boundaries, or where the user is able to walk, is extremely difficult in real life. Eventually, when an algorithm is developed for this, it can be incorporated in the robot to make it more usable and advanced.

Once the robot is able to do this, we will extend its functionality so that it sees other Tech United robots as dynamic objects in the world and treats static objects as hard-coded obstacles, which it does not detect with sensors but which can be visualized by placing any physical object at the corresponding position. This means that we can use the robots to simulate other pedestrians, cyclists, cars, etc., and hard-coded objects to simulate trees, walls, lampposts, etc., which we visualize using objects placed at the correct positions in the environment.

Functionalities to add

The functionalities we would like to add are mostly explained in the Tech United section. However, to give a simple overview:

  • Let the robot guide between hard coded boundaries
  • React to dynamic objects in an environment (represented by other Tech United Robots)
  • React to static objects in an environment (hard coded, but visualized using objects)

The code

The code is open-source and available on GitHub under the Apache 2.0 license.

The algorithm we created to determine robot resistance works as follows:

We define the lines that constitute the borders that the robot has to keep the user in between. We store the coordinates of the endpoints of these lines as doubles in borderCoordinates. Each pair of (x,y) coordinates defines a BorderLine, and by defining multiple BorderLines we can create our own environment. For each BorderLine we also define the side of the line a user should walk on, viewed from the given bottom coordinate towards the top coordinate.
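
The border representation described above can be sketched in C, the language the resistance function was written in. The struct layout, the Side enum and the example corridor below are illustrative assumptions; the names BorderLine, borderCoordinates and goodSide follow this text, but the actual definitions in the GitHub code may differ.

```c
#include <stddef.h>

/* Side of the line the user should walk on, viewed from the bottom
   endpoint towards the top endpoint. */
typedef enum { SIDE_LEFT, SIDE_RIGHT } Side;

typedef struct {
    double x1, y1;   /* bottom endpoint (m) */
    double x2, y2;   /* top endpoint (m)    */
    Side goodSide;   /* side the user should stay on */
} BorderLine;

/* Illustrative environment: two parallel borderlines forming a 2 m
   wide, 10 m long "sidewalk" corridor. */
static const BorderLine borderCoordinates[] = {
    { 0.0, 0.0, 0.0, 10.0, SIDE_RIGHT },
    { 2.0, 0.0, 2.0, 10.0, SIDE_LEFT  },
};

static const size_t numBorderLines =
    sizeof borderCoordinates / sizeof borderCoordinates[0];
```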

For calculating the resistance, we take as inputs the coordinates of the robot in the area, the angle it is facing, and the force exerted on the robot expressed in its x and y components. We modify the force vector to account for the rotation of the robot, so that all calculations happen in a global coordinate system. Then, for each BorderLine we calculate the distance from the robot to the line, and from that we determine the BorderLine that is closest to the robot. Then we check which action is desired: no resistance, increased resistance or full resistance. The desired action is determined as follows. We first determine the fraction of the closest line at which the nearest point lies, and from this we calculate the x and y coordinates of that point: the point on the BorderLine that the robot is closest to. Afterwards, the length of the force vector that pushes the robot is divided by the cosine of the angle between the force vector and the vector to the closest point on the border; this ensures that the vector extends all the way to the border, instead of merely along the force. The radius of the robot is then subtracted from the length of this vector. With the direction vector and the force vector exerted on the robot, we determine the final angle in which the robot is pushed. Based on this angle we determine whether the robot is moving left or right, and considering the goodSide of the BorderLine, we then know whether the robot is moving towards or away from the BorderLine. If the robot is moving away from the BorderLine, it gives no resistance; if it is approaching the BorderLine, it resists; and if it is beyond the BorderLine and still going further, it gives full resistance. In case the robot happens to be beyond the BorderLine but is moving back towards it, no resistance is given.

What remains is calculating the amount of resistance the robot should give when it is approaching a BorderLine. Obviously, when the robot is on or has passed the BorderLine, it gives full resistance. Otherwise, we calculate the acceleration the robot makes towards the BorderLine, assuming that the robot starts from a stationary position. With this acceleration we calculate the amount of time it will take before the robot reaches the BorderLine. The less time it takes to reach the border, the more resistance the robot gives.

The actual amount of resistance is calculated by linearly interpolating the time to reach the border between the so-called RESISTANCE_TIME and STOP_TIME. The value of RESISTANCE_TIME specifies the number of seconds from the border at which the robot should start to resist; STOP_TIME specifies the number of seconds from the border at which the robot should resist fully, so as to prevent moving any closer. To account for the static resistance of the robot's wheels on the ground, we subtract a value of 0.5 (which was determined by analyzing log files of the robot) and scale the resistance. The resistance is then clamped between 0 and 1, to prevent extreme values.
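
One plausible reading of this interpolation, sketched in C. The RESISTANCE_TIME and STOP_TIME values follow the specifications above (3 s and 0.5 s); exactly how the 0.5 static-friction offset is subtracted and rescaled in the real code is not fully specified here, so it is left out of the sketch.

```c
#include <math.h>

#define RESISTANCE_TIME 3.0  /* s from the border: start resisting */
#define STOP_TIME       0.5  /* s from the border: resist fully    */

/* Linearly map the time needed to reach the border onto a resistance
   in [0,1]: 0 at RESISTANCE_TIME or more, 1 at STOP_TIME or less.
   The real code additionally applies the 0.5 static-friction offset. */
double resistanceFromTime(double t) {
    double r = (RESISTANCE_TIME - t) / (RESISTANCE_TIME - STOP_TIME);
    if (r < 0.0) r = 0.0;  /* far from the border */
    if (r > 1.0) r = 1.0;  /* at or past the border */
    return r;
}
```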

Obstacle avoidance

Later, the feature to avoid obstacles was added. To make use of it, a list of coordinates of all obstacles has to be provided to the getResistance() function. Using logic very similar to how borders are avoided, the distance along the force vector to the obstacle is calculated. The radius of the robot and the predefined radius of an obstacle are subtracted from the length of this vector. Then the resistance towards the closest obstacle is determined, and the final resistance is the maximum of the resistance towards the closest BorderLine and the resistance towards this obstacle.

Keeping the user in mind

There are two aspects of the code that have been specifically implemented to cater to the safety and comfort of the user. Firstly, the code has a USER_HANDEDNESS constant, which can be either LEFT or RIGHT. Depending on the dominant hand of the user, we need to provide a larger buffer zone on the side where the user is walking. We determine the angle towards the border or obstacle under consideration, and if this angle lies in the bottom-right quadrant for right-handed users, or in the bottom-left quadrant for left-handed users, we subtract an additional radius from the vector to the border or obstacle.
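
A hedged sketch of this check. The angle convention (0 pointing forwards, positive angles to the left, so the "bottom" quadrants are behind the robot) and the buffer value are our own assumptions; only the USER_HANDEDNESS constant is named in the text.

```c
typedef enum { LEFT, RIGHT } Handedness;

#define PI              3.14159265358979323846
#define USER_HANDEDNESS RIGHT
#define USER_RADIUS     0.40  /* m, illustrative buffer for the user */

/* Extra radius to subtract when the border or obstacle lies on the
   side the user walks on.  'angle' is the robot-local direction of
   the border/obstacle in radians. */
double userBuffer(double angle) {
    int bottomRight = (angle > -PI && angle < -PI / 2.0);
    int bottomLeft  = (angle >  PI / 2.0 && angle <  PI);
    if ((USER_HANDEDNESS == RIGHT && bottomRight) ||
        (USER_HANDEDNESS == LEFT  && bottomLeft))
        return USER_RADIUS;
    return 0.0;
}
```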

Furthermore, to discourage the user from walking backwards with the robot, which carries the risk of the robot hitting the user, we implement a minimum resistance when the force points backwards. This still allows the user to move backwards in cases where it is absolutely necessary, but the user will experience resistance, indicating that he or she has to be careful.
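
A minimal sketch of such a floor on the resistance, assuming the robot-local forward axis is the first force component; the constant is illustrative, not the value from the source.

```c
#define MIN_BACKWARD_RESISTANCE 0.2  /* illustrative, not from the source */

/* Enforce a minimum resistance whenever the robot-local force points
   backwards (negative forward component; the axis convention is an
   assumption).  The user can still back up, but feels a warning. */
double applyBackwardMinimum(double resistance, double forwardForce) {
    if (forwardForce < 0.0 && resistance < MIN_BACKWARD_RESISTANCE)
        return MIN_BACKWARD_RESISTANCE;
    return resistance;
}
```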

Extra features

During the meeting with our tutors after the second testing phase, we realised that the robot needed some extra features. This is because the robot can walk backwards just as easily as it can walk forwards. However, this can be quite dangerous for the user, because the robot could, for instance, drive over his or her toes. Therefore, we implemented a safety feature: if the user guides the robot backwards, it now gives some resistance. The robot is still able to move backwards, because we did not want to restrict the user, but it is now safer.

Handle

We computed the length of the handle by taking the original height into account and making sure that the grip height corresponds to the grip height that is common with guide dogs. The approximate length of an actual guide-dog handle is 45 cm [5], and it is attached at an angle of approximately 45° [5]. From this we calculated the approximate length that the handle should have. To make the robot more adjustable to the user, we created a handle that is adjustable in two ways: the length can be adjusted between 32 cm and 47 cm, and the angle between the robot and the handle can be changed. As a result, the robot and its handle can more easily be used by people of different heights. The length is adjustable because two parts are attached to each other with two screws on each side. However, this is not very easy to use, since all four screws have to be removed before the length can be adjusted. If the handle were produced for real users, a telescopic tube should be used for easier adjustment.
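The grip-height calculation amounts to simple trigonometry. Only the 45 cm handle length and the 45° angle come from [5]; the dog back height and the height of the attachment point on the robot below are assumed values for illustration, not measurements from the project.

```python
import math

DOG_HANDLE_LENGTH = 0.45         # m, typical guide-dog handle length [5]
HANDLE_ANGLE = math.radians(45)  # attachment angle of the handle [5]
DOG_BACK_HEIGHT = 0.55           # m, assumed height of a guide dog's back
ROBOT_MOUNT_HEIGHT = 0.60        # m, assumed height of the attachment point on the robot

# Grip height with a guide dog: back height plus the vertical component of the handle.
grip_height = DOG_BACK_HEIGHT + DOG_HANDLE_LENGTH * math.sin(HANDLE_ANGLE)

# Handle length on the robot needed to reach the same grip height at the same angle.
robot_handle_length = (grip_height - ROBOT_MOUNT_HEIGHT) / math.sin(HANDLE_ANGLE)
```

With these assumed heights, the required length falls inside the 32–47 cm range that the adjustable handle covers.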

Testing

Tuesday 2018-03-13

During the first testing round we discovered an issue in our code: the movement prediction, which is based on the current movement of the robot and computes how long it takes for the robot to cross a boundary. A property of the robot is that it requires some force before it moves at all, which we had not taken into account. There is also an unknown issue somewhere; since the robot can create logs of what it is doing and of its internal state, we are going to use these to track down the errors in our software. To test the rest of the software, we temporarily replaced the movement prediction by the distance from the boundary, so that we could at least test the procedures that compute the resistance the robot should provide. This worked, confirming that the issue lies in the movement prediction. The next step is to fix this issue.

Monday 2018-03-19

During the second testing round we deployed the new code, in which we fixed the issue with movement prediction. We also built a handle that can be attached to the robot, and defined two different 'circuits' in the code for the robot to guide someone through. Furthermore, we had been told that the soccer robot was not allowed to rotate around its own axis, as this would disorient it, since it used wheel movements to keep track of its location. After some changes in the code, however, the robot now uses its camera to determine its location on the football field, which it was already able to do. This makes walking the robot easier: we originally thought we needed to push it sideways to change direction, but thanks to the change in how the position is determined, we can now change the orientation of the robot. We do, however, need to use the orientation angle in the computation that determines how much resistance the robot provides. The only parameter that needs to change is the direction of the force vector that the user exerts on the robot, since this used to be expressed in the robot's local frame. This still needs to be implemented in the code.

Furthermore, we now have a handle that we use to 'push' the robot, and we need to determine how the robot should take the position of the user into account. This is the second thing that needs to be implemented in the code.

Finally, the robot needs to detect the locations of dynamic obstacles in the environment, but this is something that the team of Tech United is going to implement, after which they will provide this as a parameter to the algorithm we are developing.

To summarize, we need to:

  • Use the orientation angle in our algorithm to compute resistance, since the robot can now rotate around its axis.
  • Include the user in the resistance computation, taking into account whether the robot is operated by a left-handed or right-handed person.
  • Use the obstacle position coordinates, which Tech United is going to provide as parameters, to avoid obstacles.
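The first item, converting the user's force vector from the robot's local frame into the field frame, amounts to a standard 2-D rotation by the orientation angle. The function name and signature below are illustrative:

```python
import math

def local_to_field(force_x: float, force_y: float, orientation: float):
    """Rotate a force vector from the robot's local frame into the field
    frame, given the robot's orientation angle in radians."""
    fx = force_x * math.cos(orientation) - force_y * math.sin(orientation)
    fy = force_x * math.sin(orientation) + force_y * math.cos(orientation)
    return fx, fy
```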

Monday 2018-03-26

During the final testing round we incorporated the new features: the robot now takes its orientation and moving obstacles into account, and it reacts differently when it is 'pulled' backwards. During the testing round we noticed some remaining issues in the code. We changed them afterwards and tested the code again later that day. We noticed that when obstacle avoidance is turned on (the robot locates and avoids the ball), the ball has to be located on the football field and the robot needs to 'see' it; otherwise, it thinks that the ball is located at its own position. We fixed this by adding a line stating that the ball is not treated as an obstacle if it is located at the same position as the robot. Furthermore, we noticed and fixed an error in the allocation of the boundaries, as well as a problem with the resistance the robot gave when pulled backwards. Lastly, the robot initially gave resistance when the ball was at a distance of 0.5 m; this turned out to be a very large distance during testing, so we decreased it to 10 cm.

Demos were filmed for the presentation.

Limitations

Since we used an already existing robot, designed for a totally different purpose, there are limitations to what it can do: it is specialized in playing soccer. There are a few differences between the requirements for a soccer robot and those for a guiding robot, so the guiding robot has some limitations due to the fact that the robot we are using is a soccer robot.

Firstly, the robot can only move on flat, level ground. It may be possible to move on a flat surface with a small incline, but the incline cannot be so steep that gravity alone overcomes the robot's friction and makes it move, since that movement would be interpreted as a force exerted by the user on the robot. A further downside is that the user may need to push harder going "uphill", since the static friction is higher because the weight of the robot contributes to it; going "downhill" is a lot easier, as gravity assists and the user does not need to push as hard. This could be solved by using an actual force sensor that detects pushing while the robot remains stationary, since the force is currently derived from the movement of the robot.

Secondly, the robot is not able to detect lines other than those of the soccer field, so it cannot dynamically "detect" its environment. Because of this, the lines that the robot should not cross are hardcoded in the software. This is mainly because the robot can already detect the lines of the soccer field; if that software were replaced by something more advanced, the robot should work in a more dynamic setting and the lines would no longer have to be hardcoded.

Thirdly, due to the nature of the robot, it is currently only able to detect a single football as an obstacle; if other soccer robots are in its path, it will simply stop and stand still. The robot does recognize other obstacles, such as people, when they stand in its way, but it is not yet able to act on this or classify the obstacles.

Furthermore, it is able to detect another robot driving towards it, or a moving football, and it will react by standing still, just as it would for a football lying still in its path. The robot will wait until the moving obstacle has left its path, but it will not actively dodge.

Another limitation is the number of different, specialized sensors that the robot needs for it and the program to work properly. This makes it harder to transform an existing robot into a guiding robot, because it needs to be equipped with all the different sensors.

Further research / improvements

The current robot is very limited in its use for the visually impaired, and a lot of research or additional technology is needed before it can properly guide a visually impaired person. The first improvement the user could benefit from is auditory feedback. At the moment, the only feedback the robot gives is haptic, through the handle. It would be useful if the robot could tell the user about an obstacle, or warn when they have to step down (e.g. when stepping off a sidewalk). This way the user would actually know whether there is an obstacle, whether it is moving, and whether they are reaching the edge of the sidewalk.

Secondly, the project could benefit from the implementation of voice recognition, which would allow the robot to act on a person's commands. Eventually, once GPS is installed for instance, the user could order the robot to take him or her to the closest supermarket.

Thirdly, if software is developed with which the robot could actively 'see' or recognize boundaries, it should be implemented. This would make the software easier to apply and more usable, because the boundaries would no longer have to be hardcoded.

Demo

We have made a demonstration video of the robot guiding someone wearing a blindfold. The Cursor has written an article about this project that includes our video. Click here for the article.

Milestones

During this project, the following milestones have been determined. They may be expanded once we have a better understanding of how we are going to tackle the project. Especially the decision on whether to use an existing robot or creating a robot will heavily influence these milestones and their deadlines.

Milestone Deadline
Research is complete Week 1
Hardware is available (either full robot or necessary parts) Week 3
Robot can give haptic feedback Week 3
Robot can stay between predefined borderlines Week 4
Robot can detect and react to predefined obstacles Week 6

Planning

Task Who Duration When
Discuss initial project All 2h Week 1
Research All 10h Week 1
Subject (wiki) Anne 2h Week 1
Users (wiki) Renée 2h Week 1
SMART objectives (wiki) Thomas 3h Week 1
Approach (wiki) Dylan 1h Week 1
Deliverables (wiki) Dylan 1h Week 1
Milestones (wiki) Jarno 1h Week 1
Planning (wiki) Jarno 1h Week 1
Discuss week 1 tasks All 2h Week 1
State of the art (wiki)
- Perceiving the environment Dylan 2h Week 1
- Obstacle avoidance Renée 2h Week 1
- GPS navigation and voice recognition Thomas 2h Week 1
- Robotic design Jarno 2h Week 1
- Guiding dogs Anne 2h Week 1
Meeting preparation Thomas 1h Week 1
Meeting All 1h Week 1
Determine specific deliverables All 4h Week 2
Add details to planning All 3h Week 2
Meeting preparation Jarno 1h Week 2
Meeting All 1h Week 2
Discussing scenario Renée & Dylan 1h Week 3
Updating planning Jarno 1h Week 3
Contacting relevant organizations/persons Thomas 1h Week 3
Meeting Tech United Thomas, Anne, Renée & Jarno 1h Week 3
Meeting preparation Renée 1h Week 3
Meeting All 1h Week 3
Adapt Wiki Renée & Dylan 2 h Week 4
Planning of project, coaching questions and tasks Anne 2 h Week 4
SoTa Voice recognition Thomas 2 h Week 4
Create initial code Jarno 3 h Week 4
SoTa Turtle Jarno 2 h Week 4
Meeting Tech United Thomas, Renée, Jarno & Dylan 1 h Week 4
Meeting preparation Anne 1h Week 4
Meeting All 1h Week 4
Incorporate feedback on code Jarno 2 h Week 5
Create and design a handle for the robot Anne 4 h Week 5
Movement prediction and clear bugs in code Jarno 3 h Week 5
Adjust objectives Thomas 1 h Week 5
Pseudocode Thomas & Renée 2 h Week 5
Individualize code and Wiki Dylan 2 h Week 5
Milestones & Deliverables Thomas & Renée 1 h Week 5
Arrange cones ('pionnen') Thomas & Renée 1 h Week 5
Coaching questions and wiki Anne 1 h Week 5
Meeting with Tech United All 1 h Week 5
Meeting preparation Dylan 1h Week 5
Meeting All 1h Week 5
Incorporate feedback on handle Anne 2 h Week 6
Incorporate feedback on code Jarno 2 h Week 6
Prepare presentation Thomas, Renée & Anne 3 h Week 6
Adjust Wiki Jarno & Anne 2 h Week 6
Elaborate limitations Dylan 2 h Week 6
Meeting Tech United Jarno, Renée, Thomas & Anne 3h Week 6
Creating demonstration movie Thomas 6h Week 6
Presentation All 5 h Week 7
Adjust wiki All 2 h Week 7
Final Coaching questions All 1 h Week 7

References

  1. Cho, K. B., & Lee, B. H. (2012). Intelligent lead: A novel HRI sensor for guide robots. Sensors (Switzerland), 12(6), 8301–8318. https://doi.org/10.3390/s120608301
  2. Bray, E. E., Sammel, M. D., Seyfarth, R. M., Serpell, J. A., & Cheney, D. L. (2017). Temperament and problem solving in a population of adolescent guide dogs. Animal Cognition, 20(5), 923–939. https://doi.org/10.1007/s10071-017-1112-8
  3. Tech United, The Turtle. http://www.techunited.nl/en/turtle
  4. Alaerds, R. (2010). Generation 2011. Retrieved March 7, from http://www.techunited.nl/wiki/index.php?title=Generation_2011
  5. Mijnhulphond. (n.d.). Flexibele beugel – vernieuwd! Retrieved March 6, 2018, from https://mijnhulphond.nl/product/beugel-voor-tuig/?v=796834e7a283