PRE2017 3 Groep12
Guiding Robot
Group members:
- Anne Kolmans
- Dylan ter Veen
- Jarno Brils
- Renée van Hijfte
- Thomas Wiepking
Introduction
Globally there are about 285 million visually impaired people, of which 39 million are totally blind[1]. There is a shortage of guide dogs to support all visually impaired persons. For instance, in Korea alone there are about 65 guide dogs in total and about 45,000 visually impaired people. Next to the shortage of guide dogs, there are also some limitations to their use. For instance, some people are allergic to certain dog breeds or have a social aversion to dogs, which makes them unsuited to be guided by a guide dog. Furthermore, guide dogs impose extra tasks on the user: they need to be fed, walked, etc. Lastly, the training of guide dogs is very difficult and only 70% of the trained dogs will eventually be qualified to guide the visually impaired[2]. Due to the shortage of guide dogs and support tools for the visually impaired, there is a need for innovative ways to support them. Since a lot of different robots are already available, we propose to convert an existing robot, in this case a soccer robot, into a guiding robot.
Project Goal
Here is a brief explanation of what we intend to do during our project. Tech United has football robots that play in actual leagues. These robots use a camera to view the surrounding environment and build a model of it in their system. Our goal is to convert one of these football robots into a robotic guide dog that assists its user in walking around. The user directs the robot in a certain direction; the robot drives that way and provides resistance if the user should not continue walking in that direction, for instance because there is an obstacle.
The robot will do this in a restricted environment, as it is only physically able to drive over level ground. The robot has a pre-programmed map that resembles the static environment and can detect other soccer robots as dynamic objects. Furthermore, the robot is equipped with a handle such that the person can easily "push" the robot, and the user can select whether they operate the robot with their left or right hand. This is taken into account because the position of the person determines when resistance needs to be exerted.
Target User Group
The robot that we want to develop will replace the guide dog. This robot will not only affect the visually impaired, but also other people in the environment. There are a few different levels of users, namely primary, secondary and tertiary users:
- The primary users are the visually impaired.
- The secondary users are other people in the environment
- The tertiary users are the developers of the robot
The users have different requirements for the robot, which are listed below.
Visually impaired (primary users)
- The guiding robot makes it safe to walk
- The guiding robot increases freedom of movement
The surroundings (secondary users)
- The guiding robot detects cars, bicycles and pedestrians
- The guiding robot guides the user around obstacles
- The guiding robot drives only where it is allowed and safe to walk
Developers (tertiary users)
- The guiding robot's software is easy to adapt
- The guiding robot requires as little maintenance as possible
- The guiding robot is easy to adapt to other users' needs
Plan
We intend to convert a soccer robot, namely the Tech United robot[3], into a prototype that can guide a visually impaired person in a restricted environment. Our plan to accomplish this is as follows:
- Research into:
- Guiding dogs and how they perform their tasks
- Different ways of environment perception and object avoidance
- Tech United robot
- Get into contact with Tech United
- Ask the specifications of the Tech United robot
- Determine what capabilities we can use that already exist in the robot
- Determine what functionality needs to be added
- Add the necessary basic features
- Test the robot with the additional basic features
- Determine possibilities for extra features (for example voice recognition) and possibly incorporate them
Specifications
Here we will provide specifications of different elements of our project.
Environment
- Soccer field at TU/e
- Hard-coded environment boundaries (simulating a sidewalk)
Robot [4]
- Floor area of 500 mm x 500 mm
- Height 783 mm
- Mass 36.3 kg
- Knows location in environment (x and y coordinates in local axis frame)
- Knows orientation in environment (in same local axis frame by means of an angle theta)
- Knows force that is being exerted on robot by user (using a force vector)
Handle
- Actual guide dog handle approximately 45 cm long [5]
- Actual guide dog attachment under an angle of approximately 45° [5]
- Our guiding robot attachment height of 783 mm, at the top of the robot
- Our guiding robot handle 40 cm long
- Handle width is 15 cm.
- Three screws are used to attach the handle to the robot.
- The handle has to be adjustable so that it is easy to use for different persons.
We computed the length of the handle by taking the original height into account and making sure that the grip height corresponds to the original grip height.
The haptic feedback will be delivered through resistance that the robot exerts on the person via the handle. The robot starts to give resistance when it would reach the border within 3 seconds; when it would reach the border within 0.5 seconds, it stops and thus provides full resistance to the user, giving them the notion that they have reached the border of the specified area.
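As an illustration, this thresholding could be coded as in the following C sketch; the names are our own placeholders and the linear scaling between the two thresholds is an assumption, not the actual robot code:

```c
/* Sketch of the resistance thresholds described above (illustrative names).
 * timeToBorder is the predicted time in seconds until the robot would cross
 * the nearest border given its current motion. */
double resistanceLevel(double timeToBorder) {
    if (timeToBorder <= 0.5)
        return 1.0;                           /* full resistance: robot stops */
    if (timeToBorder <= 3.0)                  /* scale up as the border nears */
        return (3.0 - timeToBorder) / (3.0 - 0.5);
    return 0.0;                               /* border further than 3 s away */
}
```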
Progress
In this section we will explain how the project is coming along.
Research (State of the Art)
The State of the Art can be found here.
Tech United
We came into contact with W.J.P. Kuijpers, the team leader of Tech United. A meeting was scheduled in which we discussed our plan and what we would like to do with the robot. He was quite enthusiastic about our intentions, and together we came up with the first step towards accomplishing our goal: programming a function, in the C language, that given its inputs (the robot's position as x and y coordinates, the robot's orientation as an angle, and the force exerted on the robot by the visually impaired user as a vector) returns the resistance the robot should exert. Additional information available to the robot consists of boundaries, represented as lines, which we hard code. These boundaries represent the sidewalk between which the robot should keep its user. We hard code these boundaries because recognizing, in real life, the boundaries of where the user is able to walk is extremely difficult.
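A minimal sketch of what such a function's interface could look like in C is shown below; the name computeResistance and the exact input/output representation are our own assumptions, not the actual Tech United interface:

```c
/* Hypothetical interface for the resistance function described above.
 * Inputs: the robot's position (x, y) and orientation theta in the local
 * axis frame, and the force the user exerts on the robot as a vector
 * (fx, fy). Output: the resistance the robot should exert, written into
 * (*rx, *ry). The hard-coded boundary lines are assumed to live in a
 * global table inside the implementation. */
void computeResistance(double x, double y, double theta,
                       double fx, double fy,
                       double *rx, double *ry);
```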
Once the robot is able to do this, we will extend its functionality so that it treats other Tech United robots as dynamic objects in the world and treats hard-coded obstacles as static objects in the world. The static objects are not detected with sensors but are hard coded, and can be visualized by any object of suitable dimensions. This means that we can use the other robots to simulate pedestrians, cyclists, cars, etc., and the hard-coded objects to represent trees, walls, lampposts, etc., which we visualize using objects placed at the corresponding positions in the environment.
Functionalities to add
The functionalities we would like to add are mostly explained in the Tech United section. However, to give a simple overview:
- Let the robot guide between hard coded boundaries
- React to dynamic objects in an environment (Represented by other Tech United Robots)
- React to static objects in an environment (Hard coded, but visualized using objects)
The code
The code is open sourced and available on GitHub under an Apache version 2.0 license.
The algorithm we created to determine robot resistance works as follows:
We define the border coordinates between which the robot has to guide the user. We store these coordinates as doubles in borderCoordinates. The points we define represent borderlines, each given by two (x, y) coordinates. By defining multiple borderlines, we can create our own environment. For each Borderline we also define the side of the line a user should walk on.
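A possible C representation of these data structures is sketched below; the identifiers mirror the names used in the text and the coordinate values are illustrative only:

```c
/* Sketch of the border data: x1, y1, x2, y2 per borderline (example values). */
static const double borderCoordinates[][4] = {
    { -2.0, -6.0, -2.0, 6.0 },   /* left edge of the simulated sidewalk  */
    {  2.0, -6.0,  2.0, 6.0 },   /* right edge of the simulated sidewalk */
};

/* A Borderline is a line segment plus the side of it the user should stay on. */
typedef enum { SIDE_LEFT, SIDE_RIGHT } Side;

typedef struct {
    double x1, y1;    /* first endpoint  */
    double x2, y2;    /* second endpoint */
    Side   goodSide;  /* side of the line the user is allowed to walk on */
} Borderline;
```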
For calculating the resistance, we take as inputs the coordinates of the robot in the area, the angle it is facing and the force exerted on the robot, expressed in x and y directions.
Then, for each Borderline we calculate the distance from the robot to the line, and from that we determine the Borderline that is closest to the robot. Then we check which action is desired: no resistance, increased resistance or full resistance.
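The distance computation can use the standard point-to-segment distance. A sketch, assuming the Borderline struct above, which also returns the fraction along the line of the closest point (used in the next step):

```c
#include <math.h>

/* Distance from the robot at (px, py) to a Borderline; the fraction along
 * the segment of the closest point is returned via *frac (sketch only). */
static double distanceToBorderline(double px, double py,
                                   const Borderline *b, double *frac) {
    double dx = b->x2 - b->x1, dy = b->y2 - b->y1;
    double len2 = dx * dx + dy * dy;
    double t = 0.0;
    if (len2 > 0.0)
        t = ((px - b->x1) * dx + (py - b->y1) * dy) / len2;
    if (t < 0.0) t = 0.0;          /* clamp to the segment */
    if (t > 1.0) t = 1.0;
    double cx = b->x1 + t * dx;    /* closest point on the borderline */
    double cy = b->y1 + t * dy;
    if (frac) *frac = t;
    return hypot(px - cx, py - cy);
}
```

Looping over all borderlines and keeping the smallest distance then yields the closest Borderline.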
The desired action is determined as follows:
We first determine the fraction of the line that the robot is closest to. From this we calculate the x and y coordinates of that point; this calculated point is the point on the borderline that the robot is closest to. Afterwards, the vector of the direction in which the robot is pushed is calculated. The length of this vector is reduced by the radius of the robot. With this direction vector and the vector of the force exerted on the robot, we determine the final angle in which the robot is pushed. Based on this angle we determine whether the robot is moving left or right. Considering the goodSide of the BorderLine, we now know whether the robot is moving towards or away from the BorderLine.
If the robot is moving away from the BorderLine, the robot will not give any resistance. If the robot is approaching the BorderLine, it will resist, and if the robot is beyond the BorderLine and still moving further, it will give full resistance.
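In code, this case distinction could be sketched as follows (the enum names are our own; towardsBorder follows from the push angle and the goodSide as described above):

```c
/* The three resistance regimes described above (sketch). */
typedef enum { NO_RESISTANCE, INCREASED_RESISTANCE, FULL_RESISTANCE } Action;

/* towardsBorder: nonzero if the user's push moves the robot towards the
 * closest Borderline; distance: distance from the robot (reduced by its
 * radius) to that line. */
static Action desiredAction(int towardsBorder, double distance) {
    if (!towardsBorder)  return NO_RESISTANCE;     /* moving away from the line   */
    if (distance <= 0.0) return FULL_RESISTANCE;   /* on or beyond the borderline */
    return INCREASED_RESISTANCE;                   /* approaching the borderline  */
}
```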
The only thing left is calculating the amount of resistance the robot should give when it is approaching a BorderLine. Obviously, when the robot is on or has passed the BorderLine, it gives full resistance. Otherwise, we calculate the acceleration of the robot towards the BorderLine, assuming that the robot starts from a stationary position. With this acceleration we calculate the amount of time it will take before the robot reaches the BorderLine. The less time it takes to reach the border, the more resistance the robot will give.
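With a stationary start, the time to reach the border follows from d = ½at² with a = F/m, so t = √(2d/a). A sketch of this step (the function name is our own; the mass comes from the robot specifications above):

```c
#include <math.h>

#define ROBOT_MASS_KG 36.3   /* mass from the robot specifications above */

/* Predicted time (s) until the robot reaches the borderline, assuming it
 * starts from rest and the user keeps pushing towards the line with a
 * constant force forceTowardsLine (N) over a distance of `distance` metres:
 * d = 0.5 * a * t^2 with a = F / m. */
static double timeToBorder(double forceTowardsLine, double distance) {
    if (forceTowardsLine <= 0.0)
        return INFINITY;                          /* never reaches the line        */
    double a = forceTowardsLine / ROBOT_MASS_KG;  /* acceleration towards the line */
    return sqrt(2.0 * distance / a);
}
```

This time can then be mapped to a resistance level with the 3 second and 0.5 second thresholds from the Specifications section, for instance as in the resistanceLevel sketch given there.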
Handle
The handle that is attached to the robot can be adjusted to the height of the user in two ways. The length can be adjusted between 32 cm and 47 cm, and the angle between the robot and the handle can be changed. The length is adjustable because two separate parts are attached to each other with two screws on each side. However, this is not very easy to use, because all four screws have to be removed before the length can be adjusted. When the handle is built for real users, a telescopic tube should be used so that adjustment is easier.
Testing
Tuesday 2018-03-13
During the first testing round we discovered an issue in our code. It concerned the movement prediction we used, which is based on the current movement of the robot and computes how long it takes for the robot to cross the boundary. A property of the robot is that it requires some force before it is able to move at all, which we had not taken into account. There is also an unknown issue somewhere, but the robot can create logs that record what it is doing and its internal state, which we are going to use to locate the errors in our software. To test the rest of the software, we temporarily changed the movement prediction into the distance from the boundary, so that we could at least test the procedures that compute the resistance the robot should provide. This turned out to work, so the remaining issue lies in the movement prediction. The next step is to fix this issue.
Monday 2018-03-19
During the second testing round we implemented the new code in which we fixed the issue with the movement prediction. We also built a handle that can be attached to the robot. We defined two 'circuits' in the code for the robot to guide someone through. We were told that the soccer robot was not allowed to rotate around its own axis, as this would disorient the robot: it used wheel movement to keep track of its location. However, after some changes to the code the robot now uses its camera to determine its location on the football field, which it was already able to do. This makes it easier to walk with the robot: we originally thought we needed to push it sideways to change direction, but due to the change in how the position is determined, we can now change the orientation of the robot. However, we now need to use the orientation angle in the computation that determines how much resistance the robot needs to provide. The only parameter we need to change is the direction of the force vector the user exerts on the robot, since this used to be expressed in the robot's local frame. This is something that we still need to do.
Furthermore, we now have a handle, so we can determine how the robot needs to take the position of the user into account. This is the second thing that needs to be implemented in the code.
Finally, the robot needs to detect the locations of dynamic obstacles in the environment, but this is something that the Tech United team is going to implement, after which they will provide this information as a parameter to the algorithm we are developing.
To summarize, we need to:
- Use the orientation angle in our algorithm to compute resistance, since the robot can now rotate around its axis (see the rotation sketch below).
- Include the user in the resistance computation, taking into account whether the robot is operated with the left or the right hand.
- Use the obstacle position coordinates, which Tech United is going to provide to us as parameters, to avoid the obstacles.
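For the first point, transforming the user's force from the robot's local frame to the field frame is a standard 2D rotation by the orientation angle; a sketch (assuming theta is the robot's orientation relative to the field's x-axis):

```c
#include <math.h>

/* Rotate the force vector measured in the robot's local frame into the
 * field's axis frame using the orientation angle theta (sketch). */
static void forceToFieldFrame(double theta, double fxLocal, double fyLocal,
                              double *fxField, double *fyField) {
    *fxField = cos(theta) * fxLocal - sin(theta) * fyLocal;
    *fyField = sin(theta) * fxLocal + cos(theta) * fyLocal;
}
```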
Extra features
Objectives
Autonomous
We want to accomplish that the robot can guide a user through an environment, possibly filled with either moving or static obstacles. This process should be fully autonomous, hence the robot has to make all decisions by itself, given the user's input. This goal is important since visually impaired people will not be able to guide the robot through its navigation process; if that were necessary, the complete purpose of the robot would be defeated. Therefore, the only actor involved in this process is the robot itself. The robot must be able to guide itself and the user over paved, flat terrain. This type of area is also the kind of terrain an average person walks over the most.
The scanning setup will consist of one camera; the robot will not have advanced radar equipment. Also, the robot will not be able to travel on paths with height differences, like stairways. The robot must be able to guide exactly one person autonomously through an environment matching our environmental constraints. This task is accomplished when a person can be guided over an area safely, without hitting obstacles on the path and without entering restricted areas. This goal can be achieved by implementing obstacle recognition software for the attached camera. Together with an avoidance algorithm, the robot will then be able to navigate around obstacles on its path. By using constraints such as no advanced radar equipment and a restricted area type, this goal is realizable.
This goal should be realized within 6 weeks. If the robot is not guiding autonomously at this point, there will be no time to make changes in the software and/or hardware of the robot.
Easy to use
The guiding robot must be easy to use for the user. This goal is important since the user's capability of seeing the robot is limited and any confusion regarding the functioning of the robot must be prevented at all times, since this could lead to dangerous situations for the user. This goal involves both the robot and its user.
The interface of the robot consists of an object the user can hold on to. At all times, the feedback of the robot must be clear to the user. Since the user is visually impaired, the feedback of the robot cannot consist of any visual elements. The feedback can consist of physical resistance when the user is about to perform an action that could lead to a dangerous situation. So if the user pushes the robot in a dangerous direction, the robot will resist; otherwise, the robot will assist. By keeping the user interface simple, it will be realistic to implement.
This easy-to-use interface must be defined in week 3 and implemented by week 7 at the latest. Once the type of user interface is defined, we can already search for methods to implement it in our robot.
Safety guaranteed
At all times, we want the user interacting with the robot to be safe. The purpose of the robot is to prevent accidents regarding their user. If the robot is not programmed in such a way that most risky situations are prevented, it would have no purpose. This goal involves both the user and the robot itself. In each situation the user must be in a position such that the robot can take this position into account regarding safety decisions. For example, the user can be standing to the left or right side of the robot, holding a handle so that the user will not be anywhere else than on that side of the robot. The position of the user must be known to the robot.
This goal can be measured by simulating several scenarios containing a dangerous situation and checking whether the robot prevents the user from coming to any harm. When the robot passes all scenarios, this goal is reached.
Planning
Task | Who | Duration | When |
---|---|---|---|
Discuss initial project | All | 1h | Week 1 |
Research | All | 10h | Week 1 |
Subject (wiki) | Anne | 2h | Week 1 |
Users (wiki) | Renée | 2h | Week 1 |
SMART objectives (wiki) | Thomas | 3h | Week 1 |
Approach (wiki) | Dylan | 1h | Week 1 |
Deliverables (wiki) | Dylan | 1h | Week 1 |
Milestones (wiki) | Jarno | 1h | Week 1 |
Planning (wiki) | Jarno | 1h | Week 1 |
Discuss week 1 tasks | All | 2h | Week 1 |
State of the art (wiki) | |||
- Perceiving the environment | Dylan | 2h | Week 1 |
- Obstacle avoidance | Renée | 2h | Week 1 |
- GPS navigation and voice recognition | Thomas | 2h | Week 1 |
- Robotic design | Jarno | 2h | Week 1 |
- Guiding dogs | Anne | 2h | Week 1 |
Meeting preparation | Thomas | 1h | Week 1 |
Meeting | All | 1h | Week 1 |
Determine specific deliverables | All | 4h | Week 2 |
Add details to planning | All | 3h | Week 2 |
Meeting preparation | Jarno | 1h | Week 2 |
Meeting | All | 1h | Week 2 |
Discussing scenario | Renée & Dylan | 1h | Week 3 |
Updating planning | Jarno | 1h | Week 3 |
Contacting relevant organizations/persons | Thomas | 1h | Week 3 |
Meeting Tech United | Thomas, Anne, Renée & Jarno | 1h | Week 3 |
Meeting preparation | Renée | 1h | Week 3 |
Meeting | All | 1h | Week 3 |
Adapt Wiki | Renée & Dylan | 2 h | Week 4 |
Planning of project and tasks | Anne | 2 h | Week 4 |
SoTa Voice recognition | Thomas | 2 h | Week 4 |
SoTa Turtle | Jarno | 2 h | Week 4 |
Meeting Tech United | Thomas, Renée, Jarno & Dylan | 1 h | Week 4 |
Meeting preparation | Anne | 1h | Week 4 |
Meeting | All | 1h | Week 4 |
Incorporate feedback on code | Jarno | 2 h | Week 5 |
Create and design a handle for the robot | Anne | 3 h | Week 5 |
Movement prediction and clear bugs in code | Jarno | 3 h | Week 5 |
Adjust objectives | Thomas | 1 h | Week 5 |
Pseudocode | Thomas & Renée | 2 h | Week 5 |
Individualize code and Wiki | Dylan | 2 h | Week 5 |
Milestones & Deliverables | Thomas & Renée | 1 h | Week 5 |
Arrange cones ('pionnen') | Thomas & Renée | | Week 5 |
Start preparing the presentation | Renée & Anne | 2 h | Week 5 |
Meeting with Tech United | All | 1 h | Week 5 |
Meeting preparation | Dylan | 1h | Week 5 |
Meeting | All | 1h | Week 5 |
Incorporate feedback on handle | Anne | 1 h | Week 6 |
Incorporate feedback on code | | 2 h | Week 6 |
Prepare presentation | | | Week 6 |
Adjust Wiki | | | Week 6 |
Meeting preparation | Thomas | 1h | Week 6 |
Meeting | All | 1h | Week 6 |
Presentation preparation | All | 20h | Week 7 |
Presentation | All | 1h | Week 7 |
Milestones
During this project, the following milestones have been determined. They may be expanded once we have a better understanding of how we are going to tackle the project. Especially the decision whether to use an existing robot or to create one ourselves will heavily influence these milestones and their deadlines. Note that the planning also lacks details, which will be filled in in week 2.
Milestone | Deadline |
---|---|
Research is complete | Week 1 |
Hardware is available (either full robot or necessary parts) | Week 3 |
Robot can give haptic feedback | Week 3 |
Robot can stay between predefined borderlines | Week 4 |
Robot can detect and react to predefined obstacles | Week 6 |
Robot uses voice recognition to respond to predefined commands | Week 7 |
Deliverables
Prototype
References
- ↑ Cho, K. B., & Lee, B. H. (2012). Intelligent lead: A novel HRI sensor for guide robots. Sensors (Switzerland), 12(6), 8301–8318. https://doi.org/10.3390/s120608301
- ↑ Bray, E. E., Sammel, M. D., Seyfarth, R. M., Serpell, J. A., & Cheney, D. L. (2017). Temperament and problem solving in a population of adolescent guide dogs. Animal Cognition, 20(5), 923–939. https://doi.org/10.1007/s10071-017-1112-8
- ↑ Tech United, The Turtle. http://www.techunited.nl/en/turtle
- ↑ Alaerds, R. (2010). Generation 2011. Retrieved March 7, from http://www.techunited.nl/wiki/index.php?title=Generation_2011
- ↑ Mijnhulphond. (n.d.). Flexibele beugel – vernieuwd! Retrieved March 6, 2018, from https://mijnhulphond.nl/product/beugel-voor-tuig/?v=796834e7a283