
Guiding Robot

Group members:

  • Anne Kolmans
  • Dylan ter Veen
  • Jarno Brils
  • Renée van Hijfte
  • Thomas Wiepking


Coaching Questions Group 12

Introduction

Globally, there are about 285 million visually impaired people, of whom 39 million are completely blind[1]. There is a shortage of guide dogs to support all visually impaired persons: in Korea alone, there are only about 65 guide dogs in total for roughly 45,000 visually impaired people. Besides the shortage, there are also limitations to their use. Some people are allergic to certain dog breeds or have a social aversion to dogs, which rules out being guided by one. Furthermore, guide dogs impose extra tasks on the user: they need to be fed, walked, and so on. Lastly, training guide dogs is very difficult, and only 70% of the trained dogs eventually qualify to guide the visually impaired[2]. Due to the shortage of guide dogs and of support tools for the visually impaired, there is a need for innovative ways to support them. Since many different robots are already available, we propose to convert an existing robot, in this case a soccer robot, into a guiding robot.

Plan

We intend to convert a soccer robot, namely the Tech United robot[3], into a prototype that can guide a visually impaired person in a restricted environment. Our plan to accomplish this:

  • Research into:
    • Guide dogs and how they perform their tasks
    • Different ways of environment perception and obstacle avoidance
    • The Tech United robot
  • Get into contact with Tech United
  • Ask for the specifications of the Tech United robot
  • Determine which existing capabilities of the robot we can use
  • Determine what functionality needs to be added
  • Add the necessary basic features
  • Test the robot with the additional basic features
  • Determine possibilities for extra features (for example voice recognition) and possibly incorporate them

Progress

In this section we will explain how the project is coming along.

Research (State of the Art)

The State of the Art can be found here.

Tech United

We came into contact with W.J.P. Kuijpers, the team leader of Tech United. In a scheduled meeting we discussed our plan and what we would like to do with the robot. He was quite enthusiastic about our intentions, and together we came up with the first step towards accomplishing our goal: to program a function, in the C language, that given its inputs (the robot's position in x and y coordinates, the robot's orientation as an angle, and the force exerted on the robot by the visually impaired user as a vector) returns the resistance the robot should exert. In addition, the robot knows its boundaries, represented as lines that we hard code by their start and end points. These boundaries represent the sidewalk between which the robot should keep its user. We hard code these boundaries because recognizing, in real life, the boundaries of where the user is able to walk is extremely difficult.
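The sketch below shows, in C, what such a resistance function could look like. The interface, all names, and the 1 m threshold are our own assumptions for illustration; the actual Tech United software will differ.

/* A minimal sketch of the resistance function described above.
 * Names, interface and thresholds are assumptions of ours; the real
 * Tech United code base differs. Units assumed: metres and newtons. */
#include <math.h>

typedef struct { double x, y; } Vec2;

/* A hard-coded boundary line, stored by its start and end point. */
typedef struct { Vec2 start, end; } Boundary;

/* Shortest distance from point p to the segment from b.start to b.end
 * (assumes start and end are distinct points). */
static double dist_to_boundary(Vec2 p, Boundary b) {
    Vec2 d = { b.end.x - b.start.x, b.end.y - b.start.y };
    Vec2 w = { p.x - b.start.x, p.y - b.start.y };
    double t = (w.x * d.x + w.y * d.y) / (d.x * d.x + d.y * d.y);
    if (t < 0.0) t = 0.0;   /* clamp to the segment ends */
    if (t > 1.0) t = 1.0;
    double rx = w.x - t * d.x, ry = w.y - t * d.y;
    return sqrt(rx * rx + ry * ry);
}

/* Inputs: robot position (x, y), orientation theta and the user's force
 * (fx, fy). Output: a resistance force, opposing the user's push, that
 * grows as the robot approaches the nearer of the two sidewalk lines. */
Vec2 handle_resistance(double x, double y, double theta,
                       double fx, double fy, const Boundary lines[2]) {
    (void)theta;   /* orientation is not needed in this simple sketch */
    Vec2 p = { x, y };
    double d0 = dist_to_boundary(p, lines[0]);
    double d1 = dist_to_boundary(p, lines[1]);
    double d  = d0 < d1 ? d0 : d1;           /* nearest line */
    const double safe = 1.0;                 /* assumed: free beyond 1 m */
    double gain = (d >= safe) ? 0.0 : 1.0 - d / safe;
    Vec2 r = { -gain * fx, -gain * fy };     /* oppose the user's push */
    return r;
}

Opposing a fraction of the user's own push means the resistance grows smoothly with proximity, matching the distance-based feedback described in the Approach section below.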

Once the robot is able to do this, we will extend its functionality so that it sees other Tech United robots as dynamic objects in the world and footballs as static objects. This means that we can use the robots to simulate other pedestrians, cyclists, cars, etc., and footballs to simulate trees, walls, lampposts, etc.

Once we have implemented this, we will use the entire soccer field to simulate a supermarket through which the robot will guide its user.

Functionalities to add

The functionalities we would like to add are mostly explained in the Tech United section. However, to give a simple overview:

  • Let the robot guide between hard-coded boundaries
  • React to dynamic objects in an environment (represented by other Tech United robots)
  • React to static objects in an environment (represented by footballs)

Testing

Extra features

Specifications

Here we will provide specifications of different elements of our project.


Environment

  • Soccer field at TU/e
  • Hard-coded environment boundaries (simulating a sidewalk)


Robot [4]

  • Floor area of 500 mm x 500 mm
  • Height 783 mm
  • Mass 36.3 kg
  • Knows location in environment (x and y coordinates in local axis frame)
  • Knows orientation in environment (in same local axis frame by means of an angle theta)
  • Knows force that is being exerted on robot by user (using a force vector)
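For illustration, the last three items map naturally onto a small C struct; the field names are our own assumptions, not those of the actual Tech United software.

/* The robot's self-knowledge from the list above. Field names are our
 * own assumptions; the real Tech United software differs. */
typedef struct {
    double x, y;     /* position in the local axis frame (m) */
    double theta;    /* orientation in the same frame (rad) */
    double fx, fy;   /* force exerted on the robot by the user (N) */
} RobotState;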


Handle

  • Approximately 45 cm long on a dog harness [5]
  • Attached under an angle of approximately 45° [5]
  • Attachment height on the robot: 700 mm
  • Resulting length for the robot's handle: 40 cm


The handle for bigger dogs has to be about 45 cm long and the angle about 45° [6]. A guide dog has an average height of about 60 cm, while the soccer robot used in this project has a height of 80 cm [7]. We will not attach the handle at the highest point; for calculating the required length we assume an attachment height of about 70 cm. This implies that we need a handle of about 40 cm. The haptic feedback will be delivered through resistance that the robot gives back to the person via the handle. The robot will start to give resistance when it would reach the border of the specified area within 3 seconds; when it would reach the border within 0.5 seconds, it stops completely, providing strong resistance that tells the user the border has been reached.
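A minimal C sketch of this timing rule follows. The 3 s and 0.5 s thresholds come from the text above; the linear ramp in between and all names are our own assumptions.

/* d: distance to the nearest line (m); v: speed toward that line (m/s).
 * Returns a factor in [0, 1]: 0 means no resistance, 1 means full stop. */
double resistance_factor(double d, double v) {
    if (v <= 0.0) return 0.0;         /* moving away from the line */
    double t = d / v;                 /* estimated time until the border */
    if (t >= 3.0) return 0.0;         /* plenty of time: no feedback yet */
    if (t <= 0.5) return 1.0;         /* about to cross: stop the robot */
    return (3.0 - t) / (3.0 - 0.5);   /* ramp up as time runs out */
}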

Objectives

Autonomous

We want to accomplish that the robot can navigate itself through an environment, possibly filled with moving or static obstacles. This process should be fully autonomous, hence without any input from the user during navigation. This goal is important because visually impaired people cannot guide the robot through its navigation process; if that were necessary, the whole purpose of the robot would be defeated. Therefore, the only actor involved in this process is the robot itself. The robot must be able to navigate itself and the user over paved area, which is also the kind of terrain an average person walks over the most.

The scanning setup will consist of one or more cameras; the robot will not have advanced radar equipment. Also, the robot will not be able to travel on paths with height differences, like stairways. The robot must be able to guide exactly one person autonomously through an environment matching our environmental constraints. The task is accomplished when a person gets from point A to point B safely, i.e. without hitting obstacles along the way. This can be achieved by implementing obstacle recognition software for the attached cameras; together with an avoidance algorithm, the robot will be able to navigate around obstacles on its path. With constraints such as no advanced radar equipment and a restricted area type, this goal is realizable.
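As an illustration of such an avoidance algorithm, the C sketch below steers the desired heading away from a detected obstacle in proportion to how close it is. This is a toy example under assumed thresholds, not the algorithm actually running on the robot.

#include <math.h>

/* heading: desired heading; bearing: direction to the obstacle, both in
 * radians; d: distance to the obstacle in metres. */
double avoid(double heading, double bearing, double d) {
    const double influence = 2.0;    /* assumed: only react within 2 m */
    const double max_turn  = 0.7854; /* at most ~45 degrees of correction */
    if (d >= influence) return heading;
    /* signed angle from the obstacle bearing to the heading, in [-pi, pi] */
    double diff = atan2(sin(heading - bearing), cos(heading - bearing));
    double away = (diff >= 0.0) ? 1.0 : -1.0;       /* turn further away */
    double strength = (influence - d) / influence;  /* 0 far .. 1 touching */
    return heading + away * strength * max_turn;
}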

This goal should be realized within 6 weeks. If the robot is not completely autonomous by then, there will be no time left to make changes in the software and/or hardware of the robot.

Easy to use

The guiding robot must be easy to use. This goal is important since the user's ability to see the robot is limited, and any confusion about the functioning of the robot must be prevented at all times, since confusion could lead to dangerous situations. This goal involves both the robot and its user.

The interface of the robot consists of an object the user can hold on to. At all times, the feedback of the robot must be clear to the user. Since the user is visually impaired, the feedback cannot consist of visual elements. Instead, it can consist of resistance when the user is about to perform an action that could lead to a dangerous situation: if the user pushes the robot in a dangerous direction, the robot will resist; otherwise, it will assist. At all times, it must be clear to the user what the robot is doing and what its status is (e.g. battery level or any kind of error). By keeping the user interface simple, it will be realistic to implement.
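This resist-or-assist rule could look like the following C sketch, reusing the Vec2 type from the earlier sketch. The decomposition of the user's push and the names are our own assumptions; toward_danger is taken to be a unit vector pointing at the nearest hazard.

/* Oppose only the component of the user's push that points toward danger;
 * leave the rest untouched so the robot can assist the remaining motion. */
Vec2 feedback_force(Vec2 user_force, Vec2 toward_danger, double gain) {
    double along = user_force.x * toward_danger.x
                 + user_force.y * toward_danger.y;
    Vec2 r = { 0.0, 0.0 };
    if (along > 0.0) {                         /* pushing toward danger */
        r.x = -gain * along * toward_danger.x;
        r.y = -gain * along * toward_danger.y;
    }
    return r;
}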

This easy-to-use user interface must be defined in week 3 and implemented by week 7. Once the type of user interface is defined, we can already search for methods to implement it on our robot.

Safety guaranteed

At all times, we want the user interacting with the robot to be safe. The purpose of the robot is to prevent accidents involving its user; if the robot were not programmed so that most risky situations are prevented, it would have no purpose. This goal involves both the user and the robot itself. In each situation, the user must be in a position that the robot can take into account in its safety decisions. For example, the user can stand behind the robot holding a handle, so that the user is never anywhere other than behind the robot. Also, the robot must stop and give a signal when the user releases the handle attached to the robot, so that dangerous situations can be prevented.

This goal can be measured by simulating several scenarios containing a dangerous situation and checking whether the robot prevents the user from coming to any harm. When the robot passes all scenarios, the goal is reached.

Of course, not all possibly dangerous situations can be modeled and tested. We therefore limit this goal to the prevention of a list of dangerous situations, created by us, that covers the most common scenarios in our restricted area. This limitation is required to make the goal realizable within this course. The list of dangerous scenarios, and how to tackle them, must be finished in week 3 and implemented by week 7.

Users

The robot that we want to develop, and that we describe in this wiki, is a robot that will replace the guide dog. It will be used by the visually impaired in the way they now use guide dogs: to walk inside and outside, and to find their way to a store, a bus stop, etc. The robot will therefore not only have an impact on the visually impaired person directly using it, but also on the people in that person's direct surroundings. For example, when a visually impaired person using the guiding robot wants to cross a street, the drivers of approaching cars must be able to rely on the robot not to cross when it is unsafe. This gives us the visually impaired as primary users, the people in the surroundings of the guided person as secondary users, and the developers as tertiary users.

The users and their different needs regarding the guiding robot:

Visually impaired (primary users)

  • Making it safe to walk outside
  • Making it easier to take a walk outside
  • Navigating them to different destinations

The surroundings (secondary users)

  • The guiding robot detects cars, bicycles and pedestrians
  • The guiding robot walks around obstacles
  • The guiding robot walks only where it is allowed and safe to walk

Developers (tertiary users)

  • The guiding robot is better and more reliable than guide dogs
  • The software is easy to adapt
  • The guiding robot requires as little maintenance as possible

State of the Art

Group 12: State of the Art

Approach

  • Research
  • Decide which robot to use for the project: we will use the TURTLE soccer robot of Tech United.
  • First, we will encode two lines in the robot, between which the robot has to stay. We will write code in C that determines the resistance the robot gives to the user via the handle. The robot should give more resistance the closer it gets to the lines, because that is the feedback telling the user to go the other way (see the sketch in the Tech United section above). The two lines simulate a sidewalk, and the robot should be able to stay between them. We program the coordinates of the lines because, at this moment, the robot only knows the football field and cannot scan the environment for new lines; it is infeasible for us to program that at such short notice. Once there is a way of scanning the environment that works with the robot's state-of-the-art software, it should be implemented.
  • Once the robot is able to stay between the two lines, we will use other soccer robots and footballs to simulate obstacles and passing persons. The soccer robots will drive from one side of the football field to the other, crossing the robot's path, to simulate different persons walking in front of the robot and blocking its way. We use the soccer robots because our robot is already familiar with them and can already recognize and avoid them. For static obstacles in its way we use footballs, because the robot can also recognize these on the field; they can represent, for instance, trash cans. A sketch of how such obstacles could be represented follows after this list.
  • Then we would like to enlarge the robot's environment to the whole football field. A blindfolded person lets the robot guide him or her to any part of the field, for instance crossing it diagonally. While doing so, we would like, for instance, three different robots to simulate other pedestrians, and we will use balls to simulate walls (a few balls next to each other) and obstacles, all of which the robot has to avoid. We would like to see how the robot performs in such a restricted but still complex environment.
  • The next step is to add some extra functions that a guide dog does not have. We would like to incorporate some kind of voice recognition to provide another feedback channel, with which the user could give the robot instructions.
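The sketch announced above: one way the simulated obstacles could be represented in C, reusing the Vec2 type from the Tech United section. It assumes the robot's existing software already reports detected robots and balls; all names and thresholds are our own.

/* A detected obstacle: another soccer robot (dynamic) or a football (static). */
typedef struct {
    Vec2 pos;        /* position on the field (m) */
    Vec2 vel;        /* velocity (m/s); zero for footballs */
    int  is_dynamic; /* 1 for other soccer robots, 0 for footballs */
} Obstacle;

/* Does the obstacle come within 'clearance' metres of our planned position?
 * Dynamic obstacles are extrapolated 'lookahead_s' seconds along their
 * velocity before the check. */
static int blocks_path(Vec2 planned, Obstacle o,
                       double clearance, double lookahead_s) {
    Vec2 p = o.pos;
    if (o.is_dynamic) {
        p.x += o.vel.x * lookahead_s;   /* predicted position */
        p.y += o.vel.y * lookahead_s;
    }
    double dx = planned.x - p.x, dy = planned.y - p.y;
    return (dx * dx + dy * dy) < (clearance * clearance);
}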

Furthermore, while doing the intermediate experiments we will need to define some more specifications for the robot's use and its interaction with the user. For instance, we will examine whether it is better to let the user walk behind the robot, as with a rollator, or more to the side of the robot, as with a guide dog. It is important to check not only what feels better for the user, and thus makes the interaction more pleasant, but also what is feasible with the current software. We were given software that detects the force exerted on the robot, and the robot also senses from which direction that force comes.


The code

The code is open source and available on GitHub under the Apache 2.0 license.

Planning

Task Who Duration When
Discuss initial project All 1h Week 1
Research All 10h Week 1
Subject (wiki) Anne 2h Week 1
Users (wiki) Renée 2h Week 1
SMART objectives (wiki) Thomas 3h Week 1
Approach (wiki) Dylan 1h Week 1
Deliverables (wiki) Dylan 1h Week 1
Milestones (wiki) Jarno 1h Week 1
Planning (wiki) Jarno 1h Week 1
Discuss week 1 tasks All 2h Week 1
State of the art (wiki)
- Perceiving the environment Dylan 2h Week 1
- Obstacle avoidance Renée 2h Week 1
- GPS navigation and voice recognition Thomas 2h Week 1
- Robotic design Jarno 2h Week 1
- Guiding dogs Anne 2h Week 1
Meeting preparation Thomas 1h Week 1
Meeting All 1h Week 1
Determine specific deliverables All 4h Week 2
Add details to planning All 3h Week 2
Meeting preparation Jarno 1h Week 2
Meeting All 1h Week 2
Discussing scenario Renée & Dylan 1h Week 3
Updating planning Jarno 1h Week 3
Contacting relevant organizations/persons Thomas 1h Week 3
Meeting Tech United Thomas, Anne, Renée & Jarno 1h Week 3
Meeting preparation Renée 1h Week 3
Meeting All 1h Week 3
Meeting preparation Anne 1h Week 4
Meeting All 1h Week 4
Meeting preparation Dylan 1h Week 5
Meeting All 1h Week 5
Meeting preparation Thomas 1h Week 6
Meeting All 1h Week 6
Presentation preparation All 20h Week 7
Presentation All 1h Week 7

Milestones

During this project, the following milestones have been determined. They may be expanded once we have a better understanding of how we are going to tackle the project. Especially the decision whether to use an existing robot or to build one ourselves will heavily influence these milestones and their deadlines. Note that the planning also lacks details, which will be filled in during week 2.

Milestone Deadline
Research is complete Week 1
Hardware is available (either full robot or necessary parts) Week 3
Robot can scan the environment Week 5
Robot can keep the user in mind Week 6
Robot is fully autonomous Week 6
Robot can guide the user in a restricted area Week 8


Deliverables

Prototype

References

  1. Cho, K. B., & Lee, B. H. (2012). Intelligent lead: A novel HRI sensor for guide robots. Sensors (Switzerland), 12(6), 8301–8318. https://doi.org/10.3390/s120608301
  2. Bray, E. E., Sammel, M. D., Seyfarth, R. M., Serpell, J. A., & Cheney, D. L. (2017). Temperament and problem solving in a population of adolescent guide dogs. Animal Cognition, 20(5), 923–939. https://doi.org/10.1007/s10071-017-1112-8
  3. Tech United. (n.d.). The Turtle. http://www.techunited.nl/en/turtle
  4. Alaerds, R. (2010). Generation 2011. Retrieved March 7, 2018, from http://www.techunited.nl/wiki/index.php?title=Generation_2011
  5. Mijnhulphond. (n.d.). Flexibele beugel – vernieuwd! Retrieved March 6, 2018, from https://mijnhulphond.nl/product/beugel-voor-tuig/?v=796834e7a283
  6. Mijnhulphond. (n.d.). Flexibele beugel – vernieuwd! Retrieved March 6, 2018, from https://mijnhulphond.nl/product/beugel-voor-tuig/?v=796834e7a283
  7. Tech United. (n.d.). Voetbalrobots. Retrieved March 6, 2018, from http://www.techunited.nl/nl/voetbalrobots