PRE2017 1 Groep3

From Control Systems Technology Group
Revision as of 01:05, 31 October 2017 by S157797
Members of group 3:

  • Karlijn van Rijen (0956798)
  • Gijs Derks (0940505)
  • Tjacco Koskamp (0905569)
  • Luka Smeets (0934530)
  • Jeroen Hagman (0917201)


Robotics is a rapidly evolving technology that could bring many improvements to the modern world. The challenge, however, is to invest in the kind of robotics that will make the investment worthwhile, instead of in research that will never pay itself back. This report investigates a robotic technology aimed at solving the initial problem statement. This chapter describes the chosen problem, the objective of our project and the approach through which the solution will take shape.

Problem Definition & Approach

When you travel by train regularly, you may have noticed that boarding and exiting the train is rather slow for people in wheelchairs. Before they can get on or off, train personnel must first fetch a ramp to let them board or exit, which can even delay the train. Trains in the Netherlands already tend to run late, so every obstacle that interferes with the schedule should be addressed; boarding wheelchair users is one of these obstacles, as it tends to cause delays. The perspective of the disabled person is just as important. For them, the feeling of being constantly dependent on others is the worst part of living with a handicap, and this dependence raises the threshold to travel by train. Disabled people who stop using the train lose part of their long-distance mobility, which may affect their social well-being (Oishi, 2010); it can contribute to loneliness or depression, as they become unable to sustain distant relationships (Steptoe et al., 2013). In a survey conducted by the SP, 154 handicapped persons shared their complaints, which Laurens Ivens and Agnes Kant translated into thirteen recommendations. Among these, they state that the height difference between train and platform should be reduced or bridged more easily, that there should be a travel-tracking system for the handicapped, and that accessibility must be increased (Ivens and Kant, 2004). This project will research improvements for wheelchair users travelling by train in the Netherlands.

This project will first determine the problems wheelchair-bound people face when travelling by train. Then, different stakeholders and possible solutions are examined. After that, questionnaires are held with stakeholders to determine their needs. Finally, a final design for our helping robot is made, and a prototype will demonstrate some of the working principles that need to be proven in order to give the final design credibility.

Team motto:

Veni, vidi, wheelie!


To get a better view of the design criteria the design should comply with, the USE aspect of the problem statement and the objective will be considered in this section. Moreover, the topic will be discussed from the perspective of several other stakeholders, such as society and train companies (i.e. the Nederlandse Spoorwegen).

Who are the users?

The first step is determining who the users are. The main users are, of course, the disabled people who travel by train and will actually use the robot. This project focuses on the user needs by means of a questionnaire. To avoid a technology push, it is very important to get a thorough view of the users' perspective on the current situation and of their needs. The questionnaire focuses on the way they are helped right now and on the advantages and disadvantages of the current state. It also covers their initial feelings about being helped by a robot in the future, and in what way they would prefer to be helped.

What is the NS' perspective?

The NS is a very important stakeholder in this project, as they are the ones who will eventually need to pay for the research and manufacturing of the robot. Moreover, the NS staff is also the current "user": in the current situation, they help disabled people board the train by means of a ramp. It is therefore very important to get a better view of what the operating NS staff thinks of our idea. For this purpose, a questionnaire for the NS personnel was made. In this questionnaire, the train personnel are asked for their view on the current assistance and on what could or should be improved. Their view on the idea of a robot helping disabled people is also important.

The questionnaires

Summarizing the above, there are multiple reasons for which the questionnaires are designed:

  • To gain insight into the current situation with regard to traveling by train when being disabled. How does the current system work? How do people experience the current system?
  • In order to fine-tune our RPCs for the robot, the aim is to gain insight into the wants and needs of the actual users: NS staff and disabled people. What do they believe is necessary for the system to work efficiently? What do they miss in the current situation?
  • To improve the system for all its users, not only disabled people. We are therefore also curious how the operating NS staff experiences the system. What can be improved to increase their work efficiency and pleasure?

To find participants for this study, several steps were taken: a call was posted on Facebook for the target groups, personal networks were contacted, and NS staff at Eindhoven station were approached in person. The questionnaires could be filled in online or on paper. We aimed for 5 participants in the disabled target group and 2 to 3 among the NS staff. These numbers are based on what is reasonable for the scope of this project; due to time constraints it is not an option to find large groups of participants. That it is hard to find wheelchair-bound people who actually use the train was further confirmed by the low response rate to our call for questionnaires. The questionnaires were written and answered in Dutch.

Link to paper questionnaire for the disabled: Media: Enquete mindervaliden.pdf

URL to online questionnaire for the disabled:

Link to paper questionnaire for NS staff: Media: enquete staff.pdf

URL to online questionnaire for NS staff:

The questionnaire for the disabled

In this section all aspects of the designed questionnaire will be discussed. The questionnaire was designed to get insight into several aspects of the project:

The first two questions explore the current situation:

  • 1. How often do you travel by train?
  • 2. How much time does it take you to plan your train journey?

After this, several questions test the subjective experience of the current situation.

  • 3. How do you experience the current NS travel assistance service?
  • 4. What do you think could be improved in the current situation?
  • 5. Are you capable of getting on to the ramp without help?
  • 6. Do you experience difficulties in planning your train journey with regard to the NS travel assistance service?
  • 7. How much time do you generally need when changing trains?
  • 8. How do you experience travelling by train, from 1 to 10, with 1 not pleasant and 10 very pleasant?
  • 9. Can you clarify your answer for question 8?

After this set of questions, the new concept is introduced and tested:

  • 10. For this project, we aim to develop an automated system that functions as a ramp. By pressing a button on the platform, the robot will drive towards the train entrance and fold out to form a ramp. What would you think of being aided with entering the train by a robot or automated system?
  • 11. What are important aspects of good service for you?
  • 12. What type of help with boarding the train would you appreciate most? (For example: ramp, lift, etc.)

In the final question we leave space for the participant to write down remarks or tips:

  • 13. Do you have any tips or remarks with regard to the current or new system?

The questionnaire for NS staff

This questionnaire focuses more on the specific experience of NS staff working with the system and how it could be improved in their view.

The first questions explore the current situation:

  • 1. What is your specific function at the NS?
  • 2. How often do you help disabled people boarding or leaving the train? (1x per week, 1x per month, 1x per year, etc.)
  • 3. In what way do you help disabled people boarding and leaving the train?

The next set of questions explores the subjective experience with the current system.

  • 4. What are the advantages of the current system?
  • 5. What are the disadvantages of the current system?
  • 6. How would you rate the system with regard to NS travel assistance from 1 to 10, with 1 very negative and 10 very positive?
  • 7. What could be improved in the current situation, to make your working experience more pleasant?

The next questions introduce the new concept.

  • 8. For this project, we aim to develop an automated system that functions as a ramp. By pressing a button on the platform, the robot will drive towards the train entrance and fold out to form a ramp. What is your first reaction to a system like this?
  • 9. How do you experience the current time needed to help a disabled person board or leave the train? Too long/too short?

The final question leaves room for the participants to write down any thoughts on the topic:

  • 10. Do you have any tips or remarks with regard to the current or new system?

Thematic analysis of the results of the questionnaires

In the thematic analysis, we gather all codes from both questionnaires, and by combining the codes we will create themes.
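As a minimal illustration of this step, the grouping of codes into themes can be sketched as a lookup table. The code names and theme assignments below are invented for illustration and are not our actual coding scheme:

```python
# Hypothetical codes and theme assignments, purely illustrative.
CODE_TO_THEME = {
    "faster transfers": "Improvements",
    "bridge operability": "Improvements",
    "short reporting time": "Advantages",
    "fails during disruptions": "Disadvantages",
}

def build_themes(codes):
    """Group a list of codes into themes via the lookup table."""
    themes = {}
    for code in codes:
        theme = CODE_TO_THEME.get(code, "Uncategorised")
        themes.setdefault(theme, []).append(code)
    return themes
```

In practice the mapping itself is the product of the analysis: codes that recur across questionnaires are compared and merged into themes by hand.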

An overview of the codes used can be found here: Media: Overview of codes used.pdf

An overview of the filled-in questionnaires and their coding can be found here:

Questionnaires for disabled people: Media: Enquete ingevuld disabled.pdf

Questionnaires for NS staff: Media: Enquete ingevuld staff.pdf


As described above, participants for this study were found by employing social media, contacting our personal networks and approaching people at train stations.

After the questionnaires were filled in, they were coded. These codes were subsequently evaluated to create themes.



Combining the codes from the questionnaires related to improvements for the current system yielded the following: first of all, one of the disabled respondents wanted it to be possible to change trains faster. Moreover, better accessibility of all stations, at every time of the day, is desirable. The staff wanted better operability of the current bridge and better communication with the taxi company.

(In the current situation, NS travel assistance is only possible at the larger, manned stations. If a disabled person wants to travel to a smaller station, he or she has to contact an NS-affiliated taxi company. These taxi drivers have access to the bridge and are appointed to help disabled people get off the train. Unfortunately, especially in case of disruptions in the train schedule, clear communication with the taxi company is lacking.)

Current situation

This theme encompasses all codes related to the current situation. The questionnaires showed that multiple parties are involved: the disabled person, NS service staff, NS conductors and, as mentioned above, appointed taxi companies. From the codes we established that the service staff escorts the disabled person at the bigger stations, taxi companies do so at the smaller stations, and conductors are mainly involved in maintaining safety at all times. The conductor mentioned helping a disabled person about 3 times a day, whereas the service staff helps over 20 times a day.

Advantages and disadvantages of current system

In this theme we merge all codes related to advantages and disadvantages of the current system. Staff reported the following advantages: the time needed for reporting is short, the time needed for helping is short, and the location of the disabled person in the train is clearly communicated. A disabled respondent reported arriving on time at the desired station as an advantage.

The disabled respondents reported having little room in the train as a disadvantage. Staff reported that the system fails in case of train disruptions and that it takes too much time.

New concept

By aggregating all codes, this theme gives a view of the opinions on the new concept and the important aspects of the system. Disabled people reacted positively, whereas the NS staff was generally negative towards the concept. The staff made clear they fear losing their jobs to the robot, and they considered it impossible for an automated system to work because of crowdedness. The disabled people mentioned an extending shelf and a lift as possible ideas. Aspects the system should have, according to them: it should be fast, it should give the disabled person influence on the situation, and it should be suitable for different users with different disabilities. Of course, the above could be combined with the improvements for the current system.


The above themes can be aggregated and examined to discover new relations between themes. The main result of the questionnaires is a better view on the current situation, and the identification of user requirements for the new concept.

Current situation

In the theme Current Situation we have established how exactly the current situation works. New information for us is the involvement of taxi companies.

User requirements

Multiple themes above can be used to identify user requirements. The new concept should build on the advantages of the current system (at the least, it should not take away those advantages), it should avoid the disadvantages of the current system, it should incorporate the improvements mentioned, and it should take into account the aspects mentioned under New concept. Combining this, we can make a list of all user requirements:

  • The new concept should make changing trains as a disabled person faster
  • The new concept should grant accessibility to all train stations and all platforms at all times
  • The operability of the new system should be good for staff
  • The system should communicate with appointed taxi companies, depending on what their role is (in the ideal situation taxi drivers no longer need to operate the bridge)
  • The new system should not have a longer reporting time than in the current situation
  • The new system should not take longer in helping the disabled person than in the current situation
  • It should be clear where in the train the disabled person is located
  • The new system should have enough room for the disabled person to sit within the train
  • The system should work at all times, also in case of train disruptions
  • The system should work even in very crowded situations
  • The new system should work as fast as possible
  • The new system should grant the disabled person influence on the situation
  • The system should be suitable for all types of disabled people with different disabilities

Other stakeholders

It is important to consider all other stakeholders in this project while designing. Non-wheelchair-bound train passengers are also stakeholders; they should not be disadvantaged by the new wheelchair assistant. It is therefore undesirable that the new robotic assistant causes any (additional) train delay. Moreover, there must still be room for other passengers to board and stay on the train, and their boarding and exiting needs to be taken into account. This should be considered to see whether the new design has an impact on the other passengers.

The government is also a stakeholder, as it is the institution responsible for making society as accessible as possible for the handicapped. It may therefore be involved in partial funding of the project. More specifically, the Ministry of Infrastructure and Environment is involved as a stakeholder.

Conclusions of the questionnaires

From the questionnaires we identified a specific user need: a respondent mentioned wanting more influence on the process. Since we are designing in an iterative manner, our concept was updated once the questionnaire results were known: it was decided to incorporate the concept of shared control into our autonomous robot. To close the loop, in an ideal situation we would test this interpretation of the user need with the user. However, due to time constraints and a lack of interest from disabled people in answering questions on the topic, we were unable to check this interpretation with them. A literature study can, however, be performed to find out more about disabled people, shared control and a lack of influence. This can be found below in this wiki.

Jobs of the train staff

As mentioned in the results of the questionnaire for the NS staff, the NS service staff is afraid of losing their jobs once helping disabled people is automated. Their concern is justified: when the robot functions as intended, no staff is needed to help the disabled enter and exit the train. It is important to consider the consequences of introducing a robotic technology and the impact it might have on people's jobs. NS assistance is only available at 100 of the 400 train stations in the Netherlands; more information on this can be found in the chapter on the current situation. When the robot is ready to be implemented, it could be beneficial to start at stations where NS assistance is not yet available. That way, disabled people can travel to more locations while the current NS service staff can still operate at their current stations. If this implementation is successful, it can be extended to the other, bigger, stations as well. The staff currently working at those stations can then be deployed differently, for example to check that the robots function properly and that everyone understands how to use them, or to fulfil other service jobs at the NS, such as informing people on how to reach their destination. Of course this will result in fewer jobs, but since the robotic concept is designed to make it easier for disabled people to travel by train, this is not the primary concern.

Current Situation

This section will take a look into the current situation of train traveling for the disabled.

The current model

In the following section you will be guided through the current process of boarding a train when you are disabled.

  • First, you have to contact the NS to apply for NS travel assistance. This can be done in two ways: by telephone or online.
    • In case you want to do it online, you have to one-time register with NS, after which you should plan your trip. Then, you can ask for travel assistance.
    • By telephone you simply have to pass on your trip as planned, after which the NS can provide travel assistance.
  • In both cases the disabled person has to contact the NS at least one hour before travelling. Not every destination is possible: travel assistance is only available at about 100 of the 400 train stations in the Netherlands.
  • After applying, you have to be at a pre-set meeting point at least 15 minutes before your train departs.
  • The travel assistant will then take you to the right platform and help you enter the train.
  • This happens as follows: the travel assistant, often with another NS employee, takes the ramp (which is on wheels) and rolls it towards the desirable train entrance.
  • Then, they fold out the ramp and align it correctly with the train’s height. This happens after all other passengers have entered the train, ideally at a train entrance where there is plenty of room for the disabled person to stay during the trip.
  • After docking the ramp, the disabled person either drives up the ramp himself or the NS travel assistant helps.
  • As soon as the person is inside the train, the NS staff begins to fold up the ramp again and they bring it back to the original position on the platform.
  • The NS then contacts other staff at the destination station and pass on in which train compartment the disabled person is.
  • Then, at the destination station NS staff can take the ramp again and simply have to wait for the person to arrive.
  • As soon as the train arrives, they help the person leave the train in a similar way as they help them with boarding.
  • Often, the NS travel assistant helps the person during the entire process, which means he only stops helping once the person is off the platform and ready to continue their journey in the city of destination.

Other Countries

A modern wheelchair lift

Most railway companies in other European countries are bound by law to accommodate disabled people on their trains. Trains like the Eurostar have dedicated spaces in the 1st class cars and allow an additional passenger to accompany the wheelchair-bound customer. Most railway companies work like the NS system: you have to plan your trip ahead of time (online or through customer service) so the railway employees can help you along your trip. However, not all trips are possible, because companies like Deutsche Bahn require a specific minimum transfer time; some passengers therefore have to wait for the next train because a 10-minute transfer is not feasible.

Either ramps or mobile wheelchair lifts are used. These are stored on the platform, chained to a pole or wall, and the railway employee puts the ramp in place for you. When it is connected to the train door, the employee pushes you on board or places you on the mobile wheelchair lift. When you are on the lift, both sides are closed and the employee presses a button to align the height with the train door. Once the lifting is done, the front ramp goes down and you can ride into the train on your own. It is also possible for trains to have a ramp inside the train floor that slides out when a button is pressed. Companies that use the wheelchair lift include: VIA Canada, TGV (France), SBB (Switzerland) and Trenitalia (Italy).

Starting in 2013, a test was done in Den Bosch with LED lights on the platform showing where the train will stop and where the doors are located, including which wagons are full and which are empty. The results of this experiment have been incorporated into the NS app, which now shows how long trains are and how busy certain trains are.

Robotic Solution Specifications

In this section the requirements, preferences and constraints (RPCs) of the complete solution are stated. Subsequently, the new solution is described, along with the concepts used to get there.


The requirements, preferences and constraints are as follows.


Requirements:

  • Completely safe to use for the disabled person, but also completely safe for other passengers on the train.
  • Able to be used continuously; otherwise it will cause delays for the train or the person will miss the train.
  • Easy to use: disabled or elderly people have to be able to operate it.
  • Completely autonomous, meaning the disabled person can enter and exit the train entirely by themselves.
  • The solution should not be a main cause of train delay.
  • The solution should enable faster boarding and deboarding than the current approach.
  • The solution must be resistant to weather conditions and aging.


Preferences:

  • Let the person board and deboard as fast as possible.
  • A solution that is as cheap as possible, for both research and manufacturing costs.
  • As comfortable as possible for the user.


Constraints:

  • The solution has to fit every type of train; consider the width of the doors and the height of the entrance.
  • The solution has to fit on the waiting platform.
  • The solution needs a power source.
  • The time available to board the train equals 4 minutes.

Conceptual designs

In order to reach a solution to our problem statement, five different conceptual designs were formed. On the basis of the RPCs, the best conceptual designs are chosen to arrive at a preliminary design. Each design addresses part of the problem and is the product of individual brainstorming sessions.

Design 1

Design 1 involves an autonomously driving vehicle that can automatically drive to a certain location on the platform. The vehicle only drives in a straight line parallel to the railway, so one robotic vehicle is needed per platform. The robot has wheels and an extendable shelf that can be attached to the train when the doors are open. When someone wants to use the robot to board a train, one simply walks up to the robot and pushes a button. The robot is positioned at the end or front of the platform, depending on where the nearest elevator is located. When the train has arrived, the robot moves to the door nearest to its location, either at the rear of the train or at the front (depending on the direction the train travels).

The robot is positioned using sensors in the doors that let it know exactly where the doors are located. When the doors are open, the robot unfolds its ramp and the person can board the train. By means of a pressure sensor in the shelf, the robot knows whether the person has entered the train. After the person has entered, the robot lifts the shelf up again and drives back to its original position. When the person inside the train wants to exit at a certain station, a (not yet existing) extension of the NS app can be used. The app shares this information with the robot, so the robot knows in advance that someone wants to exit the train and can move into place when the train arrives (it can start moving when the door sensor is within its reach).

When the doors open, the shelf is put in place again and the person can exit the train. When the person has left the shelf and is on the platform, the robot again lifts the shelf and returns to its original position. To make sure the robot has enough power, there will be a power station at the robot's starting position. The robot can attach itself to the power station and charge its batteries (in the same way as a robotic lawnmower).
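The boarding cycle described above can be summarised as a small state machine. The sketch below is a simplified controller skeleton; the state and event names are our own, hypothetical choices:

```python
from enum import Enum, auto

class State(Enum):
    HOMED = auto()      # docked at the charging station
    MOVING = auto()     # driving along the platform to the door
    DEPLOYED = auto()   # shelf folded out against the train
    RETURNING = auto()  # driving back to the homing position

class RampRobot:
    """Hypothetical controller sketch for the Design 1 boarding cycle."""

    def __init__(self):
        self.state = State.HOMED

    def button_pressed(self):
        if self.state is State.HOMED:
            self.state = State.MOVING

    def door_reached(self):
        if self.state is State.MOVING:
            self.state = State.DEPLOYED  # fold out the shelf

    def pressure_cleared(self):
        # pressure sensor: the person has left the shelf
        if self.state is State.DEPLOYED:
            self.state = State.RETURNING  # fold up and drive home

    def home_reached(self):
        if self.state is State.RETURNING:
            self.state = State.HOMED  # dock and recharge
```

The same cycle runs in reverse for exiting: the app notification plays the role of the button press.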

Design 2

Design 2 uses a crane to lift wheelchairs and move them on or off the train. With this design there is no need for a vehicle on the platform. There will be designated doors for people in wheelchairs, where the crane is positioned on the train. The crane has a lifting cable with four universal clamps that can be locked onto the wheels of the wheelchair. The advantage of this concept is that it needs nothing on the platform that could obstruct other persons. Getting off the train is just as easy as getting on: you do not have to worry whether the crane is on the right platform, at the right door and at the right time when you arrive, because the crane moves with you in the train. The disadvantage of this design is that you need to attach the clamps to the wheelchair yourself.

It therefore does not work autonomously: if you are incapable of operating it yourself, you still need someone to help you. The second disadvantage is that all trains need to be adjusted, which takes a lot of time and will probably cost a lot of money.

Design 3

Design 3 is in many ways similar to the current ramp used at NS stations. It involves two ramps that are folded upwards; when someone wants to use the ramp, both sides flip down and level with the desired height. On one side of the ramp this equals the height of the train entrance, and on the other side the height of the platform. In this way, a person in a wheelchair can simply drive upward or downward to enter or leave the train. When the ramp is folded upward, a simple user interface could be installed. The screen would allow interaction between user and platform: the user could enter an 'order', after which the robot performs its duty. The robot is driven by two large wheels, one on each side, which allow easy rotation within the platform environment, where the robot navigates autonomously. The robot is stationed at a single spot per platform, where it can recharge itself after serving. The ramp has raised edges to prevent anyone from falling off.

Design 4

This design focuses on the docking problem for an autonomous robot. The wheelchair boarding system, as mentioned, has three main stages: the alert, dock and board stage. In this design the vehicle uses a wireless network and signal latency to triangulate its position. There will be beacons in the platform; these could also be placed in the docking station, but that is probably less accurate. The robot has two sender/receiver combinations, one on the front and one on the back. They send signals to the beacons, which resend them. From the latency information the robot can triangulate its position and orientation. At the same time there is a sender/receiver module under the step of the train, which also pings the beacons. The beacons then triangulate its position and send this information to the robot. At this point the robot knows where it is and where the goal is. In order to move to the goal, at least three things are required:
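A simplified 2-D version of this triangulation could look as follows. The beacon layout is hypothetical, and we assume the latency measurements have already been converted to distances (for a round trip, d = c·τ/2):

```python
def trilaterate(beacons, dists):
    """Estimate a 2-D position from distances to three fixed beacons.

    beacons: [(x1, y1), (x2, y2), (x3, y3)] known beacon positions
    dists:   [d1, d2, d3] measured distances (from round-trip latency)
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    # Subtracting the circle equations pairwise removes the quadratic
    # terms and leaves a 2x2 linear system in (x, y).
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero as long as beacons are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With two such fixes, one for the front module and one for the back, the robot's orientation follows from the line between them.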


  • The motion has to be planned within the kinematic constraints of the robot. A quintic polynomial could be used to prescribe start and end values of position, velocity and acceleration. The problem is that the robot is constrained in its movement, so the orientation matters. We could describe the path as a series of robotic links, with constraints between the links, such that the robot can always move from one link to the next.
  • The motion should be tracked while suppressing disturbances. This could be done using the kinematic equations of motion represented in state space.
  • The robot has to move around obstacles, human and non-human. This could be done by planning a path around them: proximity sensors build a map of the nearest obstacles. One possible solution is a controller that tracks the path but deviates from it as the sensors pick up obstacles; instead of striving for zero error, the allowed error grows with sensor input. Human obstacles are mobile, which means they can move aside if urged to.
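As a minimal sketch of the first point: for a rest-to-rest move (zero velocity and acceleration at both ends) the quintic has a well-known closed form. This ignores the orientation and link constraints discussed above and treats a single coordinate only:

```python
def quintic_rest_to_rest(q0, qT, T):
    """Quintic trajectory from q0 to qT in time T, with zero velocity
    and zero acceleration at both endpoints."""
    def q(t):
        s = t / T  # normalised time in [0, 1]
        return q0 + (qT - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return q
```

The general case with nonzero boundary velocities and accelerations requires solving a 6x6 linear system for the six polynomial coefficients.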

The solutions posed above are based on our current knowledge. In order to find smarter solutions, we might look into "truck docking", which is investigated by truck companies and poses a similar problem.

Design 5

Design 5 is an autonomous mobile lifting robot on four wheels that helps the disabled person board the train without any assistance from railway employees. The robot is stationed at a charging hub on each platform and has to be activated through the NS app and physical interaction with the OV-chipcard. Once the trip is planned and you "log in" on the robot with your chipcard, the robot moves itself to where the train will stop, ideally already aligned with a train door. When the train arrives, the robot autonomously aligns itself with the door and opens its back gate so the disabled person can ride onto the lifting platform.

Then the back door closes for safety and the person is lifted. When at the right height, the front door is brought down so the wheelchair can move over it into the train. When the person has left the robot, it should detect this, return to its original position and then move back to the charging hub as soon as possible so other passengers can use the door. The design will be connected to WiFi to get the trip information from the NS app and accurate train arrival times. In order to be ready to dock when the train arrives, the disabled person has to activate the robot to move to the correct position 5-10 minutes before arrival. The robot has to avoid passengers and bags on the ground on its own; ideally this problem is limited by introducing a "wheelchair robot path" on the ground, so people know where not to place bags, and by sensors in front that enable the robot to manoeuvre around remaining objects.

Because it has accurate trip information through the NS app, the robot knows on which arriving trains a wheelchair user is on board. It then needs to be ready to help this person exit the train completely on its own, without any physical "log in" at the robot.

Preliminary design

The preliminary design is essentially a combination of design 1 and design 5. It will be an autonomously driving vehicle placed at each platform. The vehicle has four wheels and uses a horizontal plate that can be lifted up and down to reach the right height to enter the train. It will be placed at one end of the platform, which will be called its homing position.

Figures: Chairliftdown.JPG, Wheelchairfinal.png, Chairliftup.JPG

At its homing position a power station will be placed. The robot always returns to the homing position and attaches itself to the power station. The robot has to be equipped with different kinds of sensors; for example, it should be able to sense obstacles in its driving path. When the robot senses something in its way, it should stop and give a signal to let its surroundings know that something is blocking it. Another design challenge is how the robot can locate a door where the person can enter or exit the train. A first idea is to equip every train with a sensor at the very first and last door; these doors are then used as the entrances for disabled people. An advantage of this solution is that the robot can always choose the door nearest to its homing position, so fewer people will walk in its path and the time to arrive at the door will be short.

Idealized solution

The idealized solution has to fulfill every requirement, preference and constraint. The biggest goal is that disabled people are able to travel all by themselves. This means that they can reach the platform and use the automated assistance system to exit and enter the train without any staff being involved.

  • Before using the robot, the disabled person has to use the app (see information below) to enter his trip and to reserve the robot at the platform of departure and arrival.
  • When someone arrives at a train station the first thing they need to do is to get to the right platform with the use of the elevators that are already present at every station.
  • The disabled person needs to check in like every other train passenger. People that need assistance when entering or exiting the train have a special OV-card, which can be used at the robot's touchscreen to activate the automated assistance. Since the user has entered his trip in the app, the robot knows which side of the platform it may have to drive to in case it drives autonomously.
  • The disabled person enters the robot with his/her wheelchair. He/she can select on a touchscreen whether or not to drive themselves, or let the robot drive.
  • If the user wants to drive, he/she can use the joystick to navigate towards the train doors. Where one should position the robot before docking is depicted in the figure below; the picture is explained in detail in the next chapter.
  • With the use of shared control, he/she can already drive towards the train, and in case the person navigates too close to an obstacle, the system takes over and redirects the robot past the obstacle. The person can choose whether to pass the object on the left or the right side. When the train ultimately arrives, the robot docks autonomously.
  • If the disabled person does not want to drive, the robot drives autonomously towards its docking position. While moving, the robot should always pick the shortest but also the safest path to the door; safest meaning that the robot never hits any obstacles or passengers. To realize this, the robot needs to be aware of its own position and of the positions of obstacles and people. It should therefore constantly adapt its driving path to avoid moving obstacles as efficiently as possible. This is a very important aspect, which will be elaborated on later in this wiki.
  • After docking, the person can enter the train.
  • In the meantime, at the destination, a robot is already stationed at the door where the person will exit the train (this information is transmitted through the OV pole).
  • When the person then arrives, he can leave the train immediately.
  • There may be modifications in the train's timetable, causing the train to arrive at a different platform. Since the disabled person's trip is checked in with the NS, they know where the person is heading and which robot is reserved at the 'old platform'. In case of such a platform change, the system can automatically reschedule, to make sure a robot is available at the new platform. As there are two robots on every platform, the robot there cannot already be taken: if the train can enter at that platform, no other train is at that platform, which leaves at least one free robot. There may be a disabled person on the other side of the platform using a robot, but there are two robots, and only one disabled person can travel per train per time frame.

Safety regulations & Patent Check

The concept needs to comply with safety regulations for autonomous driving vehicles:

For autonomous driving vehicles there are, to date, no universal laws or regulations. A congress (Arc, n.d.) near the end of this year should shed some light on this issue. For now we think it suffices to make the vehicle as safe as possible, so that the risk of a hazard while driving is minimal. The people around the vehicle should be aware that it is driving and have to move out of the way; this can be achieved with an alarm and floodlights.

Safety regulations for lifting people:

There are many rules and regulations for lifting a person, but most of them are straightforward, such as: the lift should be designed to minimize the risk of a hazard. The full list of rules and regulations can be found in (HSE, 2008). The important things to keep in mind regarding these safety regulations are that the vehicle can never flip over while lifting a person, and that the person can never drive off the lift while it is going up or down.

There are no existing patents regarding the basic idea of autonomous train boarding assistance. This patent check was done by entering the terms 'wheelchair, train' and 'wheelchair, lifting' into the US Patent & Trademark Office search tool, which includes international trademarks. Thus, the robotic solution does not need to take patent law into account regarding the robotic wheelchair lift concept.

Comparing Current and New Solution

In this section, we compare the current and the new solution by checking whether the current solution complies with a number of the RPCs (requirements, preferences and constraints).

  • Completely safe to use for the disabled person but also completely safe to other passengers on the train.

The current solution guarantees this requirement: since train staff is involved, it is completely safe to use.

  • Able to use continuously, if not it will cause delay for the train or the person misses the train.

Since only one ramp is available per platform, it could cause delay when two trains arrive at the same platform at the same time. The current solution therefore does not fulfill this requirement.

  • Easy to use, disabled or elderly people have to be able to operate it.

Currently the ramp is not operated by the disabled person but by the train staff; therefore it cannot be compared on this point.

  • Completely autonomous, this means that the disabled person can enter and exit the train all by their self.

As said before, the ramp is operated by the train staff, and therefore the current solution does not comply with this requirement.

  • The solution should not cause delay for other people who want to board the train.

The current solution in most cases does not cause delay for other people, since the staff waits until everyone else has entered the train before they help the disabled person board.

Robotic Solution Concept Explained

In this chapter the new solution is discussed in detail. First the update to the current app is explained, then the interface of the robot, then the interaction with the surroundings, and eventually the docking of the robot. These first parts are mainly about ethics, surroundings and experience. The robot can also move on its own, so the autonomous driving chapter will discuss the capabilities of the robot without humans. Finally the concept of shared control is discussed, as a result of the user questionnaire.

NS App integration

The disabled person uses the app to enter his/her trip. This can easily be implemented in the existing app. The first picture shows the screen where one can plan a trip. In the second picture one selects the time frame in which to travel; a wheelchair logo signals which trips are possible, i.e. whether the robot in a particular time frame is not already reserved. The third picture shows the button at the top with which one subsequently reserves the robot for the trip.

Picture 1 Picture 2 Picture 3

As we all know, trains do not always run as they should. To deal with delayed and cancelled trains, the robot needs to be aware of them. If, for instance, the location where the train stops at the station changes, the robot needs to know this in order to get to the right door. To solve these problems the NS app will be used: it sends real-time updates to the robot, so it knows when trains are delayed or moved to a different platform. With this information the robot is always able to get to the right location in time. The people that need the robot can activate it with the same app, to let the robot know when it has to drive to a certain location.

Interface on robot and check-in

At the height of an average person in a wheelchair, special armrests are appended which the person can use to interact with the system. The picture below shows the right side armrest that holds the joystick. The person can control the wheelchair robot with this joystick.

Figures: Armrest, OV-checkin touchscreen, touchscreen

On the left side of the wheelchair, an integrated touchscreen is visible to the user. This touchscreen acts as a panel to activate the robot with the OV-chipcard (picture 1) and as a navigation tool. Users can toggle between joystick mode and autonomous mode with this control. Also, when in autonomous mode, navigation options are shown when objects are encountered.

Robot-surrounding interaction

Personal distance

When approaching and passing other people at the train station, the robot should take into consideration the concept of personal space. Moreover, based on past research on the matter, we should devise an ideal way of approaching other people. Research by Brandl et al. (2016) has looked at the design of the phase when a personal-service robot approaches a human being. Although in our case the robot merely passes other people, multiple important insights from this research should be incorporated in our design:

  • In human-human interaction, Hall (1966) roughly distinguished five zones, which were later described by Walters et al. (2008).


  • Research by Koay, Syrdal et al. (2007) found that a mechanoid robot is allowed to come closer to humans than a humanoid robot. Our robot is mechanoid, which decreases the amount of personal space required.
  • Butler and Agah (2001) found that a fast approach by a robot (1 m/s) made participants feel less comfortable than a slow approach (0.25 or 0.38 m/s). We should therefore be aware that we cannot increase the robot's speed without limit to speed up the process; apparently a slower approach is more human-friendly.
  • Zlotowski et al. (2012) performed research on the approach direction of walking humans, which is also highly relevant for this project. They found that humans prefer to be approached from a front-left or front-right direction rather than head-on. However, as our robot will be dealing with many people in a highly dynamic environment, the extent to which this ideal angle can be achieved is limited.
  • Brandl et al. (2016) performed research on the accepted distances of an approaching robot, while standing, sitting and lying, at three approach speeds: V1 = 0.25 m/s, V2 = 0.5 m/s and V3 = 0.75 m/s.


  • As the graph shows, at a speed of 0.5 m/s, the mean of distances that were accepted is about 1 meter. This implies our robot should not be closer to people than 1 meter. However, as we are dealing with highly crowded and dynamic situations, this is not a realistic option. We should therefore employ other techniques to decrease the amount of personal space that is desired, if we want to set this bar lower than recommended in this study.
  • A study by Koay et al. (2014) researched whether the usage of LED display colours to signal movements would decrease the amount of personal space needed. This hypothesis was not supported, suggesting this would not make a difference.

Several conclusions can be drawn from the information above:

  • The ideal minimal distance for our robot is 1 meter at a speed of 0.5 m/s, which will be the robot's average driving speed. At a lower speed of 0.25 m/s, the distance drops to about 0.8 meter. Since 1 meter is not feasible in a busy train station environment, we will lower this, but 1 meter remains our anchor and we must beware not to lower the distance too much. A reasonable distance may be 0.6 meter; according to Hall's research (1966) this falls within the personal zone without intruding on the intimate zone.
  • Since our robot is mechanoid, it is allowed to come closer to humans than humanoid robots.
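These conclusions can be combined into a simple speed policy. In the sketch below, the 0.5 m/s cruise speed and the 0.6 m and 1 m distances come from the discussion above; the linear ramp between the two distances is our own assumption:

```python
def speed_limit(nearest_person_m,
                full_speed=0.5,  # average driving speed (m/s) from above
                slow_dist=1.0,   # anchor distance from Brandl et al.
                stop_dist=0.6):  # chosen personal-zone boundary
    """Cap the robot's speed based on the distance (m) to the nearest
    person. The linear ramp between the two distances is an assumption."""
    if nearest_person_m <= stop_dist:
        return 0.0
    if nearest_person_m >= slow_dist:
        return full_speed
    frac = (nearest_person_m - stop_dist) / (slow_dist - stop_dist)
    return full_speed * frac

speed_limit(2.0)  # 0.5  m/s: full speed in open space
speed_limit(0.8)  # 0.25 m/s: halfway inside the slow zone
speed_limit(0.5)  # 0.0  m/s: stop inside the personal zone
```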

Traversing the platform

While traversing the platform, two requirements should be kept in mind:

  • If the robot is driving (autonomously), its direction should be clear at all times. This should be communicated to the other people at the train station to avoid collisions: the robot should make clear what its future actions are. To illustrate: if the robot will turn left in less than 3 seconds, arrows should already point to the left, giving an advance warning of about 3 seconds.
  • If the robot is driving, other people at the station should not be walking in front of it or closely behind it.

To find out the best way of fulfilling these requirements, we first take a look at the state-of-the-art, which we may draw inspiration from.

Vodafone Smart Jacket

The Vodafone smart jacket is intended to increase the visibility of cyclists in traffic in the dark and to improve traffic safety. The jacket is connected to your smartphone, and before cycling you plan your trip on your phone. By actively tracking your location during the trip, the jacket indicates your upcoming direction with an illuminated red arrow on the back: see the figure below. So if the cyclist will turn right in the near future (about 30 meters), other road users know his intentions. This is still a prototype and has not been deployed in society, hence we cannot draw many conclusions about the effectiveness of the concept. However, the arrow indicating the cyclist's direction serves as an inspiration for our robot.

Picture 1

Nao robot

The well-known Nao robot usually indicates its direction by looking towards it. The perception of the robot's gaze direction is crucial for this to work. A study by Torta (2014) indicated that a '3D head is needed for mimicking gaze direction', and that 'head orientation is sufficient to elicit eye contact'. In our robotic system a 3D head is not applicable, so we cannot draw inspiration from this. Moreover, Nao generally walks slowly, which limits the risk of collision. Although this limits collision risk, the technique is not very effective for our system, as we aim to transport the disabled person as fast as possible.

Picture 1


A major example of an electronic system indicating a vehicle's direction is obviously the car, by far the most familiar vehicle to people. A car's headlights are white/yellow, while its rear lights are red. Since cars are extremely common in everyday life, people are most likely to associate white light with the front of a vehicle and red light with the back. This can be incorporated in our project.

Our design

Summarizing the above, we can design the following system to indicate the robot's direction to bystanders at the train station: the robot will have four lights indicating its direction. Two LED arrows will be projected on the floor, at the front and back. Two other LED arrows will be displayed on the robot itself; since many people do not watch the floor while walking, these additional arrows at eye level increase the robot's visibility. The arrows on the front and on the floor behind the robot illuminate the direction of travel. This system indicates direction in multiple ways:

  • The arrow points in the direction the robot is moving
  • If the robot is moving backward, the arrows will flip and turn backward too.
  • The colors on the ground will attract attention and people will see the robot moving
  • The colors on the ground will prevent people from walking in this illuminated space

The right picture below shows how lights are used in the design.

Picture 1 Picture 2


The lights are chosen to be red in the direction the robot is heading (in front of the robot) and green on the back. These colors indicate that people are allowed to walk behind the robot but should avoid walking too close in front of it. With the lights, people will avoid the front section of the robot, which makes it easier to drive through a crowded area.


When people are not paying attention to the lights and get too close to the robot, the robot sounds an alarm. This way people are alerted when they are standing in its way. The alarm must not cause panic on the platform; therefore it should not sound similar to that of police, ambulance, fire department or other emergency services. In fact it might even be a good idea to play some familiar music, perhaps even piano music. This is already done in Taiwan with garbage collection: the garbage trucks play Beethoven's 'Für Elise'. People know this and go out to bring their garbage to the truck. It draws attention without being especially agitating, and it might help make the platform slightly more friendly. The robot should thus indicate its approach by a sound. For the scope of this project finding the actual sound is not a priority; however, we can identify the requirements for it:

  • It should be loud enough to be heard by everyone within a radius of 10 meters around the robot, including people that are hard of hearing and people using headphones. It should however not be too loud: it should not cause hearing damage, annoyance, or scare people.
  • The sound should not be very 'alarm-like', as this may cause a scare and, more importantly, may make people panic or think of an emergency. The robot passing is obviously not an emergency, and the sound should therefore indicate its passing in a serene but audible manner. Another option would be a robot voice signalling its passing, e.g. 'Please move!'.
  • To enhance pleasure in use, we could choose a song to indicate the robot's passing. As the sound is meant to make people aware of the robot passing through, we could for example use 'Go Your Own Way' by Fleetwood Mac.

Autonomous Driving

In this chapter the autonomous functions of the robot are presented. The background and techniques that we would like to use should serve as a starting point for the actual hardware implementation. First the localization and orientation of the robot are discussed, because they are the starting point of planning and acting in the real world (Russell, S. and Norvig, P., 2014). Then the planning and acting on the platform are discussed at a fundamental level. This chapter is mainly based on the book "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig. For the eventual implementation it is advised to read their book in addition to this chapter.

Robot Hardware


Although perception appears effortless to humans, it requires a significant amount of sophisticated computation. The goal of vision is to extract the information needed for tasks such as manipulation, navigation and object recognition. The robot traversing the platform will have different methods for different obstacles, but in order to choose the right course of action it first has to know what is happening. Perception in our robot will be done with active sensing: the robot should combine ultrasound with laser or camera vision. Visual observations are extraordinarily rich, both in the detail they can reveal and in the sheer amount of data they produce. The extra problem for the wheelchair robot is to determine which aspects of this rich visual stimulus should be considered to make good choices, and which aspects to ignore. The ultrasound sensors will mainly determine the world map and notice when objects are moving. The visual observations should help the robot distinguish between different dynamic objects (e.g. a human and a cat). Norvig and Russell also mention that visual object recognition in its full generality is a very hard problem, but with simple feature-based approaches our robot should be able to know enough to take action. Perception can also help our robot measure its own motion, using accelerometers and gyroscopes.


Effectors are the means by which robots move and change the shape of their bodies (Russell and Norvig, 2015). Our robot will have a differential drive for locomotion. This gives our robot three degrees of freedom (x, y position and orientation) that can be obtained with the Marvelmind beacons discussed before. Only two degrees of freedom are directly controllable, and hence the robot is nonholonomic. This makes the robot harder to control, as it cannot move sideways. Yet the choice for two driven wheels on the sides of a disk makes the kinematic model of our robot a lot simpler.
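A minimal sketch of the differential-drive kinematic model shows why only two degrees of freedom are controllable: two wheel speeds go in, and the pose change comes out. The wheelbase and speeds below are illustrative:

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheelbase, dt):
    """One Euler integration step of the differential-drive model.
    v_left / v_right are wheel ground speeds (m/s); wheelbase is the
    distance between the two wheels (m)."""
    v = (v_left + v_right) / 2.0            # forward speed
    omega = (v_right - v_left) / wheelbase  # turn rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds: the robot moves straight along its heading;
# no input combination produces pure sideways motion (nonholonomic).
x, y, th = diff_drive_step(0.0, 0.0, 0.0, 0.5, 0.5, 0.4, 1.0)
# (x, y, th) == (0.5, 0.0, 0.0)
```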

Robotic perception and path planning

Earlier on, the perception criteria for the robot were discussed; here the hardware/software implementation is discussed. Russell and Norvig give the following definition of perception: "Perception is the process by which robots map sensor measurements into internal representations of the environment." Perception is hard because the environment is partially observable, unpredictable and dynamic for our robot. In addition, the sensors are noisy. In all cases the robot should filter out the good information and make a state estimate that contains enough information to make good decisions.


In the algorithm that enables the robot to find the goal, positions and orientations are requested. This section elaborates on the triangulation of positions and orientations. To triangulate the position, three beacons are required [math]\displaystyle{ (A, B, C) }[/math]. These beacons have positions [math]\displaystyle{ [A, B, C] = [(0,0), (0,B), (C,0)] }[/math], in which [math]\displaystyle{ B }[/math] and [math]\displaystyle{ C }[/math] are constant values, since the beacons do not move. Next we calculate the distance from the sender/receiver on the robot to each beacon, using the timestamp [math]\displaystyle{ t_{X,i} }[/math] and the speed of the signal, here assumed to be the speed of light, [math]\displaystyle{ C }[/math]. The index [math]\displaystyle{ i }[/math] refers to the sender/receiver node to which the distance applies, in other words to the corresponding coordinate pair.

[math]\displaystyle{ r_{X,i} = t_{X,i} \cdot C }[/math] with [math]\displaystyle{ X \in [A, B, C] }[/math]

These distances and the law of cosines are used to calculate the x and y position:

[math]\displaystyle{ Y_i = \frac{B^2 + r_{A,i}^2 - r_{B,i}^2}{2B} }[/math]

[math]\displaystyle{ X_i = \frac{C^2 + r_{A,i}^2 - r_{C,i}^2}{2C} }[/math]

The angle is calculated from the positions of the two nodes on the robot:

[math]\displaystyle{ \theta = \arctan\left(\frac{Y_2 - Y_1}{X_2 - X_1}\right) }[/math]
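The formulas above translate directly into code. This is a sketch with illustrative beacon positions (B = C = 10 m) and a simulated robot node at (3, 4); `atan2` is used for the orientation because it avoids the division by zero of the plain arctangent when the two nodes are vertically aligned:

```python
import math

def trilaterate(rA, rB, rC, B, C):
    """Position of a node from its distances to beacons at
    (0,0), (0,B) and (C,0), using the formulas derived above."""
    y = (B**2 + rA**2 - rB**2) / (2 * B)
    x = (C**2 + rA**2 - rC**2) / (2 * C)
    return x, y

def heading(front, back):
    """Orientation from the two sender/receiver nodes on the robot."""
    return math.atan2(front[1] - back[1], front[0] - back[0])

# Simulated distances for a node at (3, 4):
rA = math.hypot(3, 4)       # to beacon (0, 0)
rB = math.hypot(3, 4 - 10)  # to beacon (0, 10)
rC = math.hypot(3 - 10, 4)  # to beacon (10, 0)
trilaterate(rA, rB, rC, 10.0, 10.0)  # -> (3.0, 4.0)
```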

Marvelmind beacons

After some research we found out that the drones team uses a Marvelmind Robotics beacon system. The system can be used with an Arduino. The company provides, among others, the following information:

"Marvelmind Indoor Navigation System is off-the-shelf indoor navigation system designed for providing precise (+-2cm) location data to autonomous robots, vehicles (AGV) and copters.

The navigation system is based on stationary ultrasonic beacons united by radio interface in license-free band. Location of a mobile beacon installed on a robot (vehicle, copter, human, VR) is calculated based on the propagation delay of ultrasonic signal (Time-Of-Flight or TOF) to a set of stationary ultrasonic beacons using trilateration. Stationary beacons form the map automatically. No manual entering of coordinates or distance measurement is required. If stationary beacons are not moved, the map is built only once and then the system is ready to function after 7-10 seconds after the modem is powered

The system needs an unobstructed sight by a mobile beacon of two stationary or more stationary beacons simultaneously – for 2D (X,Y) tracking. The distance between beacons cannot exceed 30 m."

This system should be used in our robot in order to keep a clear reference to the real world and know where the robot is itself.


To plan a path, a map is needed first, and knowing where you are is only part of map creation. The robot knows where it is via the beacons and local measurement hardware. The next step is to determine where the obstacles are relative to the robot and put them in a map. To determine where the obstacles are, the robot uses ultrasound range sensors. These give a certain range, and along with that the object gets a landmark and a position in the map. Russell and Norvig describe the Kalman filter and the extended Kalman filter; the difference lies in the approximation of the sensors and robot, as the normal Kalman filter only allows linear models for motion and sensing. An alternative is Monte Carlo localization, which uses particle filters instead. In our robot solution we give the robot a map of the platform before it has to traverse it, and the beacons are used to acquire its location. This means that the robot is not required to simultaneously localize and map; in other words, we see SLAM as neither needed nor desired for localization.
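A scalar, fully linear Kalman filter cycle shows the idea in its simplest form: a beacon position measurement is blended with a motion prediction, weighted by their uncertainties. The noise variances below are illustrative, not measured values:

```python
def kalman_1d(x, P, u, z, Q=0.01, R=0.04):
    """One predict/update cycle of a scalar linear Kalman filter.
    x: position estimate, P: its variance, u: commanded displacement,
    z: beacon measurement, Q/R: process/measurement noise variances."""
    x_pred = x + u       # predict: apply the motion command
    P_pred = P + Q       # uncertainty grows while moving
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # pull towards the measurement
    P_new = (1 - K) * P_pred           # uncertainty shrinks
    return x_new, P_new

x, P = kalman_1d(0.0, 1.0, 0.5, 0.6)
# The estimate lands between prediction (0.5) and measurement (0.6),
# and the variance drops well below the initial 1.0.
```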

A greater problem is to find out what kind of object is in front of the robot; in other words, we would like to identify the landmarks. This is problematic, because we would like to use camera vision to address it, and perception is complicated. This report merely describes some solutions and gives a step-up to some background.

In addition, the planning should be able to address uncertainty. To handle this, the robot needs to re-plan its path continuously and keep asking for information when it faces high uncertainty.

Determining location of the door

For the robot to help a person exit the train, the location of the person in the train is needed. Trains in the Netherlands do not stop at exactly the same place on a platform every time, which is a problem when a robot is used. In Den Bosch they are experimenting with real-time updated LED lights that show people where a door will be located when the train stops. This information can be useful for determining the location of the person on board. When the location of the door for handicapped persons is known, the system can send this information to the robot, which then knows where the person is located and can drive to that location.


Planning to move

The robot will use point-to-point motion to reach the end location. The original 3D space is reduced to the configuration space. This space is continuous, and the planning problem in it can be solved by either cell decomposition or skeletonization; both reduce the continuous path-planning problem to a discrete graph-search problem. In addition, the robot should be capable of compliant motion, i.e. motion while in physical contact with an obstacle. This is necessary because people on the platform may push the robot or make other forms of contact.
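Cell decomposition followed by graph search can be sketched as a breadth-first search over a small occupancy grid; the grid layout and the blocked cells are illustrative:

```python
from collections import deque

def grid_path(grid, start, goal):
    """Breadth-first search over a 4-connected occupancy grid
    (a simple form of cell decomposition). grid[r][c] == 1 is blocked.
    Returns a list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk the back-pointers
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None

platform = [[0, 0, 0],
            [1, 1, 0],   # a bench blocking the direct route
            [0, 0, 0]]
grid_path(platform, (0, 0), (2, 0))  # detours around the bench
```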


A path found by a search algorithm can be executed by using it as the reference trajectory for a PID controller. Such a controller is necessary, as path planning alone is usually insufficient. In our demonstration we simply specified a robot controller directly: rather than deriving a path from a model of the world, our controller just switches state in a finite state machine as it encounters problems. This implementation is a lot easier and gives a nice demonstration for the robot we built. The finite state machine also allows for easy feedback towards the user, as the different states give a good indication of the actions that will be taken. However, the higher-level path planning is necessary for autonomously traversing the platform.
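A minimal sketch of such a finite state machine as a transition table; the state and event names are hypothetical, chosen only to mirror the stages described in this wiki:

```python
# Transition table: (state, event) -> next state.
TRANSITIONS = {
    ("idle",      "activated"):   "driving",
    ("driving",   "obstacle"):    "waiting",
    ("waiting",   "path_clear"):  "driving",
    ("driving",   "at_door"):     "docking",
    ("docking",   "docked"):      "lifting",
    ("lifting",   "person_left"): "returning",
    ("returning", "at_hub"):      "idle",
}

def step(state, event):
    """Advance the controller; unknown events leave the state unchanged,
    which keeps the machine easy to inspect and to report to the user."""
    return TRANSITIONS.get((state, event), state)

s = "idle"
for e in ["activated", "obstacle", "path_clear", "at_door", "docked"]:
    s = step(s, e)
# s == "lifting"
```

The current state can be shown directly on the touchscreen, which is the "easy feedback towards the user" mentioned above.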

Obstacle Avoidance

In order to make the planning more suitable for the train platform, we would like to make use of the dynamics that govern the people moving on the platform. To achieve this, one should first make a distinction between static and dynamic objects; otherwise it is not possible to determine which actions to take. To distinguish static and dynamic objects, we want to use a coarse 2D grid (cell decomposition). All benches and other static objects are hard-coded into the grid. We then use the sensors to detect objects; they give an indication of the tile in which an object is detected. The robot then evaluates the map and decides whether it knows the object or not. If it knows the object, it can follow the path that was already planned around it; otherwise it should find a way to get past the object. In that case it could be a static-dynamic object like a suitcase, which can be moved, or a dynamic object that can move by itself.
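The map-evaluation step can be sketched as a lookup against the hard-coded static grid; the category names and the grid are illustrative:

```python
def classify_detection(static_grid, cell):
    """Decide how to treat a sensed object in a grid cell.
    static_grid holds the hard-coded fixtures (benches, pillars);
    anything sensed elsewhere is an unexpected obstacle that needs
    a replanned path or further sensing."""
    r, c = cell
    if static_grid[r][c] == 1:
        return "known_static"  # already in the planned-around map
    return "unexpected"        # bag, person, ... -> replan / observe

benches = [[0, 1],
           [0, 0]]
classify_detection(benches, (0, 1))  # -> "known_static"
classify_detection(benches, (1, 0))  # -> "unexpected"
```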

Docking and entering the train

The robot will position itself using local light-sensitive sensors and reflectors beneath the train. This only happens at the last moment, when the robot is already close enough to the train that no people can interfere with the light sensors. For these sensors too, probabilistic filters will have to be used to filter out noise and get reliable results. The docking of the robot is left to the robot itself and cannot be done manually, to prevent accidents.
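As a placeholder for the probabilistic filtering of the docking sensors, even a simple exponential moving average shows the effect of damping a noise spike; the smoothing factor is an assumption, and a proper probabilistic filter would replace this in the real system:

```python
def ema_filter(readings, alpha=0.3):
    """Exponential moving average as a simple noise filter for the
    docking light-sensor readings (alpha is an illustrative value)."""
    est = readings[0]
    smoothed = [est]
    for z in readings[1:]:
        est = alpha * z + (1 - alpha) * est  # blend new reading in
        smoothed.append(est)
    return smoothed

ema_filter([1.0, 1.0, 5.0, 1.0, 1.0])  # the spike at 5.0 is damped
```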

Picture 1

When docking to the train, other passengers might block the robot. As mentioned, the train has information about the position of the disabled person in the train. When the disabled person scans his OV-chipcard inside the train, a beacon starts, and a red light on the door is activated to indicate that a disabled person will leave the train. The same applies when the disabled person scans his OV-chipcard at the dock: the robot connects to the beacon at the door of the train, and the established connection also triggers a red light on the outside of the door. This will make the end phase of the docking less troublesome, because it will hopefully deter other passengers from standing at the train door where the robot will board.

Picture 1

The disabled person will board the train using a lift system. When the robot is docked to the train, the plate goes up; when the plate is at the right height, the lift stops and the person can easily get on or off the train.

Shared Control

Dallaway and Tollyfield (1990) acknowledge the importance of control for disabled people in their article Task-specific control of a robotic aid for disabled people: “The psychological importance of access to and control of one’s surroundings is obvious to the disabled and is increasingly recognized by those working in the social services. Help in these areas is commonly being provided by a combination of human assistants and a limited range of environmental control systems. Robotic aids, while only providing restricted integration with the surroundings, give a degree of versatility not possible with other forms of environmental control.” In the context of this journal article, control refers to control of one’s surroundings. This confirms our interpretation of the desire for more control: since we are designing shared control, we are literally giving the disabled person more control over his surroundings.

Another interesting paper was written by Petry et al. (2010). It states the following: ”Shared control initiatives take advantage of the user’s intelligence and assist the driver in the navigation process when dangerous situations are detected, extending and complementing user capabilities.” Moreover, an important aspect of shared control, also mentioned in this article, is that it can “reduce the navigation complexity”. This is highly beneficial when designing such a robotic system. The article investigated intelligent wheelchairs that share control with the user to avoid risky situations. After the tests, the volunteers were presented with a questionnaire which measured their perception of safety with and without the assistance of shared control. In the manual case, where the user controlled the wheelchair alone, safety perception decreased and collisions increased. The results are shown in figure 8. Clearly, the safety perception with shared control is much better. This is an important finding for our project: although handing the disabled person manual control may give them more influence in the situation, shared control will work better and increase the perception of safety.


Another study that serves as an inspiration for this project was performed by Connell and Viola (1990). They made a striking comparison between riding a horse and driving a car. A horse will not crash at high speed, and “if you fall asleep in the saddle, a horse will continue to follow the path it is on.” This illustrates the added value of shared control very well. In this article, the robot works as follows: the operator (the disabled person) is free to drive the robot in any direction, but the robot will refuse to continue its path if it detects an obstacle. This is similar to the way we are designing our robot. Shared control is beneficial in two cases: if the robot is too cautious (for example in a very busy environment), the disabled person can take complete control to increase efficiency. On the other hand, when the person is either unable to drive or tired, he can fully hand over all power to the robot.

There is no information available on shared control in wheelchair lifting devices specifically. In our system, when the device encounters an object or the end of the platform, it needs the user to take over the driving mechanism. But what is the best way to indicate to the user that he needs to act? Obviously, the system needs to signal the user with information on what is happening and on when it wants the user to take over control and move the machine himself. “Shared control between human and machine: using a haptic steering wheel to aid in land vehicle guidance” (Steele et al., 2001) concludes that incorporating haptic feedback into the control device (in our case the joystick) improves the alertness of users. Haptic feedback is, for example, a vibration signal to the user when the machine encounters a problem and needs to hand control to the user; this alerts the user to take control immediately. “Haptic shared control: smoothly shifting control authority?” (Abbink et al., 2011) concludes that haptic shared control can lead to short-term performance benefits (faster and more accurate vehicle control, lower levels of control effort). Thus, it would be wise to incorporate force feedback (haptic control) into the feedback system. Much like the autopilot of Tesla (see reference list), which requires the driver to place his hands on or near the steering wheel and gives haptic feedback when it needs the driver to act, our machine could require the user to place his hand on the joystick.

When the wheelchair lifting device encounters an obstacle, the end of the platform, or an error, it needs to signal to the user what is needed of him. Because a screen is already incorporated in the device for the OV-chipcard check-in system, it makes sense to also use it for signalling when the user needs to take control. The user also needs the option to take control himself, without the device having encountered an obstacle. This shared control can be displayed on the screen, which can be made a touchscreen so the user can press a button and take control of the machine. This, however, brings a problem: according to “Visual-haptic feedback interaction in automotive touchscreens” (Pitts et al., 2012), touchscreens in the automotive industry take away user awareness of the surroundings (because they add a visual workload for the user). However, that paper also concludes that incorporating haptic feedback counters this and improves overall situational awareness. This research suggests that it is a good idea to provide information on the screen alerting the user that he needs to take control (by pressing the button on the touchscreen), while also alerting the user with force feedback (vibration) to inform him that he needs to take action.


Another problem the user might encounter is limited visibility directly in front of the machine, because a ramp is attached to the front. Because our lifting device knows its own location and the end of the platform, the feedback device (touchscreen) could indicate how far away the user is from the end of the platform, or how far away obstacles directly in front of the machine are.

The results of the questionnaire, combined with the above research, have several implications for our design:

  • Although we are unable to test our interpretation of the questionnaires, the literature research above nevertheless confirms shared control is an added value to this robotic system.
  • The way of implementing shared control will be similar to the ‘Mister Ed’ robot by Connell and Viola (1990): the operator is free to drive in any direction, but the robot will refuse to continue its path when detecting an obstacle. The robot will then pass the obstacle on the left or right side (this can be chosen by the user). The robot is effectively looking over the disabled person’s shoulder to remain safe at all times.
  • Besides implementing the principle of shared control, the concept itself already gives the disabled person more influence on the process, as he or she now does not have to contact the NS long beforehand, is independent of NS travel assistants and can use the robot without any help.


Prototype schematic.jpeg

What problem does the prototype solve?

The prototype is a demonstration of the shared control concept discussed earlier. It shows the possibility of aiding a disabled person in getting from A to B: when the driver tries to drive forward into an obstacle, the robot will drive around it and let the driver know it is intervening. Without the demonstrator the project would have yielded a lot of literature and little practice; it also lets us experience the difficulties that hardware imposes on theory.

RPC's of the prototype


Requirements:
  • Drive in straight lines and make turns.
  • Be controllable by the user, through a laptop and Arduino.
  • Avoid hitting obstacles; the robot therefore needs to be able to sense objects in its surroundings within at least 1 meter of its own position. When an obstacle is too close, the robot has to stop and let the user and its surroundings know that something is blocking its path.
  • The prototype should take over control if the user tries to move into an obstacle (shared control).


Preferences:
  • Give feedback to the user to let him know what the robot is up to.
  • Be able to sense an object in its surroundings and, in case the object is in its path, be able to alter its path to navigate around it.
  • The prototype should be as cheap as possible.


Constraints:
  • The prototype cannot cost more than €100,-
  • The prototype should have dimensions of around 30 x 30 cm

Prototype specification

First the hardware of the solution is presented and thereafter the software implementation of the shared control is explained.


List of parts
  • 2 wheels
  • 2 DC motors
  • Swivel wheel
  • Arduino
  • Plate as chassis
  • Hinge
  • Ultrasonic distance sensor
  • Wires
  • Powerbank
  • 9V battery
  • Power amplifier
  • Breadboard
  • LED-lights (green & red)

A wooden plate is used as the chassis of the prototype; the motors are attached to the plate and to the wheels. The sensor is attached to a hinge at the front of the prototype. The hinge is fixed in one position to make sure that the sensor does not move. The rest of the equipment is attached to the plate. The sensor is an ultrasonic distance sensor, which can measure distances between 0.1 and 4.5 m. Other sensors were considered as well, but this type seemed to fit our project best: infrared sensors and vision systems had the disadvantage of being too expensive and impractical for our prototype. The image on the right gives a schematic view of the electronic hardware.

Schematic prototype.png


We used our basic knowledge of programming to construct a state-based algorithm. The algorithm's main objective is to avoid hitting obstacles. When the robot's sensors sense an obstacle, it switches state to avoid a collision and maneuver around it, sharing information with the driver in the meantime. The software first defines the actions the robot can take. These are:

  • leftturn makes the robot turn left
  • rightturn makes the robot turn right
  • forward makes the robot move forward
  • slow down makes the robot slow down
  • pause makes the robot stop
  • measure returns the distance measured by the ultrasound sensor
  • input checks for input from the user (laptop)

The Arduino code is shown in the pdf below:


Software explained

The main task of our code is to keep constant track of objects in front of the vehicle, while the operator must have the ability to take control of the vehicle to stop or turn at all times. After some setup, the code enters a loop, continuously measuring the distance to objects and waiting for input. Input is given by numbers: “5”: stop; “6”: turn right; “4”: turn left; “8”: straight forward. The first three of these are direct commands: pressing “5”, “6” or “4” makes the vehicle immediately stop, turn right or turn left, respectively. The last one, “8”, is where the shared control comes in. When “8” is pressed, the sensor first measures whether an object is close in front of the vehicle. If not, the vehicle drives forward. If an object is closer than 70 cm, the vehicle continues forward at a slower pace until it comes within 40 cm of the object. The vehicle then turns about 90 degrees left and moves forward, turns right and moves forward for 2.4 seconds, then turns right again, moves forward to return to its old trajectory, and turns left. While evading the obstacle, the vehicle continuously measures for new objects in front of it; if a second object comes within 40 cm, the vehicle evades it in the same manner as the first.
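The decision logic for the “8” (forward) command can be sketched as follows, using the 70 cm and 40 cm thresholds from the text. The distance value stands in for an ultrasonic sensor reading, and the action names are illustrative, not the actual Arduino identifiers:

```python
# Sketch of the shared-control decision for the "forward" command:
# the user asks to drive forward, but the robot overrides the request
# when an obstacle comes too close. Thresholds are taken from the
# description above; action names are assumptions for this sketch.

def forward_action(distance_cm):
    """Decide what the robot does when the user requests 'forward'."""
    if distance_cm > 70:
        return "forward"   # path is clear, obey the user directly
    elif distance_cm > 40:
        return "slow"      # approach the obstacle at reduced speed
    else:
        return "evade"     # robot takes over and drives around it

print(forward_action(120))  # forward
print(forward_action(55))   # slow
print(forward_action(30))   # evade
```

The direct commands “5”, “6” and “4” bypass this function entirely, which is what keeps the operator in ultimate control.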

As can be seen in the code, we wanted to add a second sensor to measure the distance to the ground, so the vehicle would know when it had reached the end of the platform and would not blindly drive off it. Because the second sensor was defective, we had to cancel this. This part of the code is still in the file, but the distance to the ground is fixed at 7 cm, so it will not cause problems.

Results & conclusion of the prototype

In the end the robot did avoid obstacles, but it could not detect the edge of the platform: the sensor intended for that failed, leaving us with just one sensor. In the future, the single sensor could be mounted on top of a servo, giving a much larger viewing angle. Our robot sometimes missed obstacles and drove into them, because the robot itself is wider than the field of view projected by the sensor's angle and the distance to the object. Below, a video is shown in which the driver only wants to move forward, and it can be seen that the robot maneuvers around obstacles neatly.



The results of the prototype look very promising. The vehicle was able to operate on its own when there was an object in front of it. With more sensors this can be fine-tuned to make it safer. The code should be adjusted for multiple sensors, so that the vehicle always chooses the shortest way to its destination. Another thing we could not achieve with our prototype was giving the robot knowledge of its orientation and position. To make the vehicle closer to a real robot that can autonomously operate on a platform this is essential, but it did not fit in the scope and time of this project. The idea behind the prototype showed a lot of potential, but in order to draw a definite conclusion about the feasibility of the robot, more resources and time are needed.

Conclusion of the project

The project successfully addresses its main goal, which was to design a robot that enables wheelchair-bound people to travel by public transit. The robot is a complete solution from start to end, addressing most if not all problems of the current solution. A questionnaire was used to identify the user needs of disabled people and NS personnel.

The current reservation system (calling the helpdesk) is replaced by integrating the reservation into the NS app, enabling disabled people to plan their trip much closer to the departure time. The OV-chipcard is used to activate the robot, which then triggers the navigation. We have proven the concept of shared control with our prototype, but since the robot is expected to be able to drive autonomously from the start, this is potentially not necessary in the final design. Without the help of NS staff, the self-sufficiency of the disabled is improved. Because this robot can be implemented at all NS stations, the number of wheelchair-accessible routes can be increased. The lights and alarm on the robot, even if fully autonomous, are a literature-proven concept that will improve its presence on the platform. Furthermore, the actual lifting process has already been proven by the existence of mobile wheelchair lifting devices.

The project proposes base guidelines for personal space, lighting, sound, check-in, reservation app integration, shared control navigation, autonomous driving and docking.

There are a couple of complications that need to be taken into account. Firstly, there needs to be a solution for the NS staff whose jobs are impacted by the robotic system taking over this aspect of their work. Secondly, we have only proven the shared control concept of the robot, not the autonomous driving concept. We also came to the conclusion that if the robot is expected to drive autonomously to the train door, there is no need for the disabled person to board the robot at the start; he can instead board it once it is docked to the train door. We used the shared control concept as a solution to accommodate the needs of disabled people asking for more control, and to get a working prototype.


This chapter will give some recommendations on continuation of the project.

In order to improve the solution, the robot first has to be able to autonomously board the train. In our project we only described a concept in which the robot should autonomously board the train. For the solution to be useful, it should be very robust and function 99 percent of the time. Furthermore, in case of a breakdown or failure, the robot should have a manual override accessible to the conductor of the train. That way the robot can at least be decoupled from the train so the train can go on.

In the report we have seen that the railway staff is, to put it mildly, not supportive of the new solution. We believe that in further development the NS should be incorporated more into the solution. That way we may find a solution that gives the disabled more mobility and control while the service staff of the NS is satisfied as well.

In our solution we used the shared control concept to give the disabled more control over their travel, also because we had difficulties with the prototype and were unable to let it drive autonomously. The disabled person also needs to get off the train, so the robot has to find its own way to the train anyway. If the robot does not have to drive each disabled person towards the train and back to the charging station, the capacity will be higher. Say, for instance, three disabled people want to board the train: it would not seem logical to let the robot dock to the train three times. In conclusion, there should be other ways to give the disabled more control. We already gave indications for an app, but during the boarding process they could also have some control; for instance, the lift could be controlled by the disabled person. This is something to continue with.

At the moment we use a visual interface (the app) to communicate with the user. In the end the solution could help people with other handicaps, not just wheelchair users, to travel by train. Further development of the interface in other modalities - say voice control (sound) - could make the solution more versatile.

As a final recommendation the economic feasibility should be investigated, as this report lacks any economic analysis.

Collaboration process

In this section we will discuss the team process and how the team collaborates. Every week a short update is given on what was done during the week and what was discussed in meetings.

4 September

This day our team was formed. We immediately established each other’s strengths, depending on our background. We discussed some ideas and concluded our main idea would revolve around the train environment. Throughout the week, we communicated who would take on what role in terms of the presentation of 11-9. On Wednesday, part of the group met up again to further refine the main concept. It was then decided we would focus on the boarding of a train by disabled people. Karlijn started working on the presentation, and wrote about the subject, objectives, users and approach. Luka maintained the wiki, while Gijs created an elaborate planning by means of a Gantt chart. Tjacco defined the milestones and deliverables. Throughout the week, a new group member, Jeroen, joined. He started creating the questionnaires we are going to use further in this project. On Sunday, we defined the group roles for the coming few weeks: Luka will maintain the wiki in terms of design process and help with the prototype, Karlijn will do qualitative research on the user requirements by means of the questionnaires and maintain the wiki in terms of collaboration process, Jeroen will do literature research on the state-of-the-art in the field, and Tjacco and Gijs will work on the prototype.

11 September

This day we presented our idea. We received some substantive feedback which we immediately incorporated in the planning: this week we will clearly define the RPC’s, after which we will all create a concept. Moreover, we finish all questionnaires, which enables us to start distributing the questionnaires from Tuesday. On Wednesday we meet again to compare the concepts, and refine our idea. We also received feedback saying we should be clear about the scope of the boarding process we would focus on. Due to that, Gijs started working on a block diagram which would map the entire process from start to finish, to gain clarity. We decided we wanted to focus on every part of the process. Jeroen will in this week start doing literature research, to gain insight in the current situation at the NS. All team members are very involved in the process and all work is divided among the group. Clear deadlines are set and processed in the planning.

18 September

In the section above you can read about the plans that were made for week 2. In the following section, the results of that week will be described and we will pinpoint the following steps.

• In week 2 a list of general RPC’s was made; that is, a list that contains the RPC’s that the hypothetical system in real life should adhere to. We decided however that those RPC’s are not applicable to the prototype that we are aiming to build. This week, Luka will define an additional list of RPC’s specific to the prototype.

• On Wednesday, the team met up, and after discussing the concepts we concluded our idea would be largely based on Jeroen’s concept: a lift.

• In addition to the block diagram that Gijs made, the scope of this project will be further defined this week. Moreover, we have picked a specific part of the boarding process that we will focus on, namely, the docking of the robot near the train. This will be further described by Luka. Moreover, Luka will describe the ideal process. Karlijn will describe the process as is.

• Last week, Jeroen indeed started doing literature research and found out that Canada and France already use the lift system that we have come up with. It does not, however, navigate autonomously. This Monday, we discussed this in a meeting with the teachers, and came to the conclusion that this is positive: since the lifting part of the system already exists, we can focus on the docking. Moreover, the existing systems can serve as an inspiration and as a starting point from which to develop our system.

• Karlijn and Tjacco finalized the questionnaires, after which Karlijn started handing out the questionnaires at NS stations. Moreover, questionnaires were distributed via personal networks and Facebook. This has, however, not yielded as many responses as we had hoped for. Because of this, we have decided to extend the deadline by one week, and we will start collecting all answers and interpreting the data in week 3.

• This week, Gijs and Tjacco will do further research on motion planning, motion tracking and obstacle avoidance.

• This week, Jeroen will perform more practical work; besides finishing his literature research, he will inquire at the Innovation Space about what options there are with regard to material and Arduinos, and he will update the planning.

• This Monday, when meeting with the teachers, multiple conclusions were drawn with regard to the team. We openly discussed the collaboration to this day. Karlijn, and other team members, missed a leading figure within the group that has an overview of what everybody is doing during the week and if everybody is meeting their deadlines. Therefore, from now on, every week another group member will be that week’s group leader. He or she will check during the week if all is going as planned, and if everybody can finish his or her work before Sunday afternoon. This week Karlijn is group leader. Moreover, we missed clear deadlines last week and a large part of the group only started working on Sunday. This week, the entire group is therefore expected to finish his/her part before Friday. Friday, we will meet to discuss our work and define some final tasks that can be finished in the weekend. This will allow wiki maintainer Luka to upload all sections before Sunday.

22 September

On the 22nd of September we had a group meeting again. We concluded that from that week on, we were going to meet every week on Monday and Friday, rather than Monday and Wednesday. This gives more room to finish all work during the week and gives the possibility to discuss one’s work with one another. Moreover, on Friday we can determine what should be done in the weekend.

We discussed the work everybody had done until that day, mainly Gijs and Tjacco’s work on position determination using beacons. Moreover, we discussed the materials we think we are going to need, and we aim to have a list with all needed material next week. Jeroen has checked with 4WBB0 if there are any Arduinos left, and will hear back later. Gijs and Tjacco also started coding for the Arduino, which we aim to have checked next Monday in the panel. Jeroen will work on specifying the prototype RPC list with specific measurements and will elaborate on his patent check. Gijs will summarize all literature he has found so far, while Tjacco will do further research on the beacons. Karlijn will process the questionnaires before Wednesday and rewrite the USE part of the wiki and the process description.

29 September

This week, we had a meeting on Monday with the teachers in which we discussed our progress. We received a lot of feedback, which we dealt with this week.

  • We have contacted Michiel van Gorp of Engineering Design and can next week pick up an Arduino to use for this project.
  • Feedback we got involved the question: what does our prototype show with regard to the problem? We discussed this in the group, and concluded that in the current situation we might not be solving the most important, interesting and challenging part of the problem. From now on we are therefore focusing on detection in a dynamic environment. This problem is not only related to the specific domain of trains; it is a more general problem that pertains to the entire social robot domain. We are focusing on how to create a world model and, especially, how to detect static and dynamic objects in the environment and how to differentiate between the two. This week we have already done more research in this area, which yielded several topics:
    • Beacons: by using beacons we can real-time track where the robot is located. Gijs and Tjacco have visited mr. Duarte and from this it became clear we can use the beacons for this project. This means however we have to test the prototype specifically at Duarte.
    • World model: we intend to pre-program static objects in the train environment (e.g. benches). In this way, the robot knows what to avoid.
    • Object detection: the prototype should be able to detect an obstacle. In the prototype it would be too advanced to give objects a label - like person or suitcase. The prototype's goal is simply to avoid a given obstacle whenever it faces one.
    • Encountering humans: in the hypothetical solution we should be able to distinguish humans from, say, a suitcase, so that when the robot encounters different objects it can make different choices. We expect a human to move aside as we give sound and light indications, hopefully deterring them from our path. If they keep standing still, we assume they won't move and the robot will move around them.
  • We also received the following feedback: we should know what the requirements are for an autonomous system in order for it to fully replace humans. We have this week further specified our list of RPC’s, and the results of the questionnaires have yielded additional user requirements.
  • From the meeting with the teachers we received the info that at the train station in Den Bosch sensors indicate where the train is going to be when arriving, and where the doors are. This implies that there is an information system at the NS which knows where a train will stop exactly. This may have a slight margin. This is extremely useful for the project, as this means we can assume the robot can access that information system and use that info for where it should go. For the final centimeters, it can use the beacons located in the doors to dock.
  • We have discussed with our group what the role of the disabled person should be. We have unanimously decided not to give this person a guiding role in the robot, as we can never assume the skills of the person beforehand. What his role will actually be on the other hand, is specified in the wiki under ‘Idealized solution’.

A short overview of what everybody has done this week:

  • Tjacco was group leader, which implies he led the meeting on Friday, and discussed everybody’s progress during the week.
  • Tjacco, Gijs and Luka met up with Ruud van den Bogaert to discuss the possibilities with regard to sensors. He was initially enthusiastic and offered to lend a hand, but later this week he emailed us saying he did not have time to help us and shared a few websites which we could look at for more info on ultrasonic sensors.
  • Gijs and Tjacco like mentioned also visited mr. Duarte to discuss the use of beacons.
  • Karlijn collected all results of the questionnaires and did a thematic analysis on the results, which yielded several user requirements. Moreover, Karlijn updated the collaboration process in the wiki and finalized the process description and user and enterprise analysis.
  • Jeroen and Luka both updated the wiki with all the work that was done this week, and incorporated the feedback in the work on the wiki.
  • Jeroen has also checked the wiki for consistency.
  • Gijs and Tjacco have updated their code for the Arduino.

6 October

This Monday, we again had a meeting with the teachers. In this meeting, we have discussed the results of the questionnaires. An important result was the desired increase of influence on the process by disabled people. This can be interpreted in multiple ways, as is described in our User Analysis above. Together with the teachers we have come up with the concept of shared control. This is an interesting new approach to the robotic system, which takes into account the results of the questionnaires. It is important for a project like this that what we are demonstrating at the end is supported by the results of the questionnaires, which is why we are implementing shared control in our prototype. This has multiple implications for our project, as we need to redesign the concept and codes.

This week, everybody has done the following:

  • Luka has assembled the hardware of the prototype and mounted everything (wheels, motors, etc.)
  • Tjacco, Gijs and Luka have searched for parts for the prototype among the leftover parts of 4WBB0.
  • Gijs has looked at the Arduino and code with Tjacco while Jeroen has researched how to code shared control.
  • On Friday, the team tested Gijs and Tjacco's code on the prototype, which failed: the motors did not work. This is something we need to look into next week.
  • Karlijn has revised the user analysis and incorporated the questionnaire results into the new concept. She also performed a literature study on shared control to check our interpretation.
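The shared-control idea can be sketched roughly as follows: blend the user's steering command with an autonomous obstacle-avoidance command, where the autonomy weight grows as an obstacle gets closer. This is only a hypothetical illustration in plain C++; the function name, thresholds and weighting rule are our own assumptions, not the final prototype code.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical shared-control blend: the closer an obstacle is,
// the more weight the autonomous avoidance command receives.
// Distances are in cm; commands are normalized steering in [-1, 1].
double blendSteering(double userCmd, double avoidCmd, double obstacleCm) {
    const double safeCm = 100.0;     // beyond this, the user has full control
    const double criticalCm = 20.0;  // below this, the robot has full control
    double a;                        // autonomy weight in [0, 1]
    if (obstacleCm >= safeCm)
        a = 0.0;
    else if (obstacleCm <= criticalCm)
        a = 1.0;
    else
        a = (safeCm - obstacleCm) / (safeCm - criticalCm);
    return (1.0 - a) * userCmd + a * avoidCmd;
}
```

With these assumed thresholds, the user keeps full authority in open space and the robot only gradually takes over near obstacles, which matches the questionnaire finding that users want to stay in control.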

13 October

This week, Gijs was group leader, which meant he took notes at every meeting and drew up a planning for the week. On Monday, we had a meeting with the teachers, in which we refined our project. Several questions were asked, all of which we answered in the wiki this week. We have further refined our final prototype, the concept and the requirements of the prototype. Moreover, we have discussed what the prototype will demonstrate with regard to this project's problem statement.

The tasks of everyone this week were as follows:

  • Tjacco has looked further into the connection between the Arduino and laptop. Moreover, he has refined the communication between the Arduino and the sensors.
  • Luka has finished building the hardware; he placed the sensors, LED lights, a power amp, the wheels and a transistor.
  • Gijs has looked into playing music from the Arduino, has coded the lights for the Arduino and has helped Tjacco with his tasks.
  • Jeroen did research on the communication between user and robot in case the robot takes over the steering.
  • Karlijn has researched how the robot should announce its passing to bystanders, in terms of light and music. Moreover, she has written about what happens in case the robot collides with anything, and what happens in case of modifications to the train timetable. Besides that, she has updated the group collaboration process.
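The communication between the Arduino and the sensors boils down to converting each echo pulse into a distance. Assuming HC-SR04-style ultrasonic sensors (which is our assumption; the exact sensor model is not fixed here), the standard conversion is shown below as a plain C++ helper:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper, assuming HC-SR04-style ultrasonic sensors:
// converts the measured echo pulse width (microseconds) into a
// distance in centimetres. Sound travels roughly 0.0343 cm/us, and
// the pulse covers the distance twice (out and back), hence the /2.
double echoToCm(unsigned long echoMicros) {
    return echoMicros * 0.0343 / 2.0;
}
```

On the Arduino itself, the pulse width would come from something like `pulseIn()` on the sensor's echo pin; here the conversion is kept as a standalone function so it can be checked separately.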

23 October

The past two weeks, we have worked on finalizing the project. In week 7, we received the following feedback:

  • The concept of personal space should be considered in the wiki -> this was solved by Karlijn: she performed a literature study on human-robot interaction and, based on that, devised an ideal design of the interaction between our robot and its surroundings.
  • What is the function of the LED arrows, and how should they work? -> this was fixed by Jeroen, Gijs and Karlijn, who researched the topic further and worked it out in more detail in the wiki.
  • What will the robot do with objects it approaches? -> after a group meeting, we decided that when the robot approaches an object, it should drive past it and then hand control back to its user. This is worked out in the wiki by Tjacco.
  • Our sensors did not work at the time; Luka has worked on the prototype to fix this.
  • Jeroen has prepared the presentation for week 8 and worked on the touchscreen display.
  • Karlijn has checked the entire wiki and commented on parts that needed to be fixed by other group members.
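The agreed obstacle behaviour (the robot takes over near an object, steers past it, and then returns control to the user) can be sketched as a small state machine. The states, thresholds and hysteresis below are our own illustrative assumptions, not the code Tjacco wrote for the wiki:

```cpp
#include <cassert>

// Illustrative state machine for the agreed obstacle behaviour:
// normally the user steers; when an obstacle comes close the robot
// takes over, drives past it, and hands control back to the user.
enum class Mode { UserControl, Avoiding };

struct Controller {
    Mode mode = Mode::UserControl;

    // Called every control cycle with the nearest obstacle distance (cm).
    void update(double obstacleCm) {
        const double takeOverCm = 40.0; // robot takes over below this
        const double releaseCm  = 80.0; // control is returned above this
        if (mode == Mode::UserControl && obstacleCm < takeOverCm)
            mode = Mode::Avoiding;
        else if (mode == Mode::Avoiding && obstacleCm > releaseCm)
            mode = Mode::UserControl;   // obstacle passed: hand back control
    }
};
```

The gap between the take-over and release thresholds acts as hysteresis, so noisy sensor readings near a single threshold cannot make the robot rapidly flip between modes.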

After the presentation on Monday of week 8, we again sat down with the group and set some final deadlines:

  • In week 8, we all finalized the wiki with the latest information.
  • In week 8, we will be peer-reviewing one another.

Peer Review

After a meeting on Monday, we had an open conversation about the teamwork of the past 8 weeks. Everyone agreed that the collaboration went smoothly in multiple ways: deadlines were met, after week 2 the group structure was clear, and the quality of the work was high. This resulted in the same individual grade for the entire group: an 8.


References

Autonomous regulations congress. (n.d.). Retrieved September 21, 2017, from:

Autopilot. (n.d.). Retrieved October 27, 2017, from

Beantwoord: Bevindingen proef station 's Hertogenbosch. (n.d.). Retrieved October 27, 2017, from

Brandl, C., Mertens, A. and Schlick, C. M. (2016), Human-Robot Interaction in Assisted Personal Services: Factors Influencing Distances That Humans Will Accept between Themselves and an Approaching Service Robot. Hum. Factors Man., 26: 713–727. doi:10.1002/hfm.20675

Butler, J. T., & Agah, A. (2001). Psychological effects of behavior patterns of a mobile personal robot. Autonomous Robots, 10, 185–202.

Connell, J., & Viola, P. (1990). Cooperative control of a semi-autonomous mobile robot. In Proceedings IEEE International Conference on Robotics and Automation (Vol. 2, pp. 1118–1121).

Dallaway, J. L., & Tollyfield, A. J. (1990). Task-specific people control of a robotic aid for disabled. Journal of Microcomputer Applications, 321–335.

E.V., D. Z. (2017, May 23). Retrieved October 27, 2017, from

Hall, E. T. (1966). The hidden dimension: Man's use of space in public and private. London: The Bodley Head.

Health and safety executive. (2008). Retrieved September 21, 2017, from:

Ivens, L. and Kant, A. (2004). Ontspoord, Gehandicapten bij de NS. Tweede-Kamerfractie SP.

Koay, K. L., Syrdal, D. S., Ashgari-Oskoei, M., et al. (2014). International Journal of Social Robotics, 6, 469.

Koay, K. L., Syrdal, D. S., Walters, M. L., & Dautenhahn, K. (2007). Living with robots: Investigating the habituation effect in participants' preferences during a longitudinal human-robot interaction study (pp. 564–569). Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, August 26–29, 2007, Jeju.

Oishi, S. (2010). The psychology of residential mobility: Implications for the self, social relationships, and well-being. Perspectives on Psychological Science, 5(1), 5-21.

Patent check. Retrieved October 27, 2017, from

Petry, M. R., Moreira, A. P., Braga, R. A. M., & Reis, L. P. (2010). Shared control for obstacle avoidance in intelligent wheelchairs. In 2010 IEEE Conference on Robotics, Automation and Mechatronics, RAM 2010 (pp. 182–187).

Russell, S. J., & Norvig, P. (2014). Artificial Intelligence: A Modern Approach. Pearson.

Special travel needs. (n.d.). Retrieved October 27, 2017, from

Steptoe, A., Shankar, A., Demakakos, P., & Wardle, J. (2013). Social isolation, loneliness, and all-cause mortality in older men and women. Proceedings of the National Academy of Sciences, 110(15), 5797-5801.

Walters, M. L., Syrdal, D. S., Koay, K. L., Dautenhahn, K., & te Boekhorst, R. (2008). Human approach distances to a mechanical-looking robot with different robot voice styles (pp. 707–712). Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, August 1–3, 2008, Munich.

Złotowski, J. A., Weiss, A., & Tscheligi, M. (2012). Navigating in public space: Participants' evaluation of a robot's approach behaviour (pp. 283–284). Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, March 5–8, 2012, Boston, MA.