PRE2022 3 Group5
{| class="wikitable"
!Role
|-
|Vincent van Haaren||1626736
|Human Interaction Specialist
|}
<br />
==Introduction==
In this project we were allowed to pursue a self-defined project, with a focus on USE: User, Society, and Enterprise. Our chosen project is the design of a product. Taking inspiration from our personal experiences, we chose to find a solution to the navigation problems we encounter in the campus buildings of the TU/e. After some research on the topic and after contacting the TU/e Real Estate department, we found that guidance robots for people with visual impairment were in demand, so this was chosen as our topic. More specifically, the problem statement is: ‘Visually impaired people have ineffective means of navigating through the, at times, confusing pathways of campus buildings.’ When researching state-of-the-art electronic travel aids, we found three distinct categories of solutions: robotic navigation aids, smartphone solutions, and wearable attachments. Their pros and cons are described in the table below:
{| class="wikitable"
|+
|Robotic guide dog/mobile robot
|The system gives room for larger hardware, as it does not require the user to carry it
|Complicated mechanics while manoeuvring through stairs and rough terrain
|-
|Robotic Navigation Aids
|Robotic Wheelchair
|Suitable for the elderly and for people with a physical limitation; provides navigation and mobility assistance for elderly visually impaired people who cannot walk on their own, the multi-handicapped, or people who have more than one disabling condition
|Safety remains an issue, as the user’s mobility fully depends on the robotic wheelchair’s navigation; road-crossing and stair-climbing are difficult circumstances in which the reliability of the wheelchair is of extreme necessity
|-
|Smartphone solutions
These devices are intrusive, as they cover the ears and involve the use of the hands, and users are burdened with the system’s weight.
Requires an extended period of training
|}
Sourced from: <ref>Romlay, M. R. M., Toha, S. F., Ibrahim, A. M., & Venkat, I. (2021). Methodologies and evaluation of electronic travel aids for the visually impaired people: a review. ''Bulletin of Electrical Engineering and Informatics'', ''10''(3), 1747–1758. <nowiki>https://doi.org/10.11591/eei.v10i3.3055</nowiki></ref>
Furthermore, another state-of-the-art solution for guiding devices was found: a device which would use electronic waypoints installed in the building to localise the user and relay directions and information about the surroundings<ref>Guiding visually impaired people in the exhibition (researchgate.net)</ref>.
A previous attempt was made at the TU/e (our case study) to use this method, but because it required infrastructure to be installed in every building in which it would work, it was never implemented. Therefore, we’ve decided to discard all solutions that would require such infrastructure.
Wearable attachments have been discarded as they are inherently invasive, meaning the user has to equip them themselves. Furthermore, larger attachments with many sensors are made impossible by weight limits, and wearing such a device during extended meetings is impractical. Any such device also requires some prior knowledge of how to operate it. For all these reasons, we’ve chosen not to pursue wearable attachments.
We’ve decided against smartphone solutions because it would be difficult to make a one-size-fits-all solution due to differing phones and sensors. A slightly more biased reason is that half of our group members are not adept at creating such applications and have no interest in the field. We also worried that we would struggle to create a practical app due to the limitations of phone hardware.
The robotic wheelchair was decided against due to its invasive nature and concerns for the user’s autonomy. Furthermore, this solution would be very bulky, which makes it unsuited for crowded spaces. In addition, the user base will most likely consist of otherwise able-bodied students who do not need such mobility support and might feel uncomfortable using such a device.
A smart cane is not well-suited to guide the user: its small form factor and weight requirement would make inside-out localisation difficult.
The mobile platform guide robot has a few problems besides its price, the most important of which is that it has trouble navigating stairs and rough terrain. Luckily, the robot will (for now) only operate indoors in TU/e buildings. In the presented use case, the TU/e campus has walk bridges connecting buildings and elevators in (almost) all buildings, which mitigates most of the solution’s downsides. These factors make it the perfect place to implement such a guidance robot.
In summary, we chose a robotic guide due to its user accessibility and potential for future improvements. It is a good way for people (with visual impairment or not) to be guided through buildings.
==State of the art==
The most common tools used by visually impaired people are the white cane and the guide dog. The white cane is used to navigate and to identify: it gives tactile information about the environment, allowing the visually impaired to explore their surroundings and detect obstacles. However, the cane can be cumbersome to use, as it can get stuck in cracks or tiny spaces, and its efficiency is limited in bad weather conditions or in a crowd.<ref>''What are the problems that the visually impaired face with the white cane?'' (n.d.). Quora. <nowiki>https://www.quora.com/What-are-the-problems-that-the-visually-impaired-face-with-the-white-cane</nowiki></ref> The guide dog, on the other hand, can guide the user along familiar paths while avoiding obstacles. Guide dogs can also assist with locating steps, curbs, and even elevator buttons, and can keep their user centred when crossing sidewalks, for example.<ref>Healthdirect Australia. (n.d.). ''Guide dogs''. healthdirect. <nowiki>https://www.healthdirect.gov.au/guide-dogs#:~:text=Guide%20dogs%20help%20people%20who,city%20centres%20to%20quiet%20parks</nowiki>.</ref><ref>''What A Guide Dog Does''. (n.d.). Guide Dogs Site. <nowiki>https://www.guidedogs.org.uk/getting-support/guide-dogs/what-a-guide-dog-does/</nowiki></ref> There are a couple of issues with guide dogs, however. They can only work for 6 to 8 years and have a very high cost of training.<ref>''Guide Dogs Vs. White Canes: The Comprehensive Comparison – Clovernook''. (2020, 18 September). <nowiki>https://clovernook.org/2020/09/18/guide-dogs-vs-white-canes-the-comprehensive-comparison/</nowiki></ref> That training also requires constant maintenance, and the dog can get sick. Another potential issue is bystanders petting or taking interest in the dog while it is working, which is a detriment to the handler.<ref>''Guide Dog Etiquette: What you should and shouldn’t do – Clovernook''. (2020, 10 September). <nowiki>https://clovernook.org/2020/09/10/guide-dog-etiquette/</nowiki></ref>
None of these tools can efficiently assist the person in navigating to a specific landmark in an unknown environment.<ref>Guide Dogs for the Blind. (2020, 1 July). ''Guide Dog Training''. <nowiki>https://www.guidedogs.com/meet-gdb/dog-programs/guide-dog-training#:~:text=Guide%20dogs%20take%20their%20cues,they%20are%20at%20all%20times</nowiki>.</ref> That is why a human assistant is currently preferred, or even needed, to perform such a task, for example when walking in a museum.<ref>Guide Dogs for the Blind. (2020b, July 1). ''Guide Dog Training''. <nowiki>https://www.guidedogs.com/meet-gdb/dog-programs/guide-dog-training#:~:text=Guide%20dogs%20take%20their%20cues,they%20are%20at%20all%20times</nowiki>.</ref> As for technological means, there is currently no robot capable of efficiently performing such a task, especially if the environment is a crowded building. However, there are multiple robots that implement parts of this function. In the following paragraphs we have divided them into their own sections for ease of reading.
===Tour-guide robots===
We first begin with the tour-guide robots. These robots are used in places such as museums, university campuses, and workplaces. Their objective is to guide a user to a destination; once there, they most often relay information about the object, exhibition, or room at the destination. In terms of implementation, these robots use a predefined map of the environment, on which digital beacons mark the landmarks and points of interest. They also often detect and avoid obstacles using laser scanners (such as LiDAR), RGB cameras, depth cameras, or sonars. This research paper<ref name="Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques">Debajyoti Bose, Karthi Mohan, Meera CS, Monika Yadav and Devender K. Saini, Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques (2021), https://www.tandfonline.com/doi/epdf/10.1080/14484846.2021.2023266?needAccess=true&role=button</ref> goes in depth on the advances in this field over the past 20 years, the most notable robots being "Cate", "Konrad" and "Suse". As our goal is to guide visually impaired people throughout the TU/e campus, this field of robotics is of utmost interest for the navigation system of a guidance robot.
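The map-based planning these tour-guide robots rely on can be sketched very compactly. Below is a minimal A* path planner over a hand-coded occupancy grid; the floor map, coordinates, and uniform step cost are illustrative assumptions of ours, not taken from any of the robots above.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle).

    grid  : list of rows, indexed grid[y][x]
    start : (x, y) cell
    goal  : (x, y) cell
    Returns a list of (x, y) cells from start to goal, or None.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]   # (f-cost, g-cost, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:                       # walk parents backwards to build path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g + 1
                if ng < g_cost.get((nx, ny), float("inf")):
                    g_cost[(nx, ny)] = ng
                    heapq.heappush(open_set, (ng + h((nx, ny)), ng, (nx, ny), cur))
    return None

# Toy floor map: a wall with one gap; start bottom-left, landmark top-right.
floor = [[0, 0, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
route = astar(floor, start=(0, 2), goal=(3, 0))
```

In a real tour-guide robot the grid would come from a mapped floor plan and the goals from the beacon/landmark database; the planner itself stays essentially this shape.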
====Aid technology for the visually impaired====
This section is split into two. First, we cover guidance robots for the visually impaired, after which we cover other technological aids that have been created for this user group.
=====Guidance robots=====
Guidance robots for the visually impaired are very similar to the tour-guide robots. They often use much the same technology to navigate through the environment (a predefined map with landmarks, plus obstacle detection and avoidance). What differentiates them from tour-guide robots is that their shape and functionality are adapted to better suit the needs of the visually impaired. The robots have handles or leashes which the visually impaired can hold, much the same as a guide dog or a white cane. As the user cannot see, the designs incorporate ways of communicating the intent of the robot to the user, as well as ways of guiding the user around obstacles together with the robot. One example of such a design is the CaBot<ref name="CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People">João Guerreiro, Daisuke Sato, Saki Asakawa, Huixu Dong, Kris M. Kitani, Chieko Asakawa, Designing and Evaluating an Autonomous Navigation Robot for Blind People (2019), https://dl.acm.org/doi/pdf/10.1145/3308561.3353771</ref>, a suitcase-shaped robot that stands in front of the user, uses a LiDAR to analyse its environment, and incorporates haptic feedback to inform the user of its intended movement pattern. Another possible design is the quadruped robot guide dog<ref name="The Fuzzy Control Approach for a Quadruped Robot Guide Dog">https://link.springer.com/article/10.1007/s40815-020-01046-x</ref>, which, being based on Spot, could be used as a robotic guide dog given some adjustments. Finally, there is also a design for a portable indoor guide robot<ref name="Design of a Portable Indoor Guide Robot for Blind People">https://ieeexplore.ieee.org/document/9536077</ref>: a low-cost guidance robot which also alerts the user to obstacles in the air.
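To illustrate how a CaBot-style handle might communicate intent, here is a hypothetical mapping from planned manoeuvres to vibration patterns. The intent names, actuator labels, and timings are entirely our own assumptions for the sketch, not CaBot's actual interface.

```python
# Hypothetical robot-intent -> handle-feedback table. Each pattern is a list
# of (actuator, duration_in_seconds) pulses fired in sequence. All names and
# timings are illustrative assumptions, not a real device protocol.
HANDLE_PATTERNS = {
    "turn_left":  [("left_vibrator", 0.3)],
    "turn_right": [("right_vibrator", 0.3)],
    "slow_down":  [("both", 0.15), ("both", 0.15)],  # double short pulse
    "hard_stop":  [("both", 1.0)],                   # long continuous pulse
}

def signal_intent(intent):
    """Return the actuation sequence for an intent.

    Unknown intents fall back to the hard-stop pattern, on the assumption
    that the safest default is to warn the user the robot may halt.
    """
    return HANDLE_PATTERNS.get(intent, HANDLE_PATTERNS["hard_stop"])
```

The point of such a table is that the set of signals stays small and learnable, mirroring the fixed vocabulary of cues a guide dog handler relies on.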
====Crowd-navigation robots====
As our design has the objective of guiding the user through a university campus, it is reasonable to expect crowds of students at certain times of the day. For our design to be helpful, it needs to handle such situations efficiently. We therefore took inspiration from the smaller robotics field of crowd navigation. The goal of these robots is exactly that: to continue moving through a crowd, rather than freeze up every time there is an obstacle in front of them. Relevant research includes the paper "Unfreezing the Robot: Navigation in Dense, Interacting Crowds"<ref name="Unfreezing the Robot: Navigation in Dense, Interacting Crowds">Unfreezing the Robot: Navigation in Dense, Interacting Crowds, Peter Trautman and Andreas Krause, 2010, https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5654369</ref> and a robot that navigates crowds using deep reinforcement learning<ref name="Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning">Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning, Changan Chen, Yuejiang Liu, Sven Kreiss and Alexandre Alahi, 2019, https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8794134</ref>.
==User scenarios==
To get a better feeling for the problem and the possible solutions, two user scenarios were made that show the impact of a guide robot on visually impaired people who want to move through unknown, crowded spaces. The designs mentioned in these stories are not what we ended up making, but the intended goal is the same: both these stories and our eventual solution try to expand the navigational tools a guidance robot has in crowded spaces. It is important to note that some parts of the robot described here fall outside the scope of the exact problem being solved.
===Physical contact through crowded spaces===
Jack is partially sighted and can see only a small part of what is in front of him. He has recently been helping fellow students with field tests of a robot guide. Last month he worked with a robot called Visior, which helps steer him through his surroundings. Visior is inspired by, and shares its physical features with, CaBot.
When Jack used Visior to get to the library to pick up a print request, he had to pass through a moderately crowded Atlas building, since an event was going on. This went mostly as expected: not too fast, and stopping semi-periodically because of people walking or stopping in front of Visior. The robot was strictly disallowed from purposely making physical contact with other humans. Jack knows this, so he learned to step up in these situations and kindly ask the people in front to make way. This used to happen less when he used his white cane, since people could easily identify him and his needs. After Jack arrived at the printing room in MetaForum, he picked up his print request and handily put the batch of paper on top of his guiding robot, so he didn’t have to carry it himself.
On his way back he almost fell over his guiding robot when it suddenly stopped as a hurried student ran by. Luckily, he did not get hurt. When Jack came home after this errand, he crashed on his couch after an exhausting trip of anticipating the robot’s quirky behaviour.
The next day the researchers and developers of Visior came to ask about his experiences. Jack told them about his trip to the library. The developers thanked him for his feedback and started working on improving Visior.
This week they came back with the new and improved Visior. This version has a softer exterior and now rides in front of Jack instead of by his side. The developers have made it capable of safely bumping into people without causing harm. They also made it able to tell Jack when it thinks it might have to stop suddenly, to put him a bit more at ease when travelling together.
The next day, Jack used it to make another trip to the printing space in MetaForum to compare the experience. When passing through the crowded Atlas again (there somehow always seems to be an event there), he was pleasantly surprised. He found it easier to trust Visior now that it was able to communicate the points in the trip where it thought they might have to stop or bump into other pedestrians. For example, when they came across a slightly more crowded space, Visior guided Jack to walk alongside a flow of other pedestrians and made him aware of the slightly uncertain nature of their surroundings. Then, when a student suddenly tried to cross their path without looking, Visior unfortunately bumped into their side and gradually slowed their pace down to a halt. Jack obviously felt the bump but was easily able to stay stable thanks to the prior warning and the less drastic decrease in speed. The student, now naturally aware of something moving in their blind spot, immediately stepped out of the way and looked at Jack and Visior, seeing the sticker stating that Jack was visually impaired. Jack asked whether they were alright, to which they responded that they were fine, after which both went on their way. After picking up his print, Jack headed back home. On his way back he had to pass through the small bridge between MetaForum and Atlas, in which a group of people were now talking, blocking a large part of the walking space. Visior guided Jack to a small traversable path open beside the group, accepting the risk that a person there might move slightly and step into their path. Luckily, Visior and Jack could squeeze by without any trouble, and the rest of the way home was uneventful.
When the developers of Visior came back the next day to check up on him, Jack told them the experience was leagues better than before. He found walking with Visior less exhausting than it had been, and found its behaviour more human-like, making it easier to work with.
===Familiar guidance advantage===
Meet Mark from Croatia.
He is a minor student following Mathematics courses and lives on (or near) campus.
Mark is severely near-sighted; born with the condition, he has never seen very well. Mark is optimistic but chaotic.
He likes his study and likes playing the piano.
Notable details:
Mark makes use of a white cane and audio-visual aids to assist with his near-sightedness.
He just transferred to the TU/e for a minor and doesn’t know many people yet; he will only be here for a short time. He has a service dog at home, but does not have the resources, time, or connections to care for it here, so he left it at home.
Indoors, Mark finds it hard to use his cane because of crowded hallways, and he dislikes apologising when hitting someone with his cane or being an inconvenience to his fellow students. Mark can read and write English fine, but still feels the language barrier.
In a world without our robot, Mark might have to navigate like this:
Mark has just arrived for his second day of lectures and will be going to the wide lecture hall Atlas -0.820. Mark again managed to walk to Atlas (as we will not be tackling exterior navigation) and uses his cane and experience to navigate the stairs and revolving door of Atlas, using it to determine the speed and size of the revolving element to get in, and to determine the position of the doors and the opening<ref>https://youtu.be/mh5L3l_7FqE</ref>.
Once inside, he is greeted by a fellow student who noticed him navigating the door. Mark had already started concealing the use of his cane, as he doesn’t like the attention, so the university staff didn’t notice him. Luckily, his fellow student is more than willing to help him get to his lecture hall. Unfortunately, the student is not well versed in guiding the visually impaired, and it has gotten busy with students changing rooms.
Mark is dragged along to the lecture hall by his fellow student, bumping into other students who don’t notice he cannot see them, as his guide hastily pulls him past. Mark almost loses his balance when his guide slips past some other students, narrowly avoiding a trashcan while dragging Mark by the arm. Mark didn’t see the trashcan, which is not at eye level, and collides with its metal frame while trying to copy his guide’s movement to dodge the other students. He is luckily unharmed and manages to follow his guide again, until he is finally able to sit in the lecture hall, ready to listen for another day.
The next day, a student sees Mark struggling with the door and shows Mark a guide robot. The robot has the task of getting Mark to the lecture hall he needs to be in. It starts moving and communicates its intended speed and direction through feedback in the handle. As a result, Mark can anticipate the route the robot will take, similar to how a human guide would apply force to Mark’s hand to change direction.
The robot reaches the crowd of students moving through the busy part of Atlas. Its primary objective is to get Mark through this crowd, and even though many students notice the robot passing, it still uses clear audio indications to warn students that it will be moving through, and notifies Mark through the handle that it is entering an alternate mode. Mark notices and becomes alert, as he also feels that the robot reduces the number of turns it makes, navigating the crowd along the most straightforward route it can take. Mark likes this: it makes it easy for him to follow the robot, and for others to avoid them.
Still, a sleepy student bumps into the robot as it is crossing. Luckily, the robot is designed for contact with other students: its rounded shape, enclosed wheels (and other moving parts), and softened bumpers prevent harm. The robot does, however, slightly reduce its pace and makes an audible noise to let the sleepy student know they bumped into it too hard. Mark also notices the collision, partially because the bump makes the robot shake a little and lose a bit of pace, but mainly because his handle clearly and alarmingly notifies him. Mark also knows the robot will continue, as the feedback of the handle indicates that it is not stopping.
After the robot gets through the crowd, it makes it to the lecture hall. It parks just in front of the door and tells Mark to extend his free hand slightly above hip level, telling him they have arrived at a closed door that opens towards them, swinging to his right, similar to how a human guide would, so Mark can grab the door handle and, with the support of the robot, open the door. The robot precedes Mark slowly into the space; it goes a bit too fast, though, and Mark applies force to the handle, pulling it slightly in his direction. The robot notices this and waits for Mark.
After they enter the lecture hall, the robot asks the lecturer to guide Mark to an empty seat (and may provide instructions on how to do so). When Mark is seated, the robot returns to its spot near the entrance, waiting for the next person.
==Problem statement==
The previous problem statement was quickly found to be too broad. During the research into the state of the art, it was found that the problem consists of a plethora of subproblems which all have to work in tandem to create a functional solution. For this reason, it is important to scope the problem as much as possible to create a manageable project. Throughout the research on guidance robots, the following subproblems were identified:
*Localization of the guide
*Identification of obstacles or other persons
*Navigation in sparse crowds
*Navigation in dense crowds
*Overarching strategic planning (e.g. navigating between multiple floors or buildings)
*Interaction with infrastructure (e.g. doors, elevators, stairs, etc.)
*Effective communication with the user (e.g. the user being able to set a goal for the guide)
We decided to focus on ‘navigation of guidance robots in dense crowds on the TU/e campus’. This was chosen because such a ‘skill’ (an ability the guide can perform) is necessary for navigation on campus. Typical scenarios in which this skill would be useful for a typical student are on-campus events, navigating in and out of crowded lecture rooms, or simply a crowded bridge or hallway. Besides its necessity, it is also an active field of study without a clear final solution yet<ref name=":3">Mavrogiannis, C., Baldini, F., Wang, A., Zhao, D., Trautman, P., Steinfeld, A., & Oh, J. (2021). Core challenges of social robot navigation: A survey. ''arXiv preprint arXiv:2103.05668''.</ref>. Mavrogiannis et al.<ref name=":3" /> define the task of social navigation as ‘to efficiently reach a goal while abiding by social rules/norms’.
A reformulation of our problem statement thus results in the following research question: ‘How should robots socially navigate through crowded pedestrian spaces while guiding visually impaired users?’
To work on this problem, the remaining functions in the list above are assumed to be working.
===Scoping the problem===
At this time, the first meeting with assistant professor César López was held. Mr. López is part of the Control Systems Technology group of the TU/e and focuses on designing navigation and control algorithms for robots operating in a semi-open world. His most important recommendation in our meeting was that the navigation problem should be split up even further, and that a more precisely defined crowd should be used to define the guide’s behaviour. He laid out that different crowds have different qualities. Crowds can roughly be split into chaotic crowds, where there is no exact order and behaviour is less predictable (e.g., an airport where everyone needs to go in different directions), and structured crowds, where behaviour ''is'' predictable, such as crowds walking in a hallway. The simplest structured crowd is one where all people walk in a single direction. This kind of behaviour is also described in a paper by Helbing et al.<ref>Helbing, D., Buzna, L., Johansson, A., & Werner, T. (2005). Self-Organized Pedestrian Crowd Dynamics: Experiments, Simulations, and Design Solutions. ''Transportation Science'', ''39''(1), 1–24. <nowiki>https://doi.org/10.1287/trsc.1040.0108</nowiki></ref>, which amongst other things describes crowd dynamics. The same paper also describes how a crowd with only two opposing walking directions self-organizes into two side-by-side opposing ‘streams’ of people.
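The self-organization Helbing et al. describe can be reproduced with a very small social force model: each pedestrian accelerates towards a preferred velocity and is exponentially repelled by the others. The sketch below uses illustrative parameter values (tau, A, B), not the calibrated values from the paper.

```python
import math

def social_force_step(agents, dt=0.1, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a minimal Helbing-style social force model.

    agents: list of dicts with 'pos', 'vel' (2D lists) and 'goal_vel',
    the preferred velocity. A and B scale the exponential repulsion
    between pedestrians; the values here are illustrative only.
    """
    new_agents = []
    for i, a in enumerate(agents):
        # Driving force: relax towards the preferred velocity.
        fx = (a["goal_vel"][0] - a["vel"][0]) / tau
        fy = (a["goal_vel"][1] - a["vel"][1]) / tau
        # Repulsive force from every other pedestrian.
        for j, b in enumerate(agents):
            if i == j:
                continue
            dx = a["pos"][0] - b["pos"][0]
            dy = a["pos"][1] - b["pos"][1]
            dist = math.hypot(dx, dy) or 1e-9
            mag = A * math.exp(-dist / B)
            fx += mag * dx / dist
            fy += mag * dy / dist
        vel = [a["vel"][0] + fx * dt, a["vel"][1] + fy * dt]
        pos = [a["pos"][0] + vel[0] * dt, a["pos"][1] + vel[1] * dt]
        new_agents.append({"pos": pos, "vel": vel, "goal_vel": a["goal_vel"]})
    return new_agents

# Two pedestrians walking towards each other in a corridor: the repulsion
# pushes them sideways, the seed of the lane formation Helbing describes.
crowd = [{"pos": [0.0, 0.01], "vel": [1.0, 0.0], "goal_vel": [1.0, 0.0]},
         {"pos": [2.0, -0.01], "vel": [-1.0, 0.0], "goal_vel": [-1.0, 0.0]}]
for _ in range(50):
    crowd = social_force_step(crowd)
```

With many agents and two opposing goal directions, running this update repeatedly tends to sort the crowd into the side-by-side opposing streams mentioned above.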
López then expanded on this finding by noting that the robot could roughly find itself in three distinct scenarios in such a crowd: it could walk along with a unidirectional crowd, it could walk in the opposite direction of a unidirectional crowd, or it could walk perpendicular to the unidirectional crowd. All of these have an application when navigating the university. López recommended that our research focus on only one of these scenarios, since they all need different behavioural models unless a general navigation method is found.
To summarize, for the guide to efficiently navigate in tight spaces like hallways, or to a lesser extent doorways, it must be able to navigate dense crowds which behave in a unidirectional manner. In navigating such a crowd, different approaches can be taken depending on the walking directions of the crowd and the guide.
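The three scenarios can be told apart mechanically from the angle between the guide's intended heading and the crowd's dominant flow direction. A minimal sketch, assuming the flow vector is estimated elsewhere (e.g. as the mean velocity of tracked pedestrians); the 45-degree cone is an illustrative threshold, not a value from the literature.

```python
import math

def classify_flow(robot_heading, crowd_flow, cone_deg=45.0):
    """Classify the crowd scenario from two 2D direction vectors.

    Returns 'along', 'against', or 'perpendicular' depending on the angle
    between the robot's intended heading and the crowd's dominant flow.
    """
    dot = robot_heading[0] * crowd_flow[0] + robot_heading[1] * crowd_flow[1]
    norm = math.hypot(*robot_heading) * math.hypot(*crowd_flow)
    # Clamp to [-1, 1] to guard acos against floating-point drift.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle <= cone_deg:
        return "along"          # walk with the stream
    if angle >= 180.0 - cone_deg:
        return "against"        # walk facing the oncoming stream
    return "perpendicular"      # cross the stream

# The robot heads east; three example crowd flows:
assert classify_flow((1, 0), (1, 0)) == "along"
assert classify_flow((1, 0), (-1, 0.2)) == "against"
assert classify_flow((1, 0), (0, 1)) == "perpendicular"
```

Each label would then select the corresponding behavioural model, which is exactly the split López suggested.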
On López’s recommendation, it was decided to narrow the behavioural research down to walking along with a unidirectional crowd, since this is the most standard case.
To conclude this section, the final research question is defined as: ‘How should robots socially navigate through unidirectional pedestrian crowds while guiding visually impaired users?’<br />
==USE analysis of the crowd navigation technology==
This section discusses the relevance and the impact of a safe crowd-navigating guidance robot on users, society at large, and enterprises.
===Users===
The robot has a number of possible users, but this design distinguishes two types:
*The visually impaired handler of the robot
*The other persons participating in the crowd
In the Netherlands around 2.7% of the population has severe vision loss, including blindness<ref>Country - The International Agency for the Prevention of Blindness (iapb.org)</ref>. This is over 400 thousand people who, in a new environment where only a room number is given, do not know which route to walk. There are aids such as a guide dog or cane, but those prevent blind people from colliding with the environment rather than guiding them to an unknown location in new surroundings. So, a device is needed that guides visually impaired people to a location on campus they have never visited, such as meeting room Metaforum 5.199. To guide them to this meeting room, navigation through crowds is needed.
As mentioned, modern robots have a freezing problem when walking through crowds, which is not optimal in the sometimes dense crowds on the TU/e campus. That is why nudging, and sometimes bumping, is needed. The challenge here is to guide the handler as smoothly as possible while occasionally nudging and bumping into third persons.
As the plan was to design a physical robot with inspiration taken from the CaBot, a lot of inspiration is taken from its user research among visually impaired people. In addition, research has been done into guide dogs and their ways of guiding.
For third persons around the robot and its handler, research has been done mainly focused on the touching and nudging aspect of the robot: what reactions a touching robot may elicit, the safety of this concept, and the ethics of robotic touch.
Secondary users include institutions that provide the robot for visually impaired people to navigate through their buildings, such as universities, government buildings, shopping malls, offices, or museums.
===Society=== | |||
As mentioned above, 2.7% of the population suffers from severe vision loss; however, there are many more benefits to a robot that can safely and quickly navigate through a crowd. Any robot that has a mobile function in society will at some point encounter a crowd, whether that is a dense or sparse crowd, or simply people blocking an entry or hallway. Consider robots that work in social services such as restaurants, delivery robots, or guide robots for people other than the visually impaired, for example at museums or shopping malls.
For these robots, it is important that they can safely traverse crowds in the quickest way possible. The solution investigated and presented here is a step in that direction. Of course, each of these robots would need a different design in order to properly execute its function, but the strength lies in the social algorithm, by which the robot moves through a crowd in a different way than robots do now.
Specifically for visually impaired people, such navigation improves their accessibility and inclusion in society. Implementing a robot such as this will allow them to be a more integral part of society without having to rely on other people.
===Enterprise=== | |||
For enterprises that might employ these robots there are two advantages. The use of the robot will enable visually impaired customers to have better access to any services the companies provide. In addition, these enterprises will have a competitive advantage over competitors that do not provide such a robot or service. For example, a shopping mall would improve its accessibility, which would translate into more customers, whereas government buildings improve general satisfaction.
Specifically for universities such as the TU/e, next to attracting more students, it improves their public image by showing the effort made to make higher education possible and easier for all people. An advantage over other solutions, such as a human guide, is that no new employees need to be trained. No big infrastructural changes, such as extra cameras or sensors throughout the building, are needed, as would be the case for another type of robot or navigator. And lastly, there is no issue of a failing connection with, for example, a smartphone.
==Project Requirements, Preferences, and Constraints== | |||
===Creating RPC criteria=== | |||
===='''Setting requirements'''==== | |||
The most important thing in building a robot operating in public spaces is to make it complete its tasks in a safe manner, not harming bystanders or the user themselves. Most hazards in robot-human interactions (or vice versa) in pedestrian spaces derive from physical contact<ref name=":1">Salvini, P., Paez-Granados, D. & Billard, A. Safety Concerns Emerging from Robots Navigating in Crowded Pedestrian Areas. ''Int J of Soc Robotics'' 14, 441–462 (2022). <nowiki>https://doi.org/10.1007/s12369-021-00796-4</nowiki></ref>. This problem is even more present when working in crowded spaces, where physical contact is impractical or impossible to avoid. Therefore, the robot has to be made physically safe: typical touches, swipes, and collisions are made non-hazardous. This term ‘physically safe’ will be abbreviated to ‘touch safe’ to make its meaning more apparent.
If the robot somehow exhibits unsafe behaviour the user should be able to easily stop the robot with an emergency stop. Because the robot is able to make physical contact and apply substantial force, it becomes even more paramount that rogue behaviour is easily stopped if it occurs. | |||
When interacting with the user the robot should make them feel safe and thus allow trust in the robot. If the user does not feel safe, they cannot trust the robot and might become unnecessarily anxious or stressed, with the result that they may avoid its services. Besides this, the user might display unpredictable or counter-productive behaviour, e.g., walking excessively slowly or not following the robot. To this end the robot should be able to communicate its intent to the user so that they won’t have to be on edge all the time.
For the robot to be viable in practice there are some restrictions, such as keeping the robot relatively cheap: the budget is not unlimited, and competing solutions like human guides exist for a set price, so too high a price would make robot guides obsolete. Our use case also restricts infrastructural modifications to the campus buildings of the TU/e, as a previous solution was rejected for this reason; installing waypoints all over the buildings was too much of an investment.
===='''Setting preferences'''==== | |||
The robot should not slow down its user when avoidable, so an average speed of 1 m/s (the average walking speed of visually impaired users<ref name=":2" />) would be a good goal.
For the robot to reach its goal efficiently it should avoid stopping for people. Further reasons to avoid stopping are that it lets the user walk at a constant speed, requiring less mental strain, and that it avoids hazards which occur when stopping in pedestrian spaces, such as surprising and being hit by the person behind the user<ref name=":1" />.
===='''Setting constraints'''==== | |||
For the robot to operate in our specified use case it should be able to navigate the campus. This involves being able to navigate narrow walk bridges and wide-open spaces with different walking routes. Interaction with elevators or stairs will not be a focus of this research.
===<u>RPC-list</u>=== | |||
====Requirements==== | |||
*Safety
**Touch proof
**Does not harm bystanders or the user
**Installed emergency stop
*User feedback/interaction
**Should give feedback about intentions to user
**Robot must be able to receive feedback and information from user
**Handler should feel safe based on interaction with robot
*Implementable
**Relatively cheap
**No infrastructural changes in buildings
====Preferences====
*1 m/s (3.6 km/h) walking speed should be reached<ref name=":2">CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People (acm.org)</ref>
*Does not stop for people unnecessarily
====Constraints====
*Environment (TU/e campus)
**Narrow walk bridges/hallways
**Big open spaces
==The solutions== | |||
In this section the worked-out solution to the problem statement is given. The solution consists of a physical and a behavioural description of the robot. These two factors influence each other: the design has an impact on how the robot should behave while socially navigating through a crowd, while the way it navigates through a crowd determines specific requirements for the design. Together they give a clear answer to the research question of how a robot with this specific design should socially navigate through a unidirectional crowd while guiding visually impaired users.
This chapter consists of a detailed explanation of the physical design of the robot. The robot is designed to adhere as closely as possible to the RPC-list. After the design is defined, the corresponding behaviour will be defined using scenarios. These scenarios are used to explain the behaviour we would want to see and expect. In a broader sense, this should demonstrate how the method of navigation can be utilised to effectively and safely navigate through dense crowds.
===Design=== | |||
In this chapter the design of the robot model is documented. The main focus of the design is safety and the communication of nudging, both to the visually impaired handler and to third persons.
For the design of the robot the main inspiration is the CaBot<ref name="CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People" />. This is essentially a suitcase design: a rectangular box with 4 motorized wheels, with all the hardware inside the box. Interestingly, it also has a sensor on its handle for vision from a higher perspective. This design is rather simple, and the flat terrain on the TU/e campus should pose no problem for the wheels. The CaBot excels in guiding people to a new location but is not made to work through crowds. With regard to safety, the body design has been altered for nudging and bumping into people. Also, the handle design has been revamped for better communication to the user.
====Handle design====
[[File:Guide arm front.png|thumb|Front view of the arm design of the guide robot to which the guided can grab on. The speed switch can be seen on the left. The settings are denoted using written numbers instead of braille because of limitations of the CAD software.]] | |||
[[File:Guide arm side.png|alt=Back view of the arm design of the guide robot to which the guided can grab on. Interface utilities have not been added yet.|thumb|Back view of the arm design of the guide robot to which the guided can grab on. The upper arm, connecting to the hand-hold, has a suspension mechanism and a hinge.]] | |||
As the robot's behaviour is focused on traversing crowds of people, an important question arises: how should it communicate its direction to the user? Any audible direction will quickly interfere with the sounds of the surroundings, which can result in missing the entire message or cause confusion. Although a headset might allow for clearer communication, this is still not ideal. Therefore, the easiest way to provide feedback to the user is through the handle. The robot has a few functions that it needs to communicate to the user, or that need to be controllable by the user:
*Speed
**Setting a faster or slower speed
**Communicating slowing down or accelerating
**Emergency stop
*Direction
**Turning left
**Turning right
All of these functions can be placed inside the handle, while designing for minimal strain on the user's active control. The average breadth of an adult male hand is 8.9 cm<ref>''ANTHROPOMETRY AND BIOMECHANICS''. (n.d.). <nowiki>https://msis.jsc.nasa.gov/sections/section03.htm</nowiki></ref>, which means that the handle needs to be big enough for people to hold on to while also incorporating the different sensors and actuators. For white canes, the WHO<ref>WHO. (n.d.). ASSISTIVE PRODUCT SPECIFICATION FOR PROCUREMENT. At ''who.int''. <nowiki>https://www.who.int/docs/default-source/assistive-technology-2/aps/vision/aps24-white-canes-oc-use.pdf?sfvrsn=5993e0dc_2</nowiki></ref> has presented a draft product specification in which the handle should have a diameter of 2.5 cm, which will be used for the handle of the robot as well. Since the robot functions similarly to a guide dog, the handle will have a design similar to the harnesses used for guide dogs, meaning a perpendicular, though not curved, handle that will stop in place if released.<ref>dog-harnesses-store.co.uk. (n.d.). ''Best Guide Dog Harnesses in UK for Mobility Assistance''. <nowiki>https://www.dog-harnesses-store.co.uk/guide-dog-harness-uk-c-101/#descSub</nowiki></ref> To comfortably accommodate the controls and sensors described below, the total length of the handle will be 20 cm.
The handle, which is connected to the robot, will provide automatic directional cues without additional sensors or actuators. This simplifies the robot and makes it act more like a guide dog. As for speed, three systems will be implemented: the emergency stop, feedback about the acceleration and deceleration of the robot, and the speed control of the user. The emergency stop can be a simple sensor in the handle that detects whether the handle is currently being held; if not, the robot will automatically stop moving and stay in place. The speed can be regulated via a switch-like control, visible in the CAD render on the right. When walking with a guide dog, the selected walking speed of visually impaired people is about 1 m/s<ref name=":2" />, so with five settings of 0 m/s, 0.5 m/s, 0.75 m/s, 1.0 m/s, and 1.25 m/s, the user can set their own speed preference. To give feedback about the current setting, the different numbers will be detailed in braille. Furthermore, changing settings will encounter some resistance and a tactile ‘click’ instead of being a smooth transition. The user can at any time use their thumb, or any other finger, to quickly check the position of the switch and determine the speed setting. The ‘click’ provides extra security that the speed will not be accidentally adjusted without the user being aware of it. To this end, a new setting will only affect the actual walking speed after a short delay, giving the user time to revert any changes.
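As a minimal sketch, the delayed application of a new speed setting could look as follows. The five detent values come from the text above; the class, method names, and the 2-second confirmation delay are hypothetical assumptions.

```python
# The five detent positions described above (m/s).
SPEED_SETTINGS = [0.0, 0.5, 0.75, 1.0, 1.25]
CONFIRM_DELAY = 2.0  # seconds before a new setting takes effect (assumed value)

class SpeedSwitch:
    """Tracks the detent position and applies changes only after a short
    delay, giving the user time to revert an accidental adjustment."""

    def __init__(self):
        self.active_setting = 0    # index currently driving the robot
        self.pending_setting = 0   # index the switch is physically at
        self.change_time = None    # timestamp of the last switch movement

    def move_switch(self, index, now):
        """User clicks the switch to a new detent at time `now` (seconds)."""
        self.pending_setting = max(0, min(index, len(SPEED_SETTINGS) - 1))
        self.change_time = now

    def target_speed(self, now):
        """Speed (m/s) the drive controller should use at time `now`."""
        if self.change_time is not None and now - self.change_time >= CONFIRM_DELAY:
            self.active_setting = self.pending_setting  # delay elapsed: commit
            self.change_time = None
        return SPEED_SETTINGS[self.active_setting]
```

Moving the switch back to its previous detent within the delay window simply overwrites the pending setting, so an accidental change never reaches the wheels.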
Lastly, the robot might, for whatever reason, have to slow down while walking through the crowd: for obstacles, for other people, or to properly go with the flow of the crowd. Since this falls outside the speed setting, the user must be made aware of the robot's actions. A simple piezo haptic actuator can do the trick. Placed in the middle of the handle, it will be easily detected. A code for slowing down, for example a pulsating rhythm, and a code for speeding up, a continuous vibration, will convey the actions of the robot. Of course, this is in addition to the physical pull that the user feels on the handle via the arm. However, because trust is so important in human-robot interactions, this additional feedback from the robot increases the confidence of the user when using it.
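The two vibration codes could be generated as sketched below. The pulsating/continuous distinction is from the text; the exact pulse timing and the function name are assumptions for illustration.

```python
def haptic_amplitude(t, robot_action):
    """Drive signal (0..1) for the piezo actuator in the handle at time t (s).

    A pulsating rhythm signals slowing down; a continuous vibration signals
    speeding up. The 0.25 s on / 0.25 s off pulse timing is an assumed value.
    """
    if robot_action == "decelerating":
        return 1.0 if (t % 0.5) < 0.25 else 0.0  # pulsating rhythm
    if robot_action == "accelerating":
        return 1.0                                # continuous vibration
    return 0.0                                    # steady speed: no signal
```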
<br /> | |||
[[File:Arm design sketches.jpg|alt=3 sketches of different designs of an arm for a guidance robot|thumb|3 sketches of different designs of an arm for a guidance robot]] | |||
====Arm design==== | |||
Multiple designs were considered. The arm connects the handle to the body; it is important here that the handle height can be changed. One thing that was added in the name of safety is suspension, so that the movements of the robot would not jerk the arm of the guided if it were to suddenly change speed, for example when bumping or nudging. Most design iterations revolved around how to integrate the suspension.
The first design was a straight pole from the robot body to the guided arm (as can be seen in the top sketch in the figure to the right). A problem we could see was that if the robot were to stop suddenly, it would push the arm slightly up instead of compressing the suspension. To solve this problem a joint was introduced in the middle of the arm (as can be seen in the middle sketch in the figure to the right). An alternative solution was to have the suspension only act horizontally and internalize it (as can be seen in the bottom sketch). This would allow the pole to have the same design as the first sketch without compromising on the suspension behaviour. Another plus would be that the pole would be marginally lighter due to this suspension being moved inwards. | |||
We have chosen the second design, as it has the intended suspension behaviour while remaining as simple as possible. This allows the mechanism to be constructed from mostly off-the-shelf parts, reducing the cost.
====Body Design==== | |||
For the body three main designs were considered: a square, a cylindrical form, and a cylinder whose diameter changes over its height. The square was immediately ruled out because its sharp corners make it decidedly not touch safe. The more cylindrical shapes can more easily slide through a crowd and have less chance of hitting people hard head-on (they allow for a sliding motion instead of a head-on collision). This left the choice between a normal cylinder, a cylinder wide at the bottom, and a cylinder wide at the top.
A bottom-heavy design would help with balance: if the robot bumps into someone, it hits at its lowest point, meaning more stability. However, it may surprise people when it hits, as they might not notice the wide bottom. This is where the wide top performs better, as it hits people around the waist/lower back area, where a collision is more easily noticed. Furthermore, this is a more effective place to nudge people to get them out of the way (a lower hit might instead make people lift their leg rather than step aside). A drawback is that the robot is touched higher up and tips over more easily. That is why the design combines the best of both worlds: the body has a larger diameter lower down with a big bumper so it does not tip over, and has 'whiskers' of a soft, compressible foam material at the top front to softly touch, or nudge, people if they are in the way. Research has shown that touch by a robot elicits the same response in humans as touch by humans<ref>''Using contact-based inducement for efficient navigation in a congested environment''. (2015, August 1). IEEE Conference Publication | IEEE Xplore. <nowiki>https://ieeexplore.ieee.org/document/7333673</nowiki></ref>. The rest of the body is made of plastic so as not to be too hard.
[[File:Guide full.png|center|thumb|This cad design shows the oval body shape of the design. It has its biggest diameter at 30 cm high, and whiskers at 120 cm from the ground.]] | |||
The pole on top of the body has two functions. | |||
*Visibility | |||
*Sensors | |||
The pole is 100 cm long, making the whole guide robot stand 220 cm tall. This helps the sensors, which from a higher point of view get a better overview of the crowd. This height also helps with noticeability in dense crowds, where the pole remains visible at eye level even when the lower body is (partially) obscured.
===Behavioural description=== | |||
The behavioural description will concern behaviour in a crowd with a singular, uniform walking direction. As mentioned before, the expected behaviour will be described using scenarios. These will first describe the standard scenario, after which two special cases are discussed. Furthermore, it will be briefly discussed how this behaviour might also benefit other crowd types or behaviour. The purpose of the behaviour is to make the robot guide someone efficiently to reach a goal while abiding by social rules/norms. | |||
It is important to note that joining and leaving these crowds requires different behaviour (like sparse-crowd navigation). These actions are therefore considered to fall outside the scope of the research question.
First, the standard navigation method will be discussed and how it functions in most scenarios. | |||
====The standard scenario==== | |||
López suggested that to navigate, the guide should check where it ''can'' walk, not where it cannot. He also suggested following a lead of some kind could make navigation in unidirectional crowds easier. These traits have been used to define the standard scenario. | |||
In this scenario the robot uses its LIDAR technology to follow a moving point cloud (i.e., the lead) in front of it. This point cloud could be one person or even a whole group. Regardless, the point cloud will always indicate the end of the guide's free walking space (the space where nothing stands in its way). It can thus be said that between this lead and the guide there will, in most cases, be free walking space. As the lead walks in front of the guide it will continuously create a space in the crowd behind it, and in front of the robot, in which the guide can move.
The robot does not distinguish between one person and a group; this makes the robot more robust, as small details in people's behaviour will not affect the guide's actions.
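The lead-selection idea can be sketched in a few lines. This is a simplified illustration, not the actual perception pipeline: the function name, the 30° forward cone, and the 0.5 m clustering margin are all assumptions.

```python
import math

def pick_lead(points, fov_deg=30.0):
    """Return the centroid of the point cloud directly ahead of the guide.

    `points` are (x, y) LIDAR returns in the robot frame (x pointing
    forward). The nearest cluster of returns inside a forward cone marks
    the lead, i.e. the end of the guide's free walking space.
    """
    half_fov = math.radians(fov_deg) / 2
    # Keep only returns in front of the robot and inside the cone.
    ahead = [(x, y) for x, y in points
             if x > 0 and abs(math.atan2(y, x)) <= half_fov]
    if not ahead:
        return None  # no lead: free space extends beyond sensor range
    # A crude cluster: everything within 0.5 m of the nearest return.
    nearest = min(math.hypot(x, y) for x, y in ahead)
    cluster = [(x, y) for x, y in ahead if math.hypot(x, y) < nearest + 0.5]
    cx = sum(x for x, _ in cluster) / len(cluster)
    cy = sum(y for _, y in cluster) / len(cluster)
    return (cx, cy)
```

Because the centroid is computed over whatever returns fall in the cluster, a single person and a tight group produce the same kind of target point, matching the robustness argument above.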
===='''Scenario 1: Cut off'''==== | |||
While in the standard scenario, someone or something starts to insert itself between the guide and the previously identified lead cloud. Multiple different sub-scenarios of this situation will be discussed. We consider a crowded space with approximately 0.8 persons/m<sup>2</sup> (which nears the shoulder-to-shoulder crowds found in <ref>Trautman, P., Ma, J., Murray, R. M., & Krause, A. (2015). Robot navigation in dense human crowds: Statistical models and experimental studies of human–robot cooperation. ''The International Journal of Robotics Research'', ''34''(3), 335-356.</ref>), where the people move alongside each other. Since the third person is inserting themselves from the side, it may not be assumed that only the whiskers of the robot make contact. This means more severe consequences may follow.
====='''Decision making criteria'''===== | |||
The decision making of the guide should depend on the intentions of the third person, the effects of their actions on the guide(d), and the effects on themselves. | |||
By far the most difficult thing is to determine the intentions of the third person: are they trying to insert themselves in front of the robot, or are they simply drifting in front of it? Since their mind cannot be read, it seems reasonable to base the decision purely on the latter two decisive factors, namely the effects on the guide(d) and the effects on the person inserting themselves.
====='''Guide’s options'''===== | |||
There are 3 options the robot can take in any given scenario: | |||
{| class="wikitable" | {| class="wikitable" | ||
|Effects of action →
|Bump | |||
|Make way | |||
|Move to the side | |||
|- | |- | ||
| | |Effects on the guide(d) | ||
| | |<nowiki>- Little to no travel delay</nowiki> | ||
- Depending on the severity of the impact it might result in the robot having a sudden change in speed, inconveniencing the guided. | |||
|<nowiki>- The robot might have to slow down temporarily which might inconvenience the guided.</nowiki> | |||
- The robot might have to slow down permanently due to a change in the leads’ walking speed leading to a higher travel time. | |||
- Other people might also try to slip in front leading to multiple delays. | |||
|<nowiki>- The guided might incur a travel delay due to the perpendicular movement.</nowiki> | |||
- Too much side-to-side movement might lead to sporadic guidance to the guided. | |||
- The guide will have to make accurate decisions when sliding in front of someone else which might lead to unexpected problems or delays. | |||
|- | |- | ||
| | |Effects on the person inserting themselves | ||
| | |<nowiki>- They make physical contact with the robot resulting in a risk of injury depending on the severity.</nowiki> | ||
- They might be surprised by the robot resulting in unpredictable scenarios. | |||
- They might not be able to return to their original spot in the crowd resulting in unpredictable consequences. | |||
|<nowiki>- None</nowiki> | |||
|<nowiki>- None</nowiki> | |||
|} | |||
====='''Scenario variables'''===== | |||
It can be seen that the effect of any action is very much context-dependent, and as such a well-made decision will only be possible if the guide is well-informed. Assuming for now that this is the case, we can set up 4 factors which determine how the robot should act:
1. The forward speed of the third person relative to the guide

2. Their speed perpendicular to the guide's walking direction
3. The third person’s space to act | |||
4. The robot’s space to act | |||
From this, 4 behavioural tables can be set up: | |||
====='''Scenario 1: expected behaviour'''===== | |||
The following scenarios might seem excessive, since the robot will most likely not be a rule-based reflex agent. This detailed model should however inform our decision-making process in the design of the robot, as well as the evaluation of the simulation. In the following behavioural tables, the rows give the forward speed of the third person relative to the guide, while the columns give the speed of the third person in the direction perpendicular to the guide's walking direction.
'''The third person and the robot are capable of making way''' | |||
{| class="wikitable" | |||
| | |||
|Low perpendicular speed | |||
|Medium perpendicular speed | |||
|High perpendicular speed | |||
|- | |||
|Smaller forward speed | |||
|Robot should make way | |||
|Robot should make way, as people think it shows manners and awareness. | |||
|Robot should make way, as people think it shows manners and awareness. | |||
|- | |||
|Same forward speed | |||
|Robot does not make way | |||
|Robot should make way, but prevent heavy braking; soft pushing is still an option if the gap for merging is too narrow
|Robot should make way, but prevent heavy braking; soft pushing is still an option if the gap for merging is too narrow.
|- | |||
|Larger forward speed | |||
|Robot does not make way | |||
|Robot does not make way | |||
|Robot does not make way, but tries to soften the impact by moving along the perpendicular direction of the third person
|} | |||
[[File:Scenario 2 inserting file.png|alt=Depiction of a third person inserting themselves between the guide and the lead.|thumb|Depiction of a third person inserting themselves between the guide and the lead. The circles represent people and the guide. The arrows indicate the direction they are moving. Grey is a normal crowd member, red is the third person cutting of the guide, dark blue is the guide, and light blue is the guided.]] | |||
'''Only the third person is capable of making way (see figure to the right)''' | |||
In all these scenarios the robot should use an audio cue to alert the third person that the robot cannot evade them itself.
{| class="wikitable" | |||
| | |||
|Low perpendicular speed | |||
|Medium perpendicular speed | |||
|High perpendicular speed | |||
|- | |||
|Smaller forward speed | |||
|The guide should not make way and risk impact to indicate it has no free space. | |||
|The guide should not make way and risk impact to indicate it has no free space. | |||
|The guide should not make way. If the impact is impending, it should try to soften it by moving in the same perpendicular direction as the third person to soften the impact. | |||
|- | |||
|Same forward speed | |||
|The guide should not make way and risk impact to indicate it has no free space. | |||
|The guide should not make way and risk impact to indicate it has no free space. | |||
|The guide should not make way. If the impact is impending, it should try to soften it by moving in the same perpendicular direction as the third person to soften the impact. | |||
|- | |||
|Larger forward speed | |||
|The guide should not make way and risk impact to indicate it has no free space. | |||
|If impact is impending, it should move the whiskers in the direction of the third person if possible. If this is not possible, the guide should slow down to soften the impact. | |||
|If impact is impending, it should move the whiskers in the direction of the third person if possible. If this is not possible, the guide should slow down and move in the same perpendicular direction as the third person to soften the impact. | |||
|} | |||
'''Only the robot is capable of making way''' | |||
{| class="wikitable" | |||
| | |||
|Low perpendicular speed | |||
|Medium perpendicular speed
|High perpendicular speed | |||
|- | |- | ||
| | |Smaller forward speed | ||
| | |Robot should make way | ||
|Robot should make way | |||
|Robot should make way, trying to prevent heavy braking
|- | |- | ||
| | |Same forward speed | ||
| | |Robot should make way | ||
|Robot should make way | |||
|Robot should make way, trying to prevent heavy braking
|- | |- | ||
| | |Larger forward speed | ||
| | |Robot should make way | ||
|Robot tries to make way, preventing heavy braking
|Robot tries to make way, preventing heavy braking
|} | |||
'''Neither are capable of making way''' | |||
In all these scenarios the robot should use an audio cue to alert the third person that the robot cannot evade them itself.
{| class="wikitable" | |||
| | |||
|Low perpendicular speed | |||
|Medium perpendicular speed | |||
|High perpendicular speed | |||
|- | |- | ||
| | |Smaller forward speed | ||
| | |Robot should try to make as much way as possible before making continuous contact with the person until the third person finds a way to decouple. | ||
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should try to maintain continuous contact with the person until the third person finds a way to decouple. | |||
|Robot should try to position itself so that the harm from the impact can be minimized. For the same reason they should move slightly to the side to soften the impact. Furthermore it should maintain the continuous contact with the person until the third person finds a way to decouple. | |||
|- | |- | ||
| | |Same forward speed | ||
| | |Robot should try to make as much way as possible | ||
If there is not much room the robot should not bother to bump | |||
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain contact with the person until the third person finds a way to decouple. | |||
|Robot should try to position itself so that the harm from the impact can be minimized. For the same reason they should move slightly to the side to soften the impact. Furthermore it should maintain the continuous contact with the person until the third person finds a way to decouple. | |||
|-
|Larger forward speed
|Robot should try to make as much way as possible before making continuous contact with the person until they naturally separate or the third person finds a way to decouple.
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain contact with the person until they naturally separate or the third person finds a way to decouple.
|Robot should try to position itself so that the harm from the impact is minimized. For the same reason it should move slightly to the side to soften the impact. Furthermore, it should maintain continuous contact with the person until they naturally separate or the third person finds a way to decouple.
|}
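The decision logic in the table above can be condensed into a short sketch. This is an illustrative simplification, not the actual implementation: the function name and speed labels are our own, and the forward-speed column, which mainly changes the decoupling condition, is simplified away.

```python
# Illustrative sketch of the scenario-1 behaviour table; labels are our own,
# not part of any real robot API. The forward-speed column mainly changes the
# decoupling condition, which is simplified away here.

def scenario1_action(forward_speed: str, perpendicular_speed: str) -> str:
    """Map discretized speeds of the third person to the guide's behaviour."""
    if perpendicular_speed == "low":
        return "make way, then maintain contact until decoupled"
    if perpendicular_speed == "medium":
        return "move slightly aside to soften impact, then maintain contact"
    # High perpendicular speed: position to minimize harm before contact.
    return "position to minimize harm, move aside slightly, maintain contact"
```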
===='''Scenario 2: Stalled lead'''==== | |||
In this scenario, unlike the standard one, the lead has stopped moving. The guide may avoid the lead or nudge them. The guide is moving unidirectionally with the lead, and it is therefore assumed all impact will occur at the front of the guide.
====='''Guide options'''===== | |||
The decision making of the robot should depend on the effects on the guide(d) and on the leads. The robot may in all situations attempt the following options: | |||
{| class="wikitable" | |||
|Effect of action →
|Try alternative route | |||
|Robot nudges using feelers | |||
|Stops | |||
|-
|Effects on the guide(d)
|<nowiki>- The robot has to make side-to-side movements which result in more sporadic pathing. This might inconvenience the guided.</nowiki>
- Moving aside in a crowded space may result in the guide, or worse the guided, being pushed by other people.
- This behaviour requires more complex observational methods.
|<nowiki>- Does not always resolve the problem, which leads to more delay.</nowiki>
|<nowiki>- Guide stops; significant time delay.</nowiki>
- People behind the guided may walk into or push them.
|-
|Effects on the stalled lead | |||
|<nowiki>- None</nowiki> | |||
|<nowiki>- Person may have to step aside or start moving.</nowiki><br />- Person might be uncomfortable with being nudged or pushed. | |||
|<nowiki>- None</nowiki> | |||
|} | |||
====='''Scenario variables'''===== | |||
The main variables are the following: | |||
1. Space to act for guide | |||
2. Space to act for lead | |||
====='''Scenario 2: expected behaviour'''===== | |||
<u>Because nudging with the feelers is effective and low-risk, it will in all cases be the first action</u>.
If the attempt fails, however, it must be decided whether the guide should try to path around the now blocked path or stop. If it can be seen that the lead ahead is stopping of their own volition (there is free space in front of the lead), the robot should in most cases try to navigate around the lead. If the lead is expected to start moving within a reasonable timeframe, depending on the amount of time rerouting would take, the guide should stop. Something which has not been taken into account yet is the actual freedom of the guide; a dense surrounding or a fast-moving crowd could prevent the guide from safely stepping aside. In these cases, the specifics and safety of the cross-flow behaviour are of importance.
Assuming the cross-flow-behaviour to be only safe in the limited case of a sparse, normal moving crowd, the following behavioural table can be made: | |||
{| class="wikitable" | |||
| | |||
|Normal moving crowd | |||
|Fast moving crowd | |||
|-
|Sparse crowd
|Try alternative route
|Stop | |||
|-
|Dense crowd
|Stop
|Stop | |||
|}
If the guide has stopped for a while and sees an opportunity for the lead to move, it should play a message asking for the lead to move. This also notifies the guided of the situation. | |||
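The fallback decision from the table above can be written down directly. A minimal sketch, assuming the crowd density and speed have already been classified from sensor data (the labels and function name are our own):

```python
# Minimal sketch of the scenario-2 fallback table. Nudging with the feelers is
# always the first action; this decides what to do if the nudge fails.

def stalled_lead_action(crowd_density: str, crowd_speed: str) -> str:
    """crowd_density: 'sparse' or 'dense'; crowd_speed: 'normal' or 'fast'."""
    if crowd_density == "sparse" and crowd_speed == "normal":
        return "try alternative route"
    # Dense or fast-moving crowds make cross-flow behaviour unsafe: stop.
    return "stop"
```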
<br /> | |||
====Generalisation==== | |||
The following scenarios pertain to a situation where the guide does not navigate alongside a unidirectional crowd flow. Although this is out of the scope of this research, it is useful to look at what a robot with this design can add to other scenarios using touch. First, there will be a short look at the possibilities of physical touch in the other scenarios sketched by López.
Opposing a unidirectional crowd is slightly harder for a robot: while moving through an opposing flow the robot depends on people moving out of the way, otherwise no space might open up where the robot can go. A robot that is programmed to never touch people might stall here if the crowd density is high enough. This is where light bumping might be useful: if people do not fully move out of the way, they will receive a light touch.
Crossing a unidirectional crowd is the hardest scenario. It might be hard for positions to open up where the robot can go, due to people coming from the side and the social implications that come with that. Does the robot give way, or will it walk on? Research has found that people rate robots as more social and better when they let people pass first, but this carries the risk of the robot stalling. That is why in dense crowds it might be preferred that the robot starts nudging to make way for the guided person.
Integrating into a crowd is an important behaviour of the robot. Inside the TU/e, maximum crowd densities are assumed to occur only rarely over the span of a year. In less dense crowds the guide should be able to integrate into the flow without hitting other people. However, in the scenario that crowds are very dense, the guide should be able to act more assertively, due to the increase in safety measures preventing harmful human-robot collisions.
==Simulation== | |||
'''Goal''': | |||
In order for the behaviour description to be relevant, we show that the proposed behaviour is safe to employ in a representative environment. To measure this safety, we first of all measure the collisions and make the reasonable assumption that these are the primary source of harm our robot can inflict. The simulation will gather data about the frequency of collisions, and statistics on the forces applied to the person and robot during the collision. Secondly, we consider the adherence of the robot's behaviour to the ISO guidelines for safety<ref name=":0" /><ref name=":4" />, focussing on the minimum safety gap and the maximum relative velocity guidelines.
'''Overview of applicable ISO Safety standards''' | |||
According to ISO 10218-2:2011<ref name=":4" />, for an (industrial) robot to operate safely:
*Protective barriers should be included (Which is discussed in the body design) | |||
*Warning labels should indicate potential hazards. As the robot does not operate any manipulators or tools, and is designed not to be able to crush someone, or run someone over, the only danger here is tripping, which is also minimized in the design. | |||
*Light curtains, pressure mats and other safety devices: The robot includes whiskers at the front, that also help with avoiding a direct body collision. | |||
*Others, which do not apply to this robot, as it is not present in an industrial setting. | |||
In addition, ISO 15066:2016<ref name=":0" /> indicates requirements for robots in proximity to human operators: | |||
*A risk assessment should be made to identify hazards to surrounding personnel. | |||
*Monitoring systems to keep track of speed and separation, which are included in the form of the LIDAR sensor. | |||
*Force and power limits of the robot: The robot is not incredibly high powered, the amount of force it is capable of applying in normal operation depends on the physical implementation of this proposed concept, but is unlikely to be problematic, as there is no need for high-powered actuators for the drive train to function. | |||
*Emergency stop (already covered in the other ISO standard) | |||
*A safety distance gap should be kept to people around the robot, this is however ignored, as we are developing a solution that aims to be safe, without needing to keep clear of humans. | |||
*Force and Pressure limits are imposed on the robot, to prevent serious harm, both during normal operation and collision. These include: | |||
**A limit on the contact force during collision of 150N, '''this will be the focus of the simulation'''. | |||
**A pressure limit during collision of 1.5 kN/m^2, we make the reasonable assumption this pressure limit cannot be reached without violating the previous condition, as our robot is designed to be as smooth as possible, making it incredibly difficult for the robot to apply a lot of force in a very small and local area. | |||
**Force limiting or compliance techniques are to be implemented to reduce the force applied during collision. This comes in the form of whiskers for the compliance aspects, allowing them to deform to reduce the impact, and the behaviour is designed to limit the force by slowing, satisfying the limitation requirement. | |||
As a performance measure for the simulation, we consider the maximum force applied in a collision during the duration of the simulation. This is the element of the ISO standards that is not negated by the behaviour design, or design of the body itself, and thus the part that remains to show the robot is safe to operate. | |||
This data can also inform the design of the robot and its behaviour, as it can test various form factors, and navigation algorithms to optimize. In the end the simulation results act as assistance in design iteration, and ultimately inform us about the viability of the robot in crowds. | |||
'''Why a simulation:''' | |||
Testing which techniques have an impact requires a setting with enough people to form a crowd, which can be controlled precisely enough to eliminate outside or 'luck' factors.
The performance needs to be a function of measurable starting conditions, and the behaviour of the robot. | |||
When using a real robot, we would need an iterative approach, altering the appearance and workings of the robot after each experiment to test different scenarios. This would require re-building the robot each time, which we simply do not have time for. Additionally, obtaining a large enough crowd (think of more than 100 students) would be tricky at such short notice. Using a real-world crowd (by going to the buildings in between lectures) would present the most accurate situation but is neither controllable nor reproducible. There is also the ethical dilemma of testing a potentially hazardous robot in a real crowd, and logistically, organizing a controlled experiment with a crowd of students is not an option.
===Simulation: situation analysis=== | |||
In the real world, the robot would guide a blind person through the Atlas building to a goal. This situation can broadly be dissected as:
*Performance Measure: The maximum force applied during collision with a person, which cannot exceed 150 Newton. | |||
*Environment: Dynamic, partially unknown interior room, designed for human navigation. | |||
*Actuators: wheels. | |||
*Sensors: LIDAR & Camera, abstracted to General purpose vision and environment mapping sensors, but are assumed to be limited range and accuracy, systems capable of deducing depth, position and dynamic- or static obstacles. | |||
The environment is assumed to be: | |||
*Partially Observable | |||
*Stochastic | |||
*Competitive and Collaborative (humans aid each other in navigation, but are also their own obstacles) | |||
*Multi-agent | |||
*Dynamic | |||
*Sequential | |||
*Unknown | |||
===Considered Simulation Design Variants===
Simulating the robot may take various shapes, each with their own advantages. When choosing the type of simulation to build, we considered the following aspects:
Environment Model: | |||
*Mathematical: Building a model of the environment, purely based on mathematical expressions of the real world. | |||
*Geometrical: Building a 3D virtual representation of the environment.
*2D: The environment does not consider depth | |||
*3D: The environment does consider depth | |||
Robot Agent: | |||
*Global awareness: The robot model has access to all information across the entire environment. | |||
*Sensory awareness: Observing the Simulated environment with virtual (imperfect) sensors. The robot only has access to the observed information. | |||
*Mechanics simulation: The detail at which the robot's body is modelled. Factors include whether the precise shape is considered, the accuracy of actuators and other systems, and delay between command and response. | |||
Crowd Behaviour Model: | |||
*Boid: Boids are a common method of simulating herd behaviour in animals (particularly fish) | |||
*Social Forces: The desire to approach a goal and avoid and follow the crowd is captured in vectors, which determine the velocity of each agent in the crowd. | |||
===Simulation: Crowd implementation=== | |||
To test the robot's capabilities in crowds through a simulation, the simulation must include a realistic model of how crowds behave. In the 1970s Henderson already related a macro view of crowds to fluid dynamics with great success<ref>Henderson LF. The statistics of crowd fluids. Nature. 1971 Feb 5;229(5284):381-3. doi: 10.1038/229381a0. PMID: 16059256.</ref>. For the local interactions the robot would experience in real life, this macro view is not realistic enough. Therefore, we have to use a more micro-level description of crowds. We came across the social force model created by D. Helbing and P. Molnár<ref>Helbing, D., & Molnar, P. (1995). Social force model for pedestrian dynamics. ''Physical review'', ''51''(5), 4282–4286. <nowiki>https://doi.org/10.1103/physreve.51.4282</nowiki></ref> in 1995. This model is well acclaimed, and even though it has its drawbacks, such as a full stop of pedestrians not being modelled well, we have decided to use the original formulation suggested in 1995.
The social force model is a physical description of pedestrian behaviour: it models pedestrians as point masses with physical forces acting upon them. Each pedestrian experiences a few different forces, which will be shortly explained. First, there is a driving force, which models the internal desire of a pedestrian to go somewhere; it is represented as a direction and the pedestrian's desired walking speed. The desired walking speed used is the one the paper suggests, namely a normally distributed random variable with a mean of 1.34 m/s and a standard deviation of 0.26 m/s. The direction is calculated using Unity's NavMesh, which generates paths through the environment given a start and an end. Second, every pedestrian experiences a repulsive force generated by other pedestrians. This force reflects the fact that humans want to keep enough distance from each other and instinctively take into account the step size of others. It is calculated by creating an ellipse as big as the step the other pedestrian is taking; based on this ellipse, a force is derived which grows exponentially the closer you are to the other pedestrian and points away from them. This is called the territorial effect, and it is computed for every pedestrian in the vicinity. Third, there is a repulsive force from walls and obstacles, which is far simpler: an exponential force that grows the closer you get to an obstacle and points away from it. Finally, there is an attractive force, which can be used for multiple things, such as friends you would want to walk closer to, or interesting objects or people in the vicinity. This force decreases over time as people lose interest; it is not applied in our model. Both the repulsive and attractive forces are weighted depending on whether the object applying the force is inside the pedestrian's field of vision.
The net force applied to a pedestrian is the summation of all these forces and is applied as an acceleration, where the maximum attainable speed of a pedestrian is capped by its desired speed. For performance reasons most of this calculation is done in parallel on the GPU, which required a trade-off. For the repulsive force generated by the walls, only the closest object is taken into account, since passing all the objects to the GPU creates too much overhead for the CPU loading the data to it. If everything were handled by the CPU, however, the number of people that could be simulated would have been too small to form a crowd.
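As an illustration of the update described above, a simplified single-pedestrian step might look as follows. This is a sketch, not our Unity implementation: the elliptical territorial effect is reduced to a circular exponential repulsion, wall and attraction terms are omitted, and the constants `tau`, `a` and `b` are placeholder values rather than the tuned parameters from the paper.

```python
import math

# Sketch of one social-force update for a single pedestrian (after Helbing &
# Molnar, 1995). Simplifications: circular instead of elliptical repulsion,
# no wall or attraction terms; tau, a and b are placeholder constants.

def social_force_step(pos, vel, goal, desired_speed, others,
                      dt=0.02, tau=0.5, a=2.0, b=0.3):
    """Advance one Euler step; pos, vel and goal are (x, y) tuples in metres."""
    # Driving force: relax towards the desired velocity pointing at the goal.
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    gnorm = math.hypot(gx, gy)
    fx = (desired_speed * gx / gnorm - vel[0]) / tau
    fy = (desired_speed * gy / gnorm - vel[1]) / tau
    # Repulsion from other pedestrians, growing exponentially at close range.
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        dist = math.hypot(dx, dy)
        fx += a * math.exp(-dist / b) * dx / dist
        fy += a * math.exp(-dist / b) * dy / dist
    vx, vy = vel[0] + fx * dt, vel[1] + fy * dt
    # Cap the attainable speed at the desired speed, as in our implementation.
    speed = math.hypot(vx, vy)
    if speed > desired_speed:
        vx, vy = vx / speed * desired_speed, vy / speed * desired_speed
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)
```

A real implementation would also weight the forces by field of view and include the wall term; this sketch only shows the structure of the update.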
===Simulation: Robot agent=== | |||
The robot agent was implemented using Unity. The body of the robot was created by importing the CAD model into Blender and then importing it into Unity. To this model a mesh collider is added to try and make collisions more precise. Attaching a rigid body to the robot agent allowed it to interact with its environment as well as follow the laws of physics (or at least the physics of the Unity engine). | |||
The behaviour of the robot was implemented in the following way: | |||
====Map of the environment==== | |||
One of our base assumptions was that the robot has a map of the environment it is in, with landmarks placed. Thus, it would know how the base environment is structured according to, for example, the floor plan. It knows where there are walls as well as points of interest, which are the goals to which it will guide people. This was implemented into the simulation via Unity's NavMesh. It allows us to create a mesh of the environment, dividing the space into places where the robot can and cannot move. Then, using the default path-finding algorithm of NavMesh, the robot agent calculates a path using this mesh, moving through the environment while also keeping in mind the overlay of the map. The only issue with this approach is that the algorithm used for pathfinding is A*, which will calculate the shortest path to the goal, but the shortest path is not always the best path overall.
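For reference, the A* search that NavMesh's default pathfinding uses can be sketched on a simple grid abstraction. NavMesh itself searches over mesh polygons rather than grid cells, but the best-first expansion is the same:

```python
import heapq

# Grid-based A* sketch: 0 = walkable, 1 = wall. NavMesh searches mesh polygons
# rather than grid cells, but uses the same best-first expansion.

def astar(grid, start, goal):
    """Return a shortest 4-connected path from start to goal, or None."""
    def h(p):  # Manhattan distance, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already expanded with an equal or cheaper cost
        best_g[node] = g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = node[0] + dx, node[1] + dy
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None
```

As noted above, A* guarantees a shortest path, which is not always the most comfortable one for a guided person.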
====Sensing the environment==== | |||
Our robot agent is supposed to use a combination of a LiDAR, a camera and a thermal camera to recognize obstacles in its path that are not in the built-in map, in other words, dynamic obstacles. In our report we have described how one could detect dynamic obstacles by using point clouds to create a map of the close environment around the robot and combining that map with the thermal camera vision to detect humans. Due to constraints, in the simulation we instead make use of Unity's raycast functionality, which allows us to cast beams from our agent. Using multiple of these raycasts we emulate a 2D LiDAR. Using this LiDAR as the main sensor, we created two versions.
The first version has better obstacle avoidance and overall smoothness of movement. Using the raycasts, when a beam hits an object that has been tagged as an "undiscovered human" or "undiscovered obstacle", it converts the tag to discovered, which then carves a space around the object on the mesh, making the agent move around the object if the path it must take is near the obstacle. This version has some limitations, however. Due to the implementation of NavMesh and the movement AI in Unity, it does not follow the regular laws of physics, so the robot could not interact with its environment correctly. Thus, we created a second version.
The second version makes use of NavMesh to calculate a path much like the first version, but the agent moves differently. Rather than depending on the navigation AI, it uses a movement function of the rigid body component to traverse the environment and follow that path. This allows the agent to have physics in its interactions with its environment. The obstacle detection and avoidance are also done differently. Rather than carving out the mesh, we use three different sets of beams: left, right and front. Based on where the obstacle is detected, the agent reacts by slightly deviating from its path. The issue with this version, however, was that the movements of the robot were not smooth; while it could interact better with its environment, its movements when turning, for example, were not realistic. That is why we used the first version for the macro simulation.
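The three-sector reaction of the second version can be sketched as follows. The distance threshold and the returned command names are illustrative assumptions, not the actual Unity code:

```python
# Hypothetical sketch of the three-sector reaction: 'left', 'front' and 'right'
# are the closest hit distances from the three raycast sets. The 1.5 m
# threshold and the command names are assumptions for illustration.

def avoidance_steer(left: float, front: float, right: float,
                    clear: float = 1.5) -> str:
    """Decide a small path deviation based on the nearest detected obstacle."""
    if min(left, front, right) >= clear:
        return "forward"  # nothing nearby: keep following the planned path
    if front < clear:
        # Blocked ahead: deviate towards the side with more free space.
        return "veer_right" if right > left else "veer_left"
    # Obstacle only on one side: deviate away from it.
    return "veer_right" if left < clear else "veer_left"
```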
Finally, it must be noted that the follow-and-bump behaviour in the implementation of the robot has some issues: the robot would sometimes follow when it should not, and not follow in moments where such behaviour would be the most efficient. The reason for these issues is that it is difficult to determine which person would be an ideal candidate to follow. Our implementation depends on the direction the agent is looking, as well as the rotation of the humans around the robot. If both the robot and a human have the same rotation, that human is seen as a potential candidate. While on paper the idea seems good, in certain situations, for example when the robot is turning around corners or making small adjustments to its path, it will not be looking at the final goal. This means it may start following a person going in the wrong direction, granted there is no other option but to initiate the follow behaviour (when detected obstacles prevent the robot from moving left, right, and forward). This leads the robot to sometimes take inefficient paths.
===Simulation: Environment=== | |||
The environment is a 3D geometry-based replica of the first floor of the ATLAS building in terms of large collision parameters. | |||
It has been constructed by tracing the edges of a floorplan of Atlas, provided by the RE department, with collision objects.
After the model was constructed, it was re-scaled in the Unity engine to match the metric of the Atlas building.
It should be noted that not all elements of the floorplan are accurate, as the layout of Atlas changes frequently to accommodate events. | |||
The model has various abstractions to accommodate constraints of the simulation.
Entry ways have been blocked off to prevent the crowd from walking outside of the defined perimeter, and doors are considered to be closed.
The stairs have also been omitted or remodelled to be impassable, as we do not consider other floors of the Atlas building in this simulation.
Only the lower portion of this floor is considered, as there will be no walking crowd that collides with anything higher than 2 meters. | |||
===Simulation: Results=== | |||
'''Parameters:''' | |||
To obtain the results, the simulation was run with the robot starting at the north side of Atlas, moving towards the goal on the opposing south side. The crowd was set up to contain 1500 agents, which is the maximum number of people the ground floor of Atlas is designed for, according to the Real Estate department.
[[File:CROWD.png|alt=CROWD|thumb|1056x1056px|Screenshot of the crowd simulation in ATLAS. The robot is about to approach a chokepoint.]] | |||
'''Expected results:''' | |||
The expected result is for the Social Force model to generate a crowd that is typical of a very busy day in Atlas. With this comes:
*The generation of dense 'streams' of agents moving in a similar path from goal to goal. | |||
*The existence of sparse and dense pockets of space, where some areas are more heavily congested.
We do not expect the social force model to generate agents that are stationary near goals (such as real students buying a drink at a machine and creating congestion around it), as the model is focused on the movement of pedestrians.
In order to behave safely in accordance with the ISO 10218-2:2011 and 15066:2016 requirements<ref name=":0">ISO 15066:2016(EN) Robots and robotic devices — Collaborative robots, International Organization for Standardization. https://www.iso.org/standard/62996.html, 2016</ref><ref name=":4">ISO 10218-2:2011 Robots and robotic devices — Safety requirements for industrial robots — Part 2: Robot systems and integration, International Organization for Standardization, https://www.iso.org/standard/41571.html, 2011-07</ref>, we expect the robot to:
*Avoid collisions in the sparsely populated areas and follow its own path. | |||
*Follow crowd-agents to prevent collisions in adequately dense areas, where there is still enough space to avoid agents, but not enough to find its own path.
*Follow its own path when the currently followed agent deviates too much from the optimal path. | |||
*Bump into crowd-agents when there is insufficient space to avoid them. | |||
*When bumping, the force should be minimal: The robot should ensure a relative velocity low enough to not cause pain or major discomfort. | |||
'''Result:''' | |||
*We observed that central spaces, such as the centre of the main hall, are indeed very calm. The crowd that formed was very sparse here, and as such the robot could use standard avoidance and pathfinding algorithms, A* in this case, to avoid the agents of the crowd and reach the goal without making a single collision.
*We also observed that there tend to be congested areas around hallway entries and more narrow spaces. Here the crowd would become very dense, with agents themselves bumping into each other or narrowly avoiding each other. | |||
*We observed that the robot generally crosses a stream of densely packed agents, rather than the stream moving in the same direction as the robot so that it can follow it. While doing so, it does attempt to avoid agents or reduce impact.
*We observed that the robot does indeed bump into agents that are in the way, but it is hard to definitively state the robot uses bumping as a last resort.
*We observed that the robot bumps through congested areas, instead of avoiding them, if its path requires it to get through this area. | |||
A video of a single iteration: https://www.youtube.com/watch?v=YAjKelmA9mM | |||
'''Conclusions''' | |||
We observed that the crowd generated by the Social Force model was indeed indicative of a typical crowd in Atlas. This caused the problem that the crowd, although it is representative, does not comply with the assumptions the robot makes in order to navigate a dense crowd. As the scope of the described behaviour ends at these assumptions, the implemented behaviour of the robot, which could only work inside the scope of this project, simply does not generate adequate results in terms of safety and performance. The robot behaviour earlier described assumes a laminar flow of people to navigate, and the streams that occur in the Atlas setting are often only partially laminar. Especially when streams cross, and around congested chokepoints, this assumption simply does not hold. Additional implementation would be required, to deal with non-laminar or generally omni-directional movement of crowd flows. | |||
We conclude that this is the reason why the robot does not always tend to follow flows and avoid bumps: the scenarios we chose to focus the behaviour of this project on are mixed with other scenarios, such as crossing a crowd, that are not explicitly considered. As such, the robot resorts to its basic non-crowd routine of attempting to follow the most efficient path. This effectively bypasses the behaviour we wish to test the safety of. We can therefore only conclude from this simulation that simplistic pathfinding behaviour with obstacle avoidance is sufficient to generate safe behaviour for navigating sparsely populated areas of Atlas.
To show the safety of the behaviour itself, we thus decided to create more focussed environments that force compliance with the robot's assumptions about the crowd.
===Simulation: Micro-simulations=== | |||
[[File:SUDDEN STOP.png|thumb|1036x1036px|Screenshot showing the micro simulation, where the robot is following a person that is suddenly stopping]] | |||
To test the safety of the robot's behaviour implementation, we created specific scenarios, which are better suited to showcase the intended behaviour of the robot and which together cover a large subset of the problems the robot can solve.
These scenarios are specifically created after running the Social Force simulation and are controlled instances of situations the robot agent encountered during its navigation in that simulation. | |||
The advantage of these scenarios is that they are altered to force compliance with the robot's assumptions about the crowd, as described in the scenario and behaviour sections of this wiki.
As a consequence, the robot will show the behaviour that is described, and thus the safety of the robot, in situations encountered in the Atlas representative model, can be tested with the correct behaviour of the robot. | |||
Each scenario was run a total of 10 times: 5 times to observe the robot behaviour, and another 5 times to obtain force measurements during collisions that may occur in the scenarios. In each iteration, the parameters are exactly the same.
'''Micro scenario - sudden stop''' | |||
This micro scenario focuses on our second scenario, "Stalled lead", in which the robot is following a person and this person suddenly stops. In the scenario, the robot is forced to slow its pace and bump into the person, by placing a row of persons on each side to prevent it from avoiding the person. The robot slows its pace, and eventually bumps into the person until they move.
A video of the scenario in development is shown here: https://www.youtube.com/watch?v=rcPF2ZiYqlw | |||
{| class="wikitable" | |||
|+Simulation collision measurements | |||
!Duration [frames]
!Impulse magnitude [N·s]
!Number of collisions
!Average force [N]
|- | |||
|44 | |||
|67.0125 | |||
|8 | |||
|91.3807 | |||
|- | |||
|38 | |||
|56.0443 | |||
|6 | |||
|88.4910 | |||
|- | |||
|41 | |||
|62.1002 | |||
|8 | |||
|90.8783 | |||
|- | |||
|41 | |||
|63.2706 | |||
|8 | |||
|92.5911 | |||
|- | |||
|40 | |||
|59.3228 | |||
|6 | |||
|88.9842 | |||
|} | |||
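As a sanity check on the table above, the reported average force is consistent with the measured impulse divided by the collision duration, converting frames to seconds at 60 frames per second (the rate used by the measurement script):

```python
# Reconstructing the 'average force' column: F_avg = J / (frames / fps).
FPS = 60

def average_force(impulse_ns: float, duration_frames: int, fps: int = FPS) -> float:
    """Average collision force [N] from impulse [N*s] and duration [frames]."""
    return impulse_ns / (duration_frames / fps)

runs = [  # (duration [frames], impulse [N*s], reported average force [N])
    (44, 67.0125, 91.3807),
    (38, 56.0443, 88.4910),
    (41, 62.1002, 90.8783),
    (41, 63.2706, 92.5911),
    (40, 59.3228, 88.9842),
]
for frames, impulse, reported in runs:
    assert abs(average_force(impulse, frames) - reported) < 1e-3
    assert average_force(impulse, frames) < 150  # ISO 15066 force threshold
```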
'''Micro scenario - intersecting agents''' | |||
During this scenario, the robot is following a person while another person is crossing the space between the robot and its lead. The robot shows it is capable of detecting the crossing person in time and reacts by allowing the crossing person to pass in front of it by slowing to a near halt. When the person has passed, the robot accelerates, and we observe that it returns to the same distance at which it was following the person earlier.
[[File:CROSSING.png|thumb|1042x1042px|Screenshot showing the second micro simulation, where the robot is cut off by a crossing person while following a lead.]]
We observed that the robot is now indeed capable of avoiding collision, instead of resorting immediately to bumping. As a result, none of the 10 iterations run on this scenario resulted in any collisions.
A video of the scenario in development is shown here: https://youtu.be/J_IOsJ16ifs | |||
'''Micro scenario - Results''' | |||
During the above scenarios, a script was attached to the robot that measured the number of collision events, the largest duration in frames (where 60 frames are 1 second) of the collision events, and the largest impulse measured during the collisions.
The evaluating script shows that for 5 separate iterations of the first scenario, the average force applied is well below the 150 Newton threshold. It should be noted that the script yields the largest and longest collision, with its corresponding force and time, during each run. The number of collision events is computed using the convex rigid body of the robot mesh, which means that the measured number of collisions is likely higher than the real number, as the convex hull encapsulates space below the whiskers that is not actually occupied by the robot.
'''Conclusion''' | |||
The micro simulations show that, if the assumptions on the robot behaviour are met, the total average force applied during the simulation is below the 150 N threshold laid out in the ISO standard. In addition, the simulation shows that the behaviour of the robot is successful in avoiding contact in crowds unless required, satisfying the force limitation requirements in the ISO standard. We thus conclude that the proposed behaviour in this concept is safe, if the crowd behaviour is captured by the scenarios previously discussed in this document.
<br />
==Conclusion==
===Project findings===
To the research question 'How should robots socially navigate through unidirectional pedestrian crowds while guiding visually impaired users?' we have given an answer in the form of the various behavioural descriptions provided under the scenarios. The micro-simulations show that it is safe to act in accordance with at least some of these behavioural rules. The simulations should, however, not be seen as definitive proof, because they use Unity's physics engine, which lacks any kind of material simulation. To verify the claims made in this project regarding safety, it would be best to run actual material simulations to find exact pressures. Furthermore, most of the behaviour has not been tested.
Overall, this behaviour has its uses: a navigation method that does not rely on perfect information allows the robot to neglect some observations, simplifying the sensors that are necessary. It also makes the robot more robust to small changes. For example, a non-living entity will not change how the robot behaves.
===Future research===
The behaviour as described in the scenarios should be implemented in a more advanced simulation. This can be done in a discrete manner (rule-based agent) or a more inspired manner (utility-based or learning agent, for which the descriptions would act more like a guideline).
The acceptance of the design by crowds and users should be verified; this is a point which was lacking in this research. César López has mentioned that this can be designed for using established research as a guideline, but should finally be verified with a physical prototype and a survey designed for such research.
The design could also be made more detailed by adding any of the assumed working pieces mentioned in the problem scoping, including adding behaviour for different kinds of dense crowds:
*Localization of the guide
*Identification of obstacles or other persons
*Navigation in sparse crowds
*Navigation in dense crowds
*Overarching strategic planning (e.g., navigating between multiple floors or buildings)
*Interaction with infrastructure (e.g., doors, elevators, stairs, etc.)
*Effective communication with the user (e.g., the user being able to set a goal for the guide)
Any of the behavioural changes or additions would require some kind of transitional system to switch between them. López mentioned that this can be done by selecting the behavioural model for which all conditions are met, but implementing a general navigation method is a good way to make sure the guide always has something to fall back on.
Finally, the risks and hazards of this design (like mechanical failure) should be worked out in even more detail.
===Project evaluation===
First it is important to note that what is presented in this report is not a full 8 weeks of work for 6 students. This is due to the change of subject after 2 weeks, and the additional 2 weeks it took to narrow the problem statement down enough to work on. This left us a final 4 weeks in which a lot of work was done. During these remaining weeks, after the second meeting with López, it became clear that the scenarios that were worked on were too extensive and fell outside the scope of the project: walking along a unidirectional crowd.
After the final presentation there was a final meeting with César López in which the end result was evaluated; some of the main points will be discussed here.
For this type of research, safety is usually taken care of in the design process before development by using predetermined safety standards for such products. Due to time constraints, only limited safety research was done alongside the making of the simulation. At the moment there is no in-depth safety analysis in which possible hazards are identified and their risks and consequences are determined. The main focus of the design is based on research on what might work when navigating a robot through a crowd.
Furthermore, the simulation that was designed should have been more constrained from the beginning, to fit the chosen problem. This again shows how the scoping of the research question should have been done earlier in the project. This would have allowed the assumptions for the behaviour to be met. A simulation with clear assumptions that are met allows the behaviour of the design to be formed more intelligently, using a more iterative process, instead of the current methods.
==Appendix==
===Code===
The code for the simulation can be found in the following GitHub repository: https://github.com/JJellie/VirtualCrowdSim
Here some papers used in the research into the guide robot are summarized. These papers mostly cover the state of the art of the hardware and software of guide robots, and crowd navigation. These summaries can be read to get a deeper understanding of the state of the art.
===Literature Research===
|-
|The Fuzzy Control Approach for a Quadruped Robot Guide Dog
|<ref name="The Fuzzy Control Approach for a Quadruped Robot Guide Dog">https://link.springer.com/article/10.1007/s40815-020-01046-x?utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot</ref>
|Wouter
|-
|Design of a Portable Indoor Guide Robot for Blind People
|<ref name="Design of a Portable Indoor Guide Robot for Blind People">https://ieeexplore.ieee.org/document/9536077</ref>
|Wouter
|-
|Guiding visually impaired people in the exhibition
|<ref name="Guiding visually impaired people in the exhibition">Bellotti, F., Berta, R., De Gloria, A., & Margarone, M. (2006). Guiding visually impaired people in the exhibition. ''Mobile Guide'', ''6'', 1-6.</ref>
|Joaquim
|-
|CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People
|<ref name="CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People">João Guerreiro, Daisuke Sato, Saki Asakawa, Huixu Dong, Kris M. Kitani, Chieko Asakawa, Designing and Evaluating an Autonomous Navigation Robot for Blind People (2019), https://dl.acm.org/doi/pdf/10.1145/3308561.3353771</ref>
|Boril
|-
|Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques
|<ref name="Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques">Debajyoti Bose, Karthi Mohan, Meera CS, Monika Yadav and Devender K. Saini, Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques (2021), https://www.tandfonline.com/doi/epdf/10.1080/14484846.2021.2023266?needAccess=true&role=button</ref>
|Boril
|}
The paper discusses the need for high-precision models of location and rotation sensors in specific robot and imaging use-cases, specifically highlighting SLAM systems (Simultaneous Localization and Mapping systems).
It highlights sensors that we may also need:
"In this system the orientation data rely on inertial sensors. Magnetometer, accelerometer and gyroscope placed on a single board are used to determine the actual rotation of an object."
An issue in modelling the sensors is that rotation is measured relative to gravity, which does not capture yaw, for example, and the measurement gets more complicated under linear acceleration.
The paper modelled acceleration and rotation according to various lengthy math equations and matrices and applied noise and other real-world modifiers to the generated data.
It notably uses cartesian and homogeneous coordinates in order to separate and combine different components of the final model, such as rotation and translation. These components are shown in matrix form and are derived from specifications of real-world sensors, known and common effects, and mathematical derivations of the latter two.
The proposed model can be used to test code for our robot's position computations.
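The gravity-based rotation measurement mentioned above can be sketched in a few lines: a static accelerometer reading fixes roll and pitch, while yaw stays unobservable. This is a standard textbook computation, not code from the paper, and the axis convention is an assumption:

```python
import math

# Roll and pitch recovered from a static accelerometer reading, as in the
# sensor model discussed above: gravity fixes two of the three rotation
# angles, while yaw stays unobservable from gravity alone. The axis
# convention (z up through the sensor) is assumed. Note that, as the text
# says, any linear acceleration corrupts this estimate.

def tilt_from_accel(ax, ay, az):
    """Return (roll, pitch) in radians from a gravity-only reading (m/s^2)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# A level, stationary sensor measures only gravity on the z axis:
roll, pitch = tilt_from_accel(0.0, 0.0, 9.81)
```

A sensor rolled 90 degrees would instead see gravity on its y axis, and `tilt_from_accel` returns a roll of π/2 accordingly.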
====An introduction to inertial navigation====
This paper (a report) is meant as a guide towards determining positional and other navigation data from inertial sensors like gyroscopes, accelerometers and IMUs in general.
It starts by explaining the inner workings of a general IMU and gives an overview of an algorithm used to determine position from said sensors' readings using integration, showing what the intermediate values represent using pictograms.
It then proceeds to discuss various types of gyroscopes, their ways of measuring rotation (such as light interference), and the resulting effects on measurements, which are neatly summarized in equations and tables. It takes a similar approach for linear acceleration measurement devices.
In the latter half of the paper, concepts and methods relevant to processing the introduced signals are explained, and most importantly it is discussed how to partially account for some of the errors of such sensors. It starts by explaining how to account for noise using Allan variance and shows how this affects the values from a gyroscope.
Next, the paper introduces the theory behind tracking orientation, velocity and position. It talks about how errors in previous steps propagate through the process, resulting in the infamous accumulation of inaccuracy that plagues such systems.
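That error accumulation is easy to demonstrate with a toy dead-reckoning sketch (illustrative numbers, not from the paper): a small constant accelerometer bias, doubly integrated, grows into a position error that is quadratic in time.

```python
# A tiny dead-reckoning sketch of the error accumulation described above:
# a constant 0.05 m/s^2 accelerometer bias, doubly integrated over 10 s,
# produces roughly 0.5 * 0.05 * 10^2 = 2.5 m of position drift even though
# the robot never moves. All numbers are illustrative assumptions.

def integrate_position(accels, dt):
    """Doubly integrate acceleration samples (Euler) into a position."""
    v = x = 0.0
    for a in accels:
        v += a * dt
        x += v * dt
    return x

dt, seconds = 0.01, 10
true_accel = [0.0] * int(seconds / dt)     # robot actually stands still
biased = [a + 0.05 for a in true_accel]    # small uncorrected sensor bias
drift = integrate_position(biased, dt)     # ends up around 2.5 m of error
```

Doubling the time span quadruples the drift, which is why pure inertial navigation degrades so quickly without an absolute reference.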
Then, the ABS is described. It consists of 4 beacons mounted to the ceiling, and 2 ultrasonic sensors attached to the robot. The technique essentially uses radio frequency triangulation to determine the absolute position of the robot. The last sensor described is an odometer, which needs no further explanation.
Then, the paper discusses the model used to represent the system in code. Notably the system is somewhat easier to understand, as the in-plane measurements mean that much of the complexity of the robot's position is restricted to 2 dimensions. The paper also discusses the filtering and processing techniques used, such as a Kalman filter to combat noise and drift. The final processing pipeline discussed is immensely complex due to the inclusion of bounce, collision and beacon-failure handling.
Lastly, the paper discusses the results of their tests on the accuracy of the system, which showed a very accurate system, even when the beacon is lost.
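The core of such a filter can be sketched in one dimension. This is a minimal textbook Kalman filter, far simpler than the paper's actual pipeline, and the noise variances are assumed values:

```python
# A minimal 1-D Kalman filter in the spirit of the filtering the paper uses
# against noise and drift (the real pipeline is far more complex): odometry
# drives the prediction, the ultrasonic beacon fix drives the correction.

def kalman_step(x, p, u, z, q=0.01, r=0.25):
    """One predict/update cycle.

    x, p : position estimate and its variance
    u    : odometry displacement since the last step (prediction)
    z    : absolute beacon position measurement (correction)
    q, r : process and measurement noise variances (assumed values)
    """
    x, p = x + u, p + q              # predict: move, grow uncertainty
    k = p / (p + r)                  # Kalman gain: weigh the two sources
    return x + k * (z - x), (1 - k) * p

# A stationary robot at 5.0 m: odometry reports no motion, the beacon
# repeatedly measures 5.0, and the estimate converges from a bad prior.
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, u=0.0, z=5.0)
```

After a few dozen steps the estimate sits at the beacon position with a small steady-state variance, illustrating how the absolute beacon fixes cancel the odometer's drift.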
====Stepper motors: fundamentals, applications and design====
This book goes over what stepper motors are, variations of stepper motors as well as their make-up. Furthermore, it goes in-depth about how they are controlled.
====Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities====
According to the authors, advances in visual-inertial odometry (VIO), which is the process of determining the pose and velocity (state) of an agent using the input of cameras, have opened up a range of applications like AR and drone navigation. Most VIO systems use point clouds, and to provide real-time estimates of the agent's state they create sparse maps of the surroundings using power-heavy GPU operations. In the paper the authors propose a method to incrementally create a 3D mesh of the VIO optimization while bounding memory and computational power.
The authors' approach is to create a 2D Delaunay triangulation from tracked key points and then project it into 3D. This projection can have issues where points are close in 2D but not in 3D, which is solved by geometric filters. Some algorithms update a mesh for every frame, but the authors try to maintain a mesh over multiple frames to reduce computational complexity, capture more of the scene and capture structural regularities. Using the triangular faces of the mesh they are able to extract geometry non-iteratively.
In the next part of the paper, they talk about optimizing the optimization problem derived from the previously mentioned specifications.
Finally, the authors share some benchmarking results on the EuRoC dataset, which are promising, as the method performs optimally in environments with regularities like walls and floors. The pipeline proposed in this paper provides increased accuracy at the cost of some calculation time.
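The geometric-filter idea mentioned above can be illustrated with a small sketch. The threshold and function name are assumptions; the paper's actual filters are more elaborate, but the principle is the same: a triangle from the 2-D Delaunay triangulation is rejected if any of its edges is too long once lifted to 3-D.

```python
import math

# Sketch of the geometric filtering described above (threshold and names
# are assumed): a 2-D Delaunay triangle may connect keypoints that are
# adjacent in the image but far apart in depth, so triangles with any
# overly long 3-D edge are rejected before being added to the mesh.

MAX_EDGE_M = 0.5   # assumed maximum allowed 3-D edge length

def keep_triangle(p0, p1, p2, max_edge=MAX_EDGE_M):
    """p0..p2: 3-D points (x, y, z) of a triangle from the 2-D Delaunay."""
    for a, b in ((p0, p1), (p1, p2), (p2, p0)):
        if math.dist(a, b) > max_edge:
            return False
    return True
```

Two image-adjacent keypoints sitting on a wall and on a distant floor behind it would produce a long 3-D edge and the spurious triangle is dropped.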
====Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization====
In the robotics community visual and inertial cues have long been used together with filtering; however, this requires linearity, while non-linear optimization for visual SLAM increases quality and performance and reduces computational complexity.
The paper describes in high detail how the optimization objectives were reached and how the non-linear SLAM can be integrated with the IMU using a chi-square test instead of a RANSAC computation.
Finally, they show results of a test with their developed prototype, which shows that tightly integrating the IMU with a visual SLAM system really improves performance and decreases the deviation from the ground truth to close to zero percent after 90m distance travelled.
====Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry====
The authors of this paper propose an algorithm that fuses feature tracks from any number of cameras along with IMU measurements into a single optimization process, a method that handles feature tracking on cameras with overlapping fields of view, and a subroutine to select the best landmarks for optimization, reducing computational time; they also present results from extensive testing.
First the authors give the optimization objective, after which they give the factor graph formulation with residuals and covariances of the IMU and visual factors. Then they explain how they approach cross-camera feature tracking. This is done by projecting the location from one camera to the other using either stereo camera depth or IMU estimation; the projection is then refined by matching it to the closest image feature in the target camera using Euclidean distance. After this it is explained how feature selection is done: by computing a Jacobian matrix and then finding a submatrix that best preserves the spectral distribution.
Finally, experimental results show that their system is closer to the ground truth than other similar systems.
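The nearest-feature refinement step described above can be sketched as follows. The function name and search radius are assumptions for illustration; the paper's tracker operates on descriptor-backed features, but the distance-based snapping is the same idea:

```python
import math

# Illustrative version of the cross-camera refinement step described above
# (names and the search radius are assumptions): a feature's predicted
# location in the other camera is snapped to the closest detected image
# feature by Euclidean distance, provided one lies near enough.

def refine_match(predicted, candidates, max_dist=5.0):
    """predicted: (u, v) pixel from projection; candidates: detected (u, v)s.

    Returns the nearest candidate within max_dist pixels, or None.
    """
    best = min(candidates, key=lambda c: math.dist(predicted, c), default=None)
    if best is not None and math.dist(predicted, best) <= max_dist:
        return best
    return None
```

The radius guards against false matches: if the projection lands far from every detection, the track is dropped rather than snapped to an unrelated feature.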
====Optical 3D laser measurement system for navigation of autonomous mobile robot====
This paper presents an autonomous mobile robot which, using a 3D laser navigation system, can detect and avoid obstacles on its path to a goal. The paper starts by describing in high detail the navigation system, TVS. The system uses a rotatable laser and scanning aperture to form laser light triangles, which are formed by the light reflected off the obstacle. Using this method, the authors were able to obtain the information necessary to calculate the 3D coordinates. For the robot base, the authors used the Pioneer 3-AT, a four-wheel, four-motor skid-steer robotics platform.
After this the authors go in-depth on how the robot avoids obstacles. Via the usage of optical encoders on the wheels and a 3-axis accelerometer, the robot keeps track of its travelled distance and orientation. Via IR sensors the robot can detect obstacles that are a certain distance in front of it, after which it performs a TVS scan to avoid the obstacle. The trajectory the robot follows to avoid the obstacle is calculated using 50 points in the space in front of it, which are used to form a curve, which the robot then follows. Thus, after the robot starts up, it calculates an initial trajectory to the goal location, after which it recalculates the trajectory whenever it encounters an obstacle.
Finally, the authors go over their results from simulating this robot in MATLAB as well as analyse its performance.
====A mobile robot based system for fully automated thermal 3D mapping====
This paper showcases a fully autonomous robot which can create 3D thermal models of rooms. The authors begin by describing what components the robot uses, as well as how the 3D sensor (a Riegl VZ-400 laser scanner from terrestrial laser scanning) and the thermal camera (an Optris PI160) are mutually calibrated. Both cameras are mounted on top of the robot, together with a Logitech QuickCam Pro 9000 webcam. After acquiring the 3D data, it is merged with the thermal and digital images via geometric camera calibration. After that the authors explain the sensor placement. The paper's approach to the memory-intensive issue of 3D planning is to combine 2D and 3D planning: the robot starts off by only using 2D measurements; however, once it detects an enclosed space it switches to 3D NBV (next best view) planning.
The 2D NBV algorithm starts off with a blank map and explores based on the initial scan, where all inputs are range values parallel to the floor, distributed over the 360-degree field of view. A grid map is used to store the static and dynamic obstacle information. A polygonal representation of the environment stores the environment edges (walls, obstacles). This NBV process is composed of three consecutive steps: vectorization (obtaining line segments from input range data), creation of the exploration polygon, and selection of the NBV sensor position, i.e. choosing the next goal. The room detection is grounded in the detection of closed spaces in the 2D map of the environment. Finally, the authors showcase the results from their experiments with the robot, showing 2D and 3D thermal maps of building floors, the 3D reconstruction of which is done using the Marching Cubes algorithm.
====A review of 3D reconstruction techniques in civil engineering and their applications====
This paper presents and reviews techniques to create 3D reconstructions of objects from the outputs of data collection equipment. First the authors researched the currently most used equipment for getting the 3D data (laser scanners (LiDAR), monocular and binocular cameras, and video cameras), which is also the equipment that the paper focuses on. From this they classify two categories for camera-based 3D reconstruction: point-based and line-based. Furthermore, 3D reconstruction techniques are divided into two steps in the paper: generating point clouds and processing those point clouds. For generating the point clouds:
For monocular images: feature extraction, feature matching, camera motion estimation, sparse 3D reconstruction, model parameter correction, absolute scale recovery and dense 3D reconstruction.
Feature extraction: gaining feature points which reflect the initial structure of the scene. Algorithms used for this are feature point detectors and feature point descriptors.
For stereo images, the camera motion estimation and absolute scale recovery steps are skipped, and instead the camera needs to be calibrated before feature extraction.
After this the authors explain how to generate point clouds from video images.
In the section on techniques for processing data, the authors showcase a couple of algorithms for data processing. For point cloud registration they use ICP; for mesh reconstruction, PSR. For point cloud segmentation they divide the algorithms into two categories: feature-based segmentation (region growth and clustering, K-means clustering) and model-based segmentation (Hough transform and RANSAC). After this the authors go in depth on applications of 3D reconstruction in civil engineering, such as reconstructing construction sites and reconstructing pipelines of MEP systems.
Finally, the authors go over the issues and challenges of 3D reconstruction.
====2D LiDAR and Camera Fusion in 3D Modelling of Indoor Environment====
This paper goes over how to effectively fuse data from multiple sensors in order to create a 3D model. An entry-level camera is used for colour and texture information, while a 2D LiDAR is used as the range sensor. To calibrate the correspondences between the camera and LiDAR, a planar checkerboard pattern is used to extract corners from the camera image and the intensity image of the 2D LiDAR. Thus, the authors rely on 2D-2D correspondences. A pinhole camera model is applied to project 3D point clouds to 2D planes. RANSAC is used to estimate the point-to-point correspondence. Using transformation matrices, the authors match the colour images of the digital camera with the intensity images. By aligning 3D colour point clouds from different locations, the authors generate the 3D model of the environment. Via a WidowX turret servo, the 2D LiDAR is moved in the vertical direction for a 180-degree horizontal field of view. The digital camera rotates in both vertical and horizontal directions to generate panoramas by stitching series of images. In the third paragraph the authors go over how they calibrated the two image sources. To determine the rigid transformation between camera images and the 3D point cloud a fiducial target is used, RANSAC is used to estimate outliers during the calibration process, and a checkerboard with 7x9 squares is employed to find correspondences between LiDAR and camera. Finally, the authors go over their results.
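The pinhole projection the paper applies to map 3D points onto the 2D image plane can be sketched as below. The intrinsic values are made-up, and lens distortion is ignored:

```python
# Sketch of the pinhole camera model mentioned above, used to project 3-D
# camera-frame points to pixel coordinates. The intrinsics (fx, fy, cx, cy)
# here are made-up values, and lens distortion is ignored.

def project_pinhole(point, fx, fy, cx, cy):
    """Project a 3-D camera-frame point (x, y, z) to pixel coordinates."""
    x, y, z = point
    if z <= 0:
        return None          # point lies behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

# A point on the optical axis lands exactly on the principal point:
uv = project_pinhole((0.0, 0.0, 2.0), fx=600, fy=600, cx=320, cy=240)
```

Applying this projection to every LiDAR point, after transforming it into the camera frame with the calibrated rigid transformation, is what lets the colour image be draped over the range data.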
<br />
====A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR====
This paper is a review of multiple SLAM systems whose main vision component is a 3D LiDAR integrated with other sensors. LiDAR, camera and IMU are the 3 most used components, and all have their advantages and disadvantages. The paper discusses LiDAR-IMU coupled systems and Visual-LiDAR-IMU coupled systems, both tightly and loosely coupled.
Most loosely coupled systems build on the original LOAM algorithm by J. Zhang et al. from 2014, and there have been many advancements in this technology since. The LiDAR-IMU systems often use the IMU to increase the accuracy of the LiDAR measurements, and new developments involve speeding up the ICP algorithm used to combine point clouds with clever tricks and/or GPU acceleration. The LiDAR-Visual-IMU systems use the complementary properties of LiDAR and cameras: LiDAR performs poorly in environments without distinctive structure, while vision sensors lack the ability to perceive depth, so the cameras are used for feature tracking and, together with the LiDAR data, allow for more accurate pose estimation.
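The ICP algorithm these systems accelerate can be sketched in its basic point-to-point form: brute-force nearest neighbours plus the SVD (Kabsch) estimate of the best rigid transform at each iteration. This is a generic minimal sketch, not code from any of the reviewed systems:

```python
import numpy as np

def icp(src, dst, n_iters=20):
    """Minimal point-to-point ICP aligning Nx3 `src` onto `dst`.

    Returns (R, t) such that src @ R.T + t approximates dst.
    """
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iters):
        # Brute-force nearest neighbour in dst for every source point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Best rigid transform between centred point sets (Kabsch/SVD).
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ S @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

The "clever tricks" in the literature mostly replace the quadratic nearest-neighbour search here with k-d trees, voxel hashing, or GPU parallelism.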
This paper proposes a mathematically oriented way of mapping environments. Based on relative entropy, the authors evaluate a mathematical way to produce a planar map of an environment, using a laser range finder to generate local point-based maps that are compiled into a global map of the environment. Notably, the paper also discusses how to localize the robot in the produced global map.
The generated map is a continuous curve that represents the boundary between navigable spaces and obstacles. The curve is defined by a large set of control points which are obtained from the range finder. The proposed method involves the robot generating and moving to a set of observation points, at which it takes a 360-degree snapshot of the environment using the range finder, finding a set of points several specified degrees apart, with some distance from the sensor. The measured points form a local map, which is also characterised by the given uncertainty of the measurements. Each local map is then integrated into the global map (a combination of all local maps), which is then used to determine the next observation point and position of the robot in global space.
The researchers go on to describe how the quality of the proposal is measured, namely in the distance travelled and uncertainty of the map. The uncertainty is a function of the uncertainty in the robot's current position, and the accuracy of the range finder. The robot has a pre-computed expected position of each point, and a post-measurement position of each point, which is then evaluated through relative entropy to compute the increment of the point-information. This and similar equations for the robot's position data are used to select the optimal points for observing the environment.
Lastly, the points of each observation point are combined into one map, by using the robot's position data.
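The relative-entropy measure used above is the Kullback–Leibler divergence. For Gaussian position estimates it has a closed form, which can be sketched as follows; this is the standard formula as a generic illustration, not the paper's exact scoring function:

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Relative entropy D_KL(N0 || N1) between two multivariate Gaussians.

    A new measurement that shifts or sharpens a point's estimated
    position yields a larger divergence from the prior estimate,
    i.e. a larger information increment.
    """
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = np.asarray(mu1) - np.asarray(mu0)
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))
```

Observation points can then be ranked by the total expected information increment over the map points they would re-observe.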
====Mobile Robot Localization Using Landmarks====
The paper discusses a method to determine a robot's position using landmarks as reference points. This is a more absolute system than purely inertia-based localization. The paper assumes that the robot can identify landmarks and measure their position relative to each other. Like other papers, it highlights its importance due to error accumulation in relative methods.
It highlights the robot's capability to:
- Use this data to compute its position.
It uses triangulation between 3 landmarks to find its position, with low error. The paper also discusses how to re-identify landmarks that were misjudged with new data. The robot takes 2 images (using a reflective ball to create a 360-degree image) and solves the correspondence problem (identifying an object from 2 angles) to find its location. In the paper, the technique is tested in an office environment.
The paper discusses how to perform triangulation using an external coordinate system and the localisation of the robot. The vectors to the landmarks are compared, and using their angle and magnitude the position can be computed. Next, the paper discusses the same technique, adjusted for noisy data. The paper uses least squares to derive an estimation that can be used, evaluating the robot's rotation relative to at least 2 landmarks.
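The least-squares pose estimate can be sketched for the planar case: each landmark at known world position L_i, observed as vector v_i in the robot frame, gives the constraint L_i = p + R(θ)·v_i, which is linear in (cos θ, sin θ, p). This is our own minimal reconstruction of the idea, not the paper's algorithm:

```python
import numpy as np

def localize(landmarks, measurements):
    """Least-squares 2D robot pose from landmark observations.

    landmarks:    Nx2 known world positions L_i.
    measurements: Nx2 vectors v_i to each landmark in the robot frame.
    Solves L_i = p + R(theta) v_i by linearising over (cos, sin)
    and recovering theta afterwards. Needs at least 2 landmarks.
    """
    A, b = [], []
    for (lx, ly), (vx, vy) in zip(landmarks, measurements):
        # Lx = px + c*vx - s*vy ;  Ly = py + s*vx + c*vy
        A.append([vx, -vy, 1.0, 0.0]); b.append(lx)
        A.append([vy,  vx, 0.0, 1.0]); b.append(ly)
    (c, s, px, py), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    theta = np.arctan2(s, c)
    return theta, np.array([px, py])
```

With noisy measurements the same over-determined system is solved; more landmarks simply add rows and average out the error.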
====The Fuzzy Control Approach for a Quadruped Robot Guide Dog <ref>The Fuzzy Control Approach for a Quadruped Robot Guide Dog | SpringerLink</ref>====
This paper basically makes a robot guide dog: think of Spot from Boston Dynamics with a leash, trained to guide blind people. A good thing is that Spot has proven able to walk stairs, so it should be fast. A problem is that it is hard to guide blind people, given its low viewpoint.
The paper also gives a ‘fuzzy’ control process which makes sure that variation in road surfaces does not affect the dog. The rest of the paper shows how this controller can be designed; it does not show how to guide a blind person.
Their conclusion shows that their fuzzy algorithm improved how smoothly the dog walked.
====Design of a Portable Indoor Guide Robot for Blind People====
This design approaches the guide dog replacement differently: not with a quadruped robot, but with what is basically a motorized walker with sensors on it. It is mainly aimed at indoor use. This paper also did some research on what blind people need; a survey it conducted, for example, says that 90% of people worry about obstacles in the air while travelling.
This robot is foldable and has an unfolded height of 700 mm. Furthermore, the mechanical design is well explained. This design has no real stair-walking capabilities.
The conclusion stated that the robot did well, and that it was a low-cost, convenient-to-carry, and strong-perception blind guide robot.
====Guiding visually impaired people in the exhibition====
This paper goes over the design of an autonomous navigation robot for blind people in unfamiliar environments. The paper also includes the results of a user study done for this product. The robot uses a floorplan with relevant Points-of-Interest, a LiDAR, and a stereo camera with convolutional neural networks for localisation, path planning and obstacle avoidance.
=====Design=====
The robot moves as a differential steered system, with motors controlled by a RoboClaw controller, and allows users to manually push/pull it. It uses a LiDAR and a stereo camera (ZED) and is implemented with ROS (Robot Operating System). It is shaped like a suitcase so that it can blend in with the environment; like this it can also simulate a guide dog, being held on the left side and standing slightly in front of the user. This allows the robot to protect the user from collisions. For mapping, the robot relies on a floorplan map with the locations of points of interest. Via the LiDAR, which is placed on the frontal edge of the robot, the environment is mapped beforehand. Localisation: using wheel odometry and LiDAR scanning, the robot estimates its current location, comparing the real-time scan to the previously generated map using the Adaptive Monte Carlo Localisation (AMCL) package of ROS. In addition, odometry information can be computed using the LiDAR and stereo camera. Path planning: a path on the LiDAR map is planned based on the user's starting point and destination. To avoid obstacles and to navigate a dynamic environment, local low-level pathing is implemented using the navigation packages of ROS. The robot also considers the space occupied both by it and the user in its pathfinding, via a custom algorithm. The robot also provides haptic feedback.
The authors use vibro-tactile feedback (different vibration locations and patterns) on the handle to convey the intent of the robot to the user. Via buttons on the handle one can change the speed of the robot. After this explanation, the paper goes over the conducted user study and its results.
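The wheel-odometry part of the localisation can be illustrated with the standard differential-drive dead-reckoning update. This is a generic sketch of the textbook update, not code from the CaBot paper:

```python
import numpy as np

def odometry_step(pose, d_left, d_right, wheel_base):
    """Advance a differential-drive pose (x, y, theta) given the wheel
    travel increments d_left / d_right since the last update."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0              # distance of the midpoint
    dtheta = (d_right - d_left) / wheel_base  # heading change
    # Integrate along the arc using the midpoint heading.
    x += d * np.cos(theta + dtheta / 2.0)
    y += d * np.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)
```

AMCL then corrects the drift that accumulates in this estimate by matching each LiDAR scan against the prebuilt map.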
====Tour-Guide Robot====
====Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques====
This paper reviews existing autonomous campus and tour-guiding robots. SLAM is the most often used technique, building a map of the environment and guiding the robot to the goal position. Common techniques for robot navigation include human-machine interfaces, speech synthesis, obstacle avoidance and 3D mapping. ROS is a popular open-source framework to operate autonomous robots; it provides services designed for a heterogeneous computer cluster. SLAM is achieved via laser scanners (LiDAR) or RGB-D cameras. The paper names some popular such robots:
TurtleBot 2: a low-cost, ROS-enabled autonomous robot, using a Microsoft Kinect camera (an RGB-D camera). TurtleBot 3 is the upgraded version, which uses LiDAR instead.
Pepper robot: a service robot used for assisting people in public places like malls, museums and hotels. It uses wheels to move.
The authors propose the usage of a Kinect v2 sensor, rather than range finders such as 2D LiDAR, as dense and robust maps of the environment can be created with it. It is based on the time-of-flight measurement principle and can be used outdoors. The paper also introduces noise models for the Kinect v2 sensor for calibration in both axial and lateral directions. The models take the measurement distance, angle and sunlight incidence into account.
As an example of a tour guide robot, the paper presents Nao, which provides tours of a laboratory. This robot is more focused on the human interaction and thus can perform and detect gestures.
NTU-1: an autonomous tour guide robot that gives guided tours on the campus of the National Taiwan University. It is a big robot, weighing around 80 kg, with a two-wheel differential drive actuated by DC brushless motors. It uses multiple sensing technologies such as DGPS, dead reckoning and a digital compass, which are all fused by way of Extended Kalman Filtering.
For obstacle avoidance and shortest-path planning, 12 ultrasonic sensors are used, allowing the robot to detect objects within a range of 3 meters.
Another robot that is explored in the paper is an intelligent robot for guiding the visually impaired in urban environments. It uses two Laser Range Finders, GPS, a camera, and a compass. Other touring robots explored in the paper are ASKA, Urbano, Indigo, LeBlanc, Konard and Suse.
<references />
Latest revision as of 18:00, 10 April 2023
==Group members==
{| class="wikitable"
!Name
!Student id
!Role
|-
|Vincent van Haaren||1626736||Human Interaction Specialist
|-
|Jelmer Lap||1569570||LIDAR & Environment mapping Specialist
|-
|Wouter Litjens||1751808||Chassis & Drivetrain Specialist
|-
|Boril Minkov||1564889||Data Processing Specialist
|-
|Jelmer Schuttert||1480731||Robotic Motion Tracking Specialist
|-
|Joaquim Zweers||1734504||Actuation and Locomotive Systems Specialist
|}
==Introduction==
In this project we have been allowed to pursue a self-defined project. Of course, the focus should be on USE: User, Society, and Enterprise. Our chosen project is the design of a product. Taking inspiration from our personal experiences, we have chosen to find a solution to the navigation problems we encounter in the campus buildings of the TU/e. After some research on the topic and after contacting the TU/e Real Estate department, we found out that there was demand for guidance robots for people with visual impairment. This was thus chosen as our topic. More specifically, the problem statement is: ‘Visually impaired people have ineffective means of navigating through the, at times, confusing pathways of campus buildings.’ When researching state-of-the-art electronic travel aids (ETAs), we found 3 distinct types of solutions: robotic navigation aids, smartphone solutions, and wearable attachments. The pros and cons are described in the table below:
{| class="wikitable"
!Types of ETA
!Implementation
!Advantages
!Negatives
|-
|Robotic Navigation Aids
|Smart Cane
|Offers portability and can be used as a normal white cane should the electronics cease to function
|Needs to be compact and lightweight; lacks obstacle information because of its restricted sensing ability; offers little information for wayfinding and navigation purposes, as that requires bigger and bulkier hardware
|-
|Robotic Navigation Aids
|Robotic guide dog/mobile robot
|The system gives room for larger hardware, as it does not require a user to carry it
|Complicated mechanicals while manoeuvring through stairs and terrain
|-
|Robotic Navigation Aids
|Robotic Wheelchair
|Suitable for the elderly and people who have a physical limitation; provides navigation and mobility assistance for the elderly visually impaired who cannot walk on their own, the multi-handicapped, or people who have more than one disabling condition
|Safety remains an issue, as user mobility fully depends on the robotic wheelchair; navigation, road-crossing and stair climbing are difficult circumstances where the reliability of the wheelchair is of extreme necessity
|-
|Smartphone solutions
|Android apps, maps, image processing
|Mobility/portability; no load or invasive factor, as the only device is the smartphone
|The system depends on the sensors available on the smartphone; it may communicate with an outer sensor such as a beacon or external server, but that limits the usage to indoors; requires a certain orientation for image processing, or an internet signal for online maps
|-
|Wearable Attachments
|Eyeglasses, glove, belt, headgear, backpack
|Gives a natural appearance to the visually impaired user when navigating outdoors
|Too much attention is required, giving a cognitive load to the user; these devices are intrusive as they cover the ears and involve the use of hands; users are burdened with the system's weight; requires an extended period of training
|}
Sourced from: [1]
Furthermore, another state-of-the-art solution for guiding devices was found: a device which would use electronic waypoints installed in the building to localise the user and relay directions and information about the surroundings[2].
A previous attempt was made at the TU/e (our case study) to use this method, but because it required infrastructure to be created in all the buildings in which it would work, it was never implemented. Therefore, we have decided to discard all solutions which would require such infrastructure.
Wearable attachments have been discarded as they are inherently invasive, meaning the user has to equip them themselves. Furthermore, larger attachments with many sensors are made impossible by weight limits, and wearing such a device during extended meetings is impractical. Any such device also requires some prior knowledge of how to operate it. For all these reasons, we have chosen not to pursue wearable attachments.
We decided against smartphone solutions because it would be difficult to make a one-size-fits-all solution due to differing phones and sensors. A slightly more biased reason is that half of our group members are not at all adept at creating such applications and have no interest in the field. We also worried that we would struggle to create a practical app due to the limitations of phone hardware.
The robotic wheelchair was decided against due to its invasive nature and concerns for the user's autonomy. Furthermore, this solution would be very bulky, which makes it unsuited for crowded spaces. Additionally, the user base will most likely consist of otherwise able-bodied students who do not need such support and might feel uncomfortable using such a device.
A smart cane is not well suited to guide the user due to the small form factor and weight requirement, which would make inside-out localisation difficult.
The mobile platform guide robot has a few problems besides its price. The most important one is that it has trouble navigating stairs and rough terrain. Luckily, the robot will (for now) only operate indoors in TU/e buildings. The TU/e campus has walk bridges connecting buildings and elevators in (almost) all buildings, which mitigates most of the solution's downsides. These factors make it a very suitable place to implement such a guidance robot.
In summary, we chose a robotic guide due to its user accessibility and potential for future improvements. It is a good way for people (with visual impairment or not) to be navigated through buildings.
==State of the art==
It is commonly known that the most common tools used by visually impaired people are the white cane and the guide dog. The white cane is used for navigation and identification: with its help, these people get tactile information about their environment, allowing the visually impaired to explore their surroundings and detect obstacles. However, its use can be cumbersome, as it can get stuck in cracks or tiny spaces, and its efficiency is limited in bad weather conditions or in a crowd.[3] The guide dog, on the other hand, can guide the user through familiar paths while also avoiding obstacles. Guide dogs can assist with locating steps, curbs and even elevator buttons, and can keep their user centred, when crossing sidewalks for example.[4][5] There are a couple of issues with guide dogs, however. They can only work for 6 to 8 years and have a very high cost of training.[6] They also require constant work to maintain that training, and the dog can get sick. Another potential issue is bystanders petting or taking interest in the dog while it is working, which is a detriment to the handler.[7]
None of these tools can efficiently assist the person in navigating to a specific landmark in an unknown environment.[8] That is why a human assistant is currently preferred/needed to perform such a task, for example when walking in a museum.[9] As for technological means, there is currently no robot capable of efficiently performing such a task, especially if the environment is a crowded building. However, there are multiple robots that have implemented parts of this function. In the following paragraphs we have divided them into their own sections for ease of reading.
===Tour-guide robots===
We first begin with the tour-guide robots. These robots are used in places such as museums, university campuses, workplaces and more. Their objective is to guide a user to a destination; once there, these robots will most often relay information about the object, exhibition or room at the destination. In terms of implementation, these robots use a predefined map of the environment, where digital beacons are placed to mark the landmarks and points of interest. These robots also often make use of ways to detect and avoid obstacles, such as laser scanners (e.g. LiDAR), RGB cameras, Kinect cameras or sonars. This research paper [10] goes in depth on the advances in this field over the last 20 years, the most notable examples of which are "Cate" and "Konard and Suse". As our goal is to guide visually impaired people throughout the TU/e campus, this field of robotics is of utmost interest for the navigation system of a guidance robot.
===Aid technology for the visually impaired===
This section is split into two. First, we cover guidance robots for the visually impaired, after which we cover other technological aids that have been created for this user group.
====Guidance robots====
Guidance robots for the visually impaired are very similar to the tour-guide robots. They often use much the same technology to navigate through the environment (a predefined map with landmarks, and obstacle detection and avoidance). What differentiates these robots from the tour-guide robots is the adaptation of their shape and functionality to better suit the needs of the visually impaired. The robots have handles or leashes which the visually impaired can hold, much the same as with a guide dog or a white cane. As the user cannot see, the designs incorporate ways of communicating the intent of the robot to the user, as well as ways of guiding the user around obstacles together with the robot. Examples of such designs are the CaBot[11], a suitcase-shaped robot that stands in front of the user; it uses a LiDAR to analyse its environment and incorporates haptic feedback to inform the user of its intended movement pattern. Another possible design is the quadruped robot guide dog[12], which, based on Spot, could be used as a robotic guide dog given some adjustments. Finally, there is also the design of a portable indoor guide robot[13], a low-cost guidance robot which also alerts the user to obstacles in the air.
As our design has the objective of guiding the user through a university campus, it is reasonable to expect that there will be crowds of students at certain times of the day. For our design to be helpful, it needs to handle such situations in an efficient way. Thus, we took inspiration from the minor robotics field of crowd navigation. The goal of these robots is exactly that: enabling the robot to continue moving through a crowd, rather than freezing up every time there is an obstacle in front of it. Relevant research includes the paper "Unfreezing the Robot: Navigation in Dense, Interacting Crowds"[14] and a robot that can navigate crowds using deep reinforcement learning[15].
==User scenarios==
To get a better feeling for the problem and the possible solutions, two user scenarios were made that show the impact of the guide robot on visually impaired people who want to move through unknown crowded spaces. The designs mentioned in these stories are not what we ended up making, but the intended goal is the same: these stories and the solution we ended up making both try to expand the navigational tools a guidance robot has in crowded spaces. It is important to note that some parts of the robot described here fall outside the scope of the exact problem solved.
===Physical contact through crowded spaces===
Jack is partially sighted and can see only a small part of what is in front of him. He has recently been helping fellow students with their field tests of a robot guide. Last month he worked with a robot called Visior, which helps steer him through his surroundings. Visior is inspired by and shares its physical features with CaBot.
When Jack used Visior to get to the library to pick up a print request, he had to pass through a moderately crowded Atlas building, since there was an event going on. This went mostly as expected: not too fast, and having to stop semi-periodically because of people walking or stopping in front of Visior. The robot was strictly disallowed from purposely making physical contact with other humans. Jack knows this, so he learned to step up in these situations and kindly ask the people in front to make way. This used to happen less when he used his white cane, since people would easily identify him and his needs. After Jack arrived at the printing room in MetaForum, he picked up his print request. He handily put his batch of paper on top of his guiding robot, so he didn't have to carry it himself.
On his way back he almost fell over his guiding robot when it suddenly stopped as a hurried student ran by. Luckily, he did not get hurt. When Jack came home after this errand, he crashed on his couch after an exhausting trip of anticipating the robot's quirky behaviour.
The next day the researchers and developers of Visior came to ask about his experiences. Jack told them about his experience with Visior and their trip to the library. The developers thanked him for his feedback and started working on improving Visior.
This week they came back with the new and improved Visior robot. This version has been fitted with a softer exterior and now rides in front of Jack instead of by his side. The developers have made it capable of safely bumping into people without causing harm. They also made it capable of communicating with Jack when it thinks it might have to stop suddenly, to put Jack a bit more at ease when travelling together.
The next day Jack used it to again make a trip to the printing space in MetaForum to compare the experience. When passing through the crowded Atlas again (there somehow always seems to be an event there) he was pleasantly surprised. He found it easier to trust Visior now that it was able to communicate the points in the trip where it thought they might have to stop or bump into other pedestrians. For example, when they came across a slightly more crowded space, Visior guided Jack to walk alongside a flow of other pedestrians and made him aware of the somewhat unpredictable nature of their surroundings. Then, when a student suddenly tried to cross their path without looking, Visior bumped into their side and gradually slowed their pace down to a halt. Jack obviously felt the bump but was easily able to stay stable thanks to the prior warning and the less drastic decrease in speed. The student, now aware of something moving in their blind spot, immediately stepped out of the way and looked at Jack and Visior, seeing the sticker stating that Jack was visually impaired. Jack asked whether they were alright; they said they were fine, and both went on their way. After picking up his print he went back home. On his way back he had to pass through the small bridge between MetaForum and Atlas, where a group of people were now talking, blocking a large part of the walking space. Visior guided Jack to a small traversable path open beside the group, taking the risk that the person there would slightly move onto their path. Visior and Jack could luckily squeeze by without any trouble, and their way back home was further uneventful.
When the developers of Visior came back the next day to check up on him, Jack told them the experience was leagues better than before. He found walking with Visior less exhausting than it had been, and its behaviour more human-like, making it easier to work with.
===Familiar guidance advantage===
Meet Mark from Croatia. He is a minor student following Mathematics courses and lives on (or near) campus. Mark is severely near-sighted; born with the condition, he has never seen very well. Mark is optimistic but chaotic. He likes his study and enjoys playing piano.
Notable details: Mark makes use of a white cane and audio-visual aids to assist with his near-sightedness. He just transferred to TU/e for a minor and doesn't know many people yet; he will only be here for a short time. He has a service dog, but does not have the resources, time, or connections to provide for it here, so he left it at home.
Indoors, Mark finds it hard to use his cane because of crowded hallways, and he dislikes having to apologize when hitting someone with his cane or being an inconvenience to his fellow students. Mark can read and write English fine but still feels the language barrier.
In a world without our robot, Mark might have to navigate like this: Mark has just arrived for his second day of lectures and will be going to the wide lecture hall Atlas -0.820. Mark again managed to walk to Atlas (as we will not be tackling exterior navigation) and uses his cane and experience to navigate the stairs and revolving door of Atlas, using it to determine the speed and size of the revolving element to get in, and to determine the position of the doors and opening[16].
Once inside, he is greeted by a fellow student who noticed him navigating the door. Mark had already started concealing the use of his cane, as he doesn't like the attention, which is also why the university staff didn't notice him. Luckily, his fellow student is more than willing to help him get to his lecture hall. Unfortunately, the student is not well versed in guiding visually impaired people around, and it has gotten busy with students changing rooms.
Mark is dragged along to the lecture hall by his fellow student, bumping into other students who don't notice he cannot see them, as his guide hastily pulls him past. Mark almost loses his balance when his guide slips past some other students, narrowly avoiding a trashcan while dragging Mark by his arm. Mark didn't see the trashcan, which is not at eye level, and collides with the metal frame while trying to copy the movement of his guide to dodge the other students. He is luckily unharmed and manages to follow his guide again, until he is finally able to sit in the lecture hall, ready to listen for another day.
The next day a student sees Mark struggling with the door and shows Mark a guide robot. The robot has the task of getting Mark to the lecture hall he needs to be in. It starts moving and communicates its intended speed and direction through the feedback in the handle. As a result, Mark can anticipate the route the robot will take, similar to how a human guide would apply force to Mark's hand to change direction.
The robot has reached the crowd of students moving through the busy part of Atlas. Its primary objective is to get Mark through this, and even though many students notice the robot going through, it still uses clear audio indications to warn students that it will be moving through, and notifies Mark through the handle that it is entering an alternate mode. Mark notices and becomes alert, as he also feels that the robot reduces the number of turns it makes, navigating through the crowd along the most straightforward route it can take. Mark likes this: it makes it easy for him to follow the robot, and for others to avoid them.
Still, a sleepy student bumps into the robot as it is crossing. Luckily the robot is designed for contact with other students: its rounded shape, enclosed wheels (and other moving parts), and softened bumpers prevent harm. The robot does, however, slightly reduce its pace and makes an audible noise to let the sleepy student know they bumped into it. Mark also notices the collision, partially because the bump makes the robot shake a little and lose a bit of pace, but mainly because the handle clearly and alarmingly notifies him. Mark also knows the robot will continue, as the feedback of the handle indicates that it is not stopping.
After the robot gets through the crowd, it makes it to the lecture hall. It parks just in front of the door and tells Mark to extend his free hand slightly above hip level, telling him they have arrived at a closed door that opens towards them, swinging to his right, similar to how a human guide would, so Mark can grab the door handle and, with support of the robot, open the door. The robot then slowly precedes Mark into the space; it goes a bit too fast, though, and Mark applies force to the handle, pulling it slightly in his direction. The robot notices this and waits for Mark.
After they enter the lecture hall, the robot asks the lecturer to guide Mark to an empty seat (and may provide instructions on how to do so). When Mark is seated, the robot returns to its spot near the entrance, waiting for the next person.
==Problem statement==
The previous problem statement was quickly found to be too broad. During the research into the state of the art, it was found that the problem consists of a plethora of sub-problems which all have to work in tandem to create a functional solution. For this reason, it is important to scope the problem as much as possible to create a manageable project. Throughout research on the topic of guidance robots the following problems were identified:
*Localization of the guide
*Identification of obstacles or other persons
*Navigation in sparse crowds
*Navigation in dense crowds
*Overarching strategic planning (e.g., navigating between multiple floors or buildings)
*Interaction with infrastructure (e.g., doors, elevators, stairs)
*Effective communication with the user (e.g., the user being able to set a goal for the guide)
We decided to focus on ‘Navigation of guidance robots in dense crowds on TU/e campus’. This was chosen because such a ‘skill’ (an ability the guide can perform) is necessary for navigation on campus. Typical scenarios in which such a skill would be useful for a typical student are on-campus events, navigating in and out of crowded lecture rooms, or simply a crowded bridge or hallway. Besides its necessity, it is also an active field of study without a clear final solution yet[17]. Mavrogiannis et al.[17] define the task of social navigation as ‘to efficiently reach a goal while abiding by social rules/norms’.
A reformulation of our problem statement thus results in the following research question: ‘How should robots socially navigate through crowded pedestrian spaces while guiding visually impaired users?’
To work on this problem, the remaining functions of the list above are assumed to be working.
===Scoping the problem===
At this time the first meeting with assistant professor César López was held. Mr. López is part of the Control Systems Technology group of the TU/e and focuses on designing navigation and control algorithms for robots operating in a semi-open world. The most important recommendation from our meeting was that the navigation problem should be split up even further and that a more precisely defined crowd should be used to define the guide’s behaviour. He laid out that different crowds have different qualities. Crowds can roughly be split into chaotic crowds, where there is no exact order and behaviour is less predictable (e.g., an airport where everyone needs to go in different directions), and structured crowds, where behaviour is predictable, such as crowds walking in a hallway. The simplest structured crowd is one where all people walk in a single direction. This kind of behaviour is also described in a paper by Helbing et al.[18], which among other things covers crowd dynamics. The same paper describes how a crowd with only two opposing walking directions self-organizes into two side-by-side opposing ‘streams’ of people.
López then expanded on this finding by noting that the robot, in such a crowd, can roughly be in three distinct scenarios: it could walk along with a unidirectional crowd, walk in the opposite direction of a unidirectional crowd, or walk perpendicular to the unidirectional crowd. All of these have an application when navigating the university. López recommended that our research focus on only one of these scenarios, since they all need different behavioural models unless a general navigation method is found.
To summarize, efficiently navigating tight spaces, like hallways or to a lesser extent doorways, requires the guide to be able to navigate dense crowds which behave in a unidirectional manner. In navigating such a crowd, different approaches can be taken depending on the walking direction of the crowd relative to the guide.
On López’s recommendation, it was decided to narrow the behavioural research down to only walking alongside a unidirectional crowd since this was the most standard case.
To conclude this section, the final research question is defined as: ‘How should robots socially navigate through unidirectional pedestrian crowds while guiding visually impaired users?’.
This section will discuss the relevance and the impact of a safe crowd-navigating guidance robot on users, society at large, and enterprises.
===Users===
The robot has a number of possible users, but two types are distinguished in this design:
*The visually impaired handler of the robot
*The other persons participating in the crowd
In the Netherlands around 2.7% of the population has severe vision loss, including blindness[19]. That is over 400 thousand people who, in a new environment where only a room number is given, do not know which route to walk. There are aids such as a guide dog or a cane, but those prevent collisions with the environment rather than guiding a blind person to an unknown location in new surroundings. So, a device is needed that guides visually impaired people to a location on campus they have never visited, such as meeting room MetaForum 5.199. To guide them to this meeting room, navigation through crowds is needed.
As mentioned, modern robots have a freezing problem when walking through crowds, which is not optimal in the sometimes dense crowds on the TU/e campus. That is why nudging, and occasionally bumping, is needed. The challenge here is to guide the handler as smoothly as possible while sometimes nudging and bumping into third persons.
As the plan was to design a physical robot inspired by the CaBot, a lot of inspiration is taken from its developers’ user research with visually impaired people. On top of that, research has been done into guide dogs and their ways of guiding.
For third persons encountering the robot and its handler, research has mainly focused on the touching and nudging aspect of the robot: what reactions a touching robot may elicit, the safety of this concept, and the ethics of robotic touch.
Secondary users include institutions that provide the robot for visually impaired people to navigate through their buildings, such as universities, government buildings, shopping malls, offices, or museums.
===Society===
As mentioned above, 2.7% of the population suffers from severe vision loss; however, there are many more benefits to a robot that can safely and quickly navigate through a crowd. Any robot that has a mobile function in society will at some point encounter a crowd, whether dense or sparse, or simply people blocking an entry or hallway. Consider robots that work in social services such as restaurants, delivery robots, or guide robots for people other than the visually impaired, for example at museums or shopping malls.
For these robots, it is important that they can safely traverse crowds in the quickest way possible. The solution investigated and presented here is a good step in that direction. Of course, each of these robots would need a different design in order to properly execute its function, but the strength lies in the social algorithm, with which the robot moves through a crowd in a different way than current robots do.
Specifically for navigating visually impaired people, it helps with their accessibility and inclusivity in society. Implementing a robot such as this will allow them to be a more integral part of society without having to rely on other people.
===Enterprise===
For enterprises that might employ these robots there are two advantages. The use of the robot will enable visually impaired customers to have better access to any services the companies might provide. In addition, they will have a competitive advantage over competitors that do not provide such a robot or service. For example, a shopping mall would improve its accessibility, which would translate into more customers, whereas government buildings would improve general satisfaction.
Specifically for universities such as the TU/e, next to attracting more students, it improves their public image by showing the effort to make higher education possible and easier for all people. An advantage over other solutions, such as a human guide, is that no new employees need to be trained. No big infrastructure changes, such as extra cameras or sensors throughout the building, are needed, unlike for some other types of robots or navigators. And lastly, there is no issue of a failing connection with, for example, a smartphone.
==Project Requirements, Preferences, and Constraints==
===Creating RPC criteria===
====Setting requirements====
The most important thing in building a robot operating in public spaces is to make it complete its tasks in a safe manner; not harming bystanders or the user themselves. Most hazards in robot-human interactions (or vice versa) in pedestrian spaces are derived from physical contact[20]. This problem is even more present when working in crowded spaces where physical contact is impractical to avoid or cannot be avoided. Therefore, the robot has to be made physically safe; typical touch, swipes, and collisions are made non-hazardous. This term ‘physically safe’ will be abbreviated to ‘touch safe’ to make its meaning more apparent.
If the robot somehow exhibits unsafe behaviour the user should be able to easily stop the robot with an emergency stop. Because the robot is able to make physical contact and apply substantial force, it becomes even more paramount that rogue behaviour is easily stopped if it occurs.
When interacting with the user, the robot should make them feel safe and thus allow trust in the robot. If the user does not feel safe, they cannot trust the robot and might become unnecessarily anxious or stressed, with the result that they may avoid its services. Besides this, the user might display unpredictable or counter-productive behaviour, e.g., walking excessively slowly or not following the robot. To this end, the robot should be able to communicate its intent to the user so that they won’t have to be on edge all the time.
For the robot to be viable in practice there are some restrictions, like keeping the robot relatively cheap: the budget is not unlimited, and competing solutions like human guides exist for a set price, so too high a price would make robot guides obsolete. Our use case also restricts infrastructural modifications to the campus buildings of the TU/e, as a previous solution was rejected for this reason; installing waypoints all over the buildings was too large an investment.
====Setting preferences====
The robot should not slow down its user when avoidable, so an average speed of 1 m/s (the average walking speed of visually impaired users[21]) would be a good goal.
For the robot to reach its goal efficiently it should avoid stopping for people. Further reasons to avoid stopping are enabling the user to walk at a constant speed, requiring less mental strain, and avoiding hazards that occur when stopping in pedestrian spaces, such as surprising and hitting the person behind the user[20].
====Setting constraints====
For the robot to operate in our specified use case it should be able to navigate campus. This involves being able to navigate narrow walk bridges as well as wide-open spaces with different walking routes. Interaction with elevators or stairs will not be focused on in this research.
===RPC-list===
====Requirements====
*Safety
**Touch proof
**Does not harm bystanders or the user
**Installed emergency stop
*User feedback/interaction
**Should give feedback about intentions to the user
**Must be able to receive feedback and information from the user
**Handler should feel safe based on interaction with the robot
*Implementable
**Relatively cheap
**No infrastructural changes in buildings
====Preferences====
*A walking speed of 1 m/s (3.6 km/h) should be reached[21]
*Does not stop for people unnecessarily
====Constraints====
*Environment (TU/e campus)
**Narrow walk bridges/hallways
**Big open spaces
==The solutions==
In this section the worked-out solution to the problem statement is given. The solution consists of a physical and a behavioural description of the robot. These two factors influence each other: the design has an impact on how the robot should behave while socially navigating through a crowd, while the way it navigates through a crowd informs the specific requirements of the design. Together they give a clear answer to the research question of how a robot with this specific design should socially navigate through a unidirectional crowd while guiding visually impaired users.
This chapter consists of a detailed explanation of the physical design of the robot, which is designed to adhere as closely as possible to the RPC-list. After the design is defined, the corresponding behaviour will be defined using scenarios. These scenarios are used to explain the behaviour we would want to see and expect. In a broader sense, this should demonstrate how the method of navigation can be utilised to effectively and safely navigate through dense crowds.
===Design===
In this chapter the design of the robot model is documented. The main focuses of the design are safety and the communication of nudging, both to the visually impaired handler and to third persons.
The main inspiration for the design of the robot is the CaBot[11]. This is essentially a suitcase design (a rectangular box with four motorized wheels), with all its hardware inside the rectangular box. Interestingly, it also has a vision sensor on its handle (for a higher perspective). This design is rather simple, and the flat terrain on the TU/e campus should pose no problem for the wheels. The CaBot excels in guiding people to a new location but does not work through crowds. With regard to safety, the body design has been altered for nudging and bumping into people, and the handle design has been revamped for better communication to the user.
====Handle design====
As the robot’s behaviour is focused on traversing crowds of people, an important function is communicating direction to its user. Any audible direction will quickly interfere with the sounds of the surroundings, which can result in missing the entire message or cause confusion. Although a headset might allow for clearer communication, this is still not ideal. Therefore, the easiest way to provide feedback to the user is through the handle. There are a few functions the robot needs to communicate to the user, or that need to be controllable by the user:
*Speed
**Setting a faster or slower speed
**Communicating slowing down or accelerating
**Emergency stop
*Direction
**Turning left
**Turning right
All of these functions can be placed inside the handle, while designing for minimal strain on the user's active control. The average breadth of an adult male hand is 8.9 cm[22], which means that the handle needs to be big enough for people to hold on to while also incorporating the different sensors and actuators. For white canes, the WHO[23] has presented a draft product specification in which the handle has a diameter of 2.5 cm, which will be used for the handle of the robot as well. Since the robot functions similarly to a guide dog, the handle will have a design similar to guide-dog harnesses: a perpendicular, though not curved, handle that will stay in place when released.[24] To comfortably accommodate the controls and sensors described below, the total length of the handle will be 20 cm.
The handle, which is connected to the robot, will provide automatic directional cues without additional sensors or actuators. This simplifies the robot and makes it act more like a guide dog. As for speed, three systems would be implemented: the emergency stop, feedback about the acceleration and deceleration of the robot, and the user’s speed control. The emergency stop can be a simple sensor in the handle that detects whether the handle is currently being held; if not, the robot will automatically stop moving and stay in place. The speed can be regulated via a switch-like control, as visible in the CAD render on the right. When walking with a guide dog, the selected walking speed for visually impaired people is about 1 m/s[21], which means that with five settings (0 m/s, 0.5 m/s, 0.75 m/s, 1.0 m/s, and 1.25 m/s) the user can set their own speed preference. To give feedback about the current setting, the different numbers will be detailed in braille. Furthermore, changing settings will encounter some resistance and a palpable ‘click’ instead of being a smooth transition. The user can, at any time, use their thumb or any other finger to quickly check the position of the switch and determine the speed setting. The ‘click’ provides extra security that the speed will not be accidentally adjusted without the user being aware of it. To this end, a new setting will only affect the actual walking speed after a short delay, giving the user time to revert any changes.
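The detent-based selection with a delayed effect can be sketched in code. This is an illustrative Python sketch, not part of the actual design: the class name, the 1.5 s confirmation delay, and the update loop are assumptions; only the five speed settings come from the text above.

```python
# Sketch of the handle's speed-setting logic: a detent switch whose new
# setting only takes effect after a short delay, so an accidental click
# can be reverted. Delay value and class structure are assumptions.
SPEED_SETTINGS = [0.0, 0.5, 0.75, 1.0, 1.25]  # m/s, one per detent (from the text)
APPLY_DELAY = 1.5  # seconds before a new setting takes effect (assumed)

class SpeedControl:
    def __init__(self):
        self.applied = 0       # index of the setting currently driving the wheels
        self.selected = 0      # index the user has clicked the switch to
        self._changed_at = None

    def click(self, direction, now):
        """User moves the detent switch one position up (+1) or down (-1)."""
        self.selected = max(0, min(len(SPEED_SETTINGS) - 1,
                                   self.selected + direction))
        self._changed_at = now  # restart the confirmation delay

    def update(self, now):
        """Called periodically; applies the selection once the delay has passed."""
        if (self._changed_at is not None
                and now - self._changed_at >= APPLY_DELAY):
            self.applied = self.selected
            self._changed_at = None
        return SPEED_SETTINGS[self.applied]
```

For example, two quick clicks from the lowest setting select 0.75 m/s, but the robot keeps driving at the old speed until the delay has elapsed.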
Lastly, the robot might, for whatever reason, have to slow down while walking through the crowd: for obstacles, other people, or to properly go with the flow of the crowd. Since this falls outside the speed setting, the user must be made aware of the robot's actions. A simple piezo haptic actuator can do the trick. Placed in the middle of the handle, it will be easily detected. A code for slowing down, for example a pulsating rhythm, and a code for speeding up, a continuous vibration, will convey the actions of the robot. Of course, this is in addition to the physical pull the user feels on the handle via the arm. However, because trust is so important in human-robot interaction, this additional feedback from the robot increases the confidence of the user.
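The two haptic codes can likewise be sketched as a small mapping from speed changes to actuator patterns. This is a hypothetical sketch: the function name, the 0.05 m/s dead band, and the intensity scaling are assumptions; only the pulse/continuous distinction comes from the text above.

```python
# Map an upcoming speed change to a haptic pattern on the handle's piezo
# actuator: pulsating rhythm for slowing down, continuous vibration for
# speeding up (as described in the text). Thresholds are assumptions.
def haptic_pattern(current_speed, target_speed, threshold=0.05):
    """Return (pattern, intensity) for the actuator, intensity in [0, 1]."""
    delta = target_speed - current_speed
    if delta < -threshold:
        # Slowing down: pulsating rhythm, urgency scaled by the speed drop
        return ("pulse", min(1.0, abs(delta)))
    if delta > threshold:
        # Speeding up: continuous vibration
        return ("continuous", min(1.0, delta))
    return ("off", 0.0)
```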
====Arm design====
Multiple designs were considered. The arm connects the handle to the body; it is important that the handle height can be adjusted. One thing added in the name of safety was suspension, so that the movements of the robot would not jerk the arm of the guided person if it were to suddenly change speed, for example when bumping or nudging. Most design iterations concerned how to integrate this suspension.
The first design was a straight pole from the robot body to the guided arm (as can be seen in the top sketch in the figure to the right). A problem we could see was that if the robot were to stop suddenly, it would push the arm slightly up instead of compressing the suspension. To solve this problem a joint was introduced in the middle of the arm (as can be seen in the middle sketch in the figure to the right). An alternative solution was to have the suspension only act horizontally and internalize it (as can be seen in the bottom sketch). This would allow the pole to have the same design as the first sketch without compromising on the suspension behaviour. Another plus would be that the pole would be marginally lighter due to this suspension being moved inwards.
We have chosen the second design, as it had the intended suspension behaviour while remaining as simple as possible. This allows the mechanism to be constructed from mostly off-the-shelf parts, reducing the cost.
====Body Design====
For the body three main designs were considered: a square, a cylinder, and a cylinder whose diameter changes over its height. The square was immediately ruled out, as its sharp corners make it decidedly not touch safe. The more cylindrical shapes can more easily slide through a crowd and are less likely to hit people hard (they allow a sliding motion instead of a head-on collision). This left the choice between a uniform cylinder, a cylinder wide at the bottom, and a cylinder wide at the top.
A bottom-heavy design would help with balance: if the robot bumps into someone, it hits at the lowest point, meaning more stability. However, it may surprise people when it hits, as they might not notice the wide bottom. This is where the wide top outperforms, as it hits people around the waist/lower back, where a collision is more easily noticed. Furthermore, this is a more effective place to nudge people to get them out of the way (a lower hit might make people lift their leg instead of stepping aside). A drawback is that the robot is contacted higher up and tips over more easily. That is why the design combines the best of both worlds: the body has a larger diameter lower down, with a big bumper to avoid tipping over, and has 'whiskers' of a soft, compressible foam material at the top front to softly touch, or nudge, people if they are in the way. Research has shown that touch by a robot elicits the same response in humans as touch by humans[25]. The rest of the body is made of plastic so as not to be too hard.
The pole on top of the body has two functions.
*Visibility
*Sensors
The pole is 100 cm long, making the whole guide robot stand 220 cm tall. This helps the sensors, which get a better overview of the crowd from a higher point of view. The height also helps with noticeability in dense crowds: at eye level the pole remains visible even when the lower body is (partially) obscured.
===Behavioural description===
The behavioural description will concern behaviour in a crowd with a singular, uniform walking direction. As mentioned before, the expected behaviour will be described using scenarios. These will first describe the standard scenario, after which two special cases are discussed. Furthermore, it will be briefly discussed how this behaviour might also benefit other crowd types or behaviour. The purpose of the behaviour is to make the robot guide someone efficiently to reach a goal while abiding by social rules/norms.
It is important to note that joining and leaving these crowds requires different behaviour (like sparse-crowd navigation). These are thus considered to fall outside the scope of the research question.
First, the standard navigation method will be discussed and how it functions in most scenarios.
====The standard scenario====
López suggested that to navigate, the guide should check where it can walk, not where it cannot. He also suggested following a lead of some kind could make navigation in unidirectional crowds easier. These traits have been used to define the standard scenario.
In this scenario the robot uses its LIDAR technology to follow a moving point cloud (i.e., the lead) in front of it. This point cloud could be one person or even a whole group. Regardless, the point cloud will always indicate the end of the guide's free walking space (space where nothing else stands in its way). It can thus be said that between this lead and the guide there will, in most cases, be free walking space. As the lead walks in front of the guide, it continuously creates a space in the crowd behind it, and in front of the robot, where the guide can move.
The robot does not distinguish between one person and a group; this makes it more robust, as small details in people's behaviour will not affect the guide's actions.
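A minimal sketch of this lead-following idea, assuming a 2D LIDAR scan given as (angle, range) pairs. The cone angle, clustering gap, follow distance, and proportional gain are illustrative assumptions, not values from the design.

```python
import math

# Follow the nearest point cluster in a cone ahead of the guide and treat
# it as the "lead", whether it is one person or a group. All constants
# below are assumed, illustrative values.
CONE_HALF_ANGLE = math.radians(30)  # only look ahead of the guide
CLUSTER_GAP = 0.4                   # m; range gap that separates clusters
FOLLOW_DISTANCE = 1.2               # m; desired gap behind the lead

def lead_distance(scan):
    """scan: list of (angle_rad, range_m) LIDAR returns.
    Returns the mean distance of the nearest cluster ahead, or None."""
    ahead = sorted(r for a, r in scan if abs(a) <= CONE_HALF_ANGLE)
    if not ahead:
        return None
    # The nearest cluster ends at the first range gap larger than CLUSTER_GAP
    cluster = [ahead[0]]
    for r in ahead[1:]:
        if r - cluster[-1] > CLUSTER_GAP:
            break
        cluster.append(r)
    return sum(cluster) / len(cluster)

def follow_speed(scan, lead_speed, gain=0.8):
    """Proportional controller that closes the gap to FOLLOW_DISTANCE."""
    d = lead_distance(scan)
    if d is None:
        return lead_speed  # no lead visible; keep pace with the crowd
    return max(0.0, lead_speed + gain * (d - FOLLOW_DISTANCE))
```

Because the cluster is treated as a single lead regardless of how many people it contains, small variations in individual behaviour inside the cluster do not change the guide's actions, matching the robustness argument above.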
====Scenario 1: Cut off====
While in the standard scenario, someone or something starts to insert itself between the guide and the previously identified leading cloud. This has multiple sub-scenarios, which will be discussed below. In this scenario we consider a crowded space with approximately 0.8 persons/m² (which nears shoulder-to-shoulder crowds as found in [26]), where the people move alongside each other. Since the third person is inserting themselves from the side, it may not be assumed that only the feelers of the robot make contact. This means more severe consequences may follow.
=====Decision making criteria=====
The decision making of the guide should depend on the intentions of the third person, the effects of their actions on the guide(d), and the effects on themselves.
By far the most difficult thing is to determine the intentions of the third person: are they trying to insert themselves in front of the robot, or are they simply drifting in front of it? Since their mind cannot be read, it seems reasonable to base the decision purely on the latter two decisive factors, namely the effects on the guide(d) and the effects on the person inserting themselves.
=====Guide’s options=====
There are three options the robot can take in any given scenario:
{| class="wikitable"
!Effects \ Action
!Bump
!Make way
!Move to the side
|-
!Effects on the guide(d)
|
*Little to no travel delay.
*Depending on the severity of the impact, it might result in the robot suddenly changing speed, inconveniencing the guided.
|
*The robot might have to slow down temporarily, which might inconvenience the guided.
*The robot might have to slow down permanently due to a change in the lead's walking speed, leading to a higher travel time.
*Other people might also try to slip in front, leading to multiple delays.
|
*The guided might incur a travel delay due to the perpendicular movement.
*Too much side-to-side movement might lead to sporadic guidance for the guided.
*The guide will have to make accurate decisions when sliding in front of someone else, which might lead to unexpected problems or delays.
|-
!Effects on the person inserting themselves
|
*They make physical contact with the robot, resulting in a risk of injury depending on the severity.
*They might be surprised by the robot, resulting in unpredictable scenarios.
*They might not be able to return to their original spot in the crowd, resulting in unpredictable consequences.
|None
|None
|}
===Scenario variables===
It can be seen that the effect of any action is very much context-dependent, and as such a well-made decision will only be possible if the guide is well-informed. Assuming this is the case for now, we can set up four factors which determine the way the robot should act:
1. The relative normal speed of the third person
2. Their relative perpendicular speed
3. The third person's space to act
4. The robot's space to act
From these, four behavioural tables can be set up:
===Scenario 1: expected behaviour===
The following tables might seem excessive, since the robot will most likely not be a rule-based reflex agent. This detailed model is nevertheless important for informing our decision-making process in the design of the robot, as well as the evaluation of the simulation. In the following behavioural tables, the left column gives the relative forward speed of the guide, while the top row gives the speed of the third person in the direction perpendicular to the guide's walking direction.
{| class="wikitable"
|+ The third person and the robot are capable of making way
!
! Low perpendicular speed
! Medium perpendicular speed
! High perpendicular speed
|-
! Smaller forward speed
| Robot should make way.
| Robot should make way, as people think it shows manners and awareness.
| Robot should make way, as people think it shows manners and awareness.
|-
! Same forward speed
| Robot does not make way.
| Robot should make way, but prevent heavy braking; soft pushing is still an option if the gap is too narrow.
| Robot should make way, but prevent heavy braking; soft pushing is still an option if the gap is too narrow.
|-
! Larger forward speed
| Robot does not make way.
| Robot does not make way.
| Robot does not make way, but tries to soften the impact by moving along the perpendicular direction of the third person.
|}
In all these scenarios the robot should use an audio cue to alert the third person that the robot cannot evade them itself.
{| class="wikitable"
|+ Only the third person is capable of making way (see figure to the right)
!
! Low perpendicular speed
! Medium perpendicular speed
! High perpendicular speed
|-
! Smaller forward speed
| The guide should not make way and risk impact, to indicate it has no free space.
| The guide should not make way and risk impact, to indicate it has no free space.
| The guide should not make way. If the impact is imminent, it should soften it by moving in the same perpendicular direction as the third person.
|-
! Same forward speed
| The guide should not make way and risk impact, to indicate it has no free space.
| The guide should not make way and risk impact, to indicate it has no free space.
| The guide should not make way. If the impact is imminent, it should soften it by moving in the same perpendicular direction as the third person.
|-
! Larger forward speed
| The guide should not make way and risk impact, to indicate it has no free space.
| If impact is imminent, it should move the whiskers in the direction of the third person if possible. If this is not possible, the guide should slow down to soften the impact.
| If impact is imminent, it should move the whiskers in the direction of the third person if possible. If this is not possible, the guide should slow down and move in the same perpendicular direction as the third person to soften the impact.
|}
{| class="wikitable"
|+ Only the robot is capable of making way
!
! Low perpendicular speed
! Medium perpendicular speed
! High perpendicular speed
|-
! Smaller forward speed
| Robot should make way.
| Robot should make way.
| Robot should make way, trying to prevent heavy braking.
|-
! Same forward speed
| Robot should make way.
| Robot should make way.
| Robot should make way, trying to prevent heavy braking.
|-
! Larger forward speed
| Robot should make way.
| Robot tries to make way, preventing heavy braking.
| Robot tries to make way, preventing heavy braking.
|}
In all these scenarios the robot should use an audio cue to alert the third person that the robot cannot evade them itself.
{| class="wikitable"
|+ Neither is capable of making way
!
! Low perpendicular speed
! Medium perpendicular speed
! High perpendicular speed
|-
! Smaller forward speed
| Robot should try to make as much way as possible before making continuous contact with the person, until the third person finds a way to decouple.
| Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain continuous contact with the person until the third person finds a way to decouple.
| Robot should try to position itself so that the harm from the impact is minimized, moving slightly to the side to soften the impact. Furthermore, it should maintain continuous contact with the person until the third person finds a way to decouple.
|-
! Same forward speed
| Robot should try to make as much way as possible. If there is not much room, the robot should not bother to bump.
| Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain contact with the person until the third person finds a way to decouple.
| Robot should try to position itself so that the harm from the impact is minimized, moving slightly to the side to soften the impact. Furthermore, it should maintain continuous contact with the person until the third person finds a way to decouple.
|-
! Larger forward speed
| Robot should try to make as much way as possible before making continuous contact with the person, until they naturally separate or the third person finds a way to decouple.
| Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain contact with the person until they naturally separate or the third person finds a way to decouple.
| Robot should try to position itself so that the harm from the impact is minimized, moving slightly to the side to soften the impact. Furthermore, it should maintain continuous contact with the person until they naturally separate or the third person finds a way to decouple.
|}
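The four behavioural tables above can be condensed into a coarse decision sketch. The function below is an illustrative simplification of ours (the category names and the collapsing of some table cells are assumptions), not the robot's actual control code, which is not intended to be a rule-based reflex agent:

```python
# Illustrative condensation of the four behavioural tables; names and the
# collapsing of some cells are assumptions made for this sketch only.

def cutoff_action(robot_can_yield: bool, person_can_yield: bool,
                  rel_forward: str, perp_speed: str) -> str:
    """Coarse action choice: 'make_way', 'hold_course', 'soften_impact'
    or 'controlled_contact'. rel_forward is 'smaller'/'same'/'larger',
    perp_speed is 'low'/'medium'/'high'."""
    if robot_can_yield and person_can_yield:
        if rel_forward == "smaller":
            return "make_way"              # shows manners and awareness
        if rel_forward == "same":
            return "hold_course" if perp_speed == "low" else "make_way"
        # larger forward speed: hold course; soften only at high perp speed
        return "soften_impact" if perp_speed == "high" else "hold_course"
    if robot_can_yield:
        # only the robot can make way: always yield, braking gently
        return "make_way"
    if person_can_yield:
        # only the third person can yield: risk contact to signal no free space
        return "soften_impact" if perp_speed == "high" else "hold_course"
    # neither can yield: minimise harm and keep continuous contact to decouple
    return "controlled_contact"
```

In every branch where the robot cannot yield, the audio cue described above would additionally be triggered.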
==Scenario 2: Stalled lead==
While in the standard scenario, the lead has stopped moving. The guide may avoid the lead or nudge them. The guide moves unidirectionally with the lead, and it is therefore assumed that all impact will occur at the front of the guide.
===Guide options===
The decision making of the robot should depend on the effects on the guide(d) and on the lead. The robot may in all situations attempt the following options:
{| class="wikitable"
! Effects \ Action
! Try alternative route
! Robot nudges using feelers
! Stop
|-
! Effects on the guide(d)
|
* The robot has to make side-to-side movements, which results in a more sporadic path. This might inconvenience the guided.
* Moving aside in a crowded space may result in the guide, or worse, the guided, being pushed by other people.
* This behaviour requires more complex observational methods.
|
* Does not always resolve the problem, which leads to more delay.
|
* The guide stops, causing a significant time delay.
* People behind the guided may walk into or push them.
|-
! Effects on the stalled lead
| None
|
* The person may have to step aside or start moving.
* The person might be uncomfortable with being nudged or pushed.
| None
|}
===Scenario variables===
The main variables are the following:
1. Space to act for the guide
2. Space to act for the lead
===Scenario 2: expected behaviour===
Because nudging with the feelers yields good results at low risk, it will in all cases be the first action.
If the attempt fails, however, it must be decided whether the guide should try to path around the now-blocked route or stop. If the lead ahead can be seen to be stopping of its own volition (there is free space in front of the lead), the robot should in most cases try to navigate around the lead. If the lead is expected to start moving within a reasonable timeframe, depending on the amount of time rerouting would take, the guide should stop. Something which has not been taken into account yet is the actual freedom of the guide: a dense surrounding or a fast-moving crowd could prevent the guide from safely stepping aside. In these cases, the specifics and safety of the cross-flow behaviour are of importance.
Assuming the cross-flow behaviour is only safe in the limited case of a sparse, normally moving crowd, the following behavioural table can be made:
{| class="wikitable"
!
! Normal moving crowd
! Fast moving crowd
|-
! Sparse crowd
| Try alternative route
| Stop
|-
! Dense crowd
| Stop
| Stop
|}
If the guide has stopped for a while and sees an opportunity for the lead to move, it should play a message asking the lead to move. This also notifies the guided of the situation.
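The stalled-lead policy above can be summarised in a short sketch, under the stated assumption that cross-flow behaviour is only safe in a sparse, normally moving crowd. The labels are our own illustrative choices:

```python
# Sketch of the stalled-lead policy: nudge first; if that fails, reroute
# only when cross-flow is safe (sparse, normally moving crowd); else stop.
# Labels are illustrative assumptions, not the robot's actual code.

def stalled_lead_action(nudge_failed: bool, crowd_density: str,
                        crowd_speed: str) -> str:
    if not nudge_failed:
        return "nudge"                      # low-risk first action
    if crowd_density == "sparse" and crowd_speed == "normal":
        return "try_alternative_route"      # cross-flow assumed safe here
    return "stop"                           # wait; ask the lead to move
```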
==Generalisation==
The following scenarios pertain to situations where the guide does not navigate along a unidirectional crowd flow. Although this is out of the scope of this research, it is useful to look at what a robot with this design can add to other scenarios using touch. Below is a short look at the possibilities of physical touch in the other scenarios sketched by López.
Opposing a unidirectional crowd is slightly harder for a robot: while moving through an opposing flow, the robot depends on people moving out of the way; otherwise, no space might open up for the robot to move into. Here, a robot that is programmed never to touch people might stall if the crowd density is high enough. This is where light bumping might be useful: if people do not move fully out of the way, they will get a light touch.
Crossing a unidirectional crowd is the hardest scenario. It might be hard for openings to appear where the robot can go, due to people coming from the side and the social implications that come with that. Does the robot give space, or does it walk on? Research has found that people consider robots more social, and better, if they let people pass first, but this comes with the risk of the robot stalling. That is why, in dense crowds, it might be preferable for the robot to start nudging to make way for the guided person.
Integrating into a crowd is an important behaviour of the robot. Inside the TU/e, maximum crowd densities are assumed to occur only rarely over the span of a year. In less dense crowds the guide should be able to integrate into the flow without hitting other people. However, in the scenario that crowds are very dense, the guide should be able to act more assertively, given the increase in safety measures preventing harmful human-robot collisions.
==Simulation==
===Goal===
In order for the behaviour description to be relevant, we show that the proposed behaviour is safe to employ in a representative environment. To measure this safety, we first of all measure collisions, making the reasonable assumption that these are the primary source of harm our robot can inflict. The simulation gathers data about the frequency of collisions, and statistics on the forces applied to the person and robot during a collision. Secondly, we consider the adherence of the robot's behaviour to the ISO guidelines for safety[27][28], focussing on the minimum safety gap and the maximum relative velocity guidelines.
===Overview of applicable ISO safety standards===
According to ISO 10218-2:2011[28], for an (industrial) robot to operate safely:
- Protective barriers should be included (which is discussed in the body design).
- Warning labels should indicate potential hazards. As the robot does not operate any manipulators or tools, and is designed not to be able to crush someone or run someone over, the only danger here is tripping, which is also minimized in the design.
- Light curtains, pressure mats and other safety devices: the robot includes whiskers at the front, which also help with avoiding a direct body collision.
- Other requirements do not apply to this robot, as it is not present in an industrial setting.
In addition, ISO/TS 15066:2016[27] indicates requirements for robots in proximity to human operators:
- A risk assessment should be made to identify hazards to surrounding personnel.
- Monitoring systems to keep track of speed and separation, which are included in the form of the LIDAR sensor.
- Force and power limits of the robot: the robot is not particularly high-powered; the amount of force it can apply in normal operation depends on the physical implementation of this proposed concept, but is unlikely to be problematic, as the drive train does not need high-powered actuators to function.
- Emergency stop (already covered by the other ISO standard).
- A safety distance gap should be kept to people around the robot; this is however ignored, as we are developing a solution that aims to be safe without needing to keep clear of humans.
- Force and pressure limits are imposed on the robot to prevent serious harm, both during normal operation and during collision. These include:
- A limit on the contact force during a collision of 150 N; this will be the focus of the simulation.
- A pressure limit during a collision of 1.5 kN/m²; we make the reasonable assumption that this pressure limit cannot be reached without violating the previous condition, as our robot is designed to be as smooth as possible, making it very difficult for it to apply a lot of force in a small, local area.
- Force-limiting or compliance techniques are to be implemented to reduce the force applied during a collision. Compliance comes in the form of the whiskers, which deform to reduce the impact, and the behaviour is designed to limit the force by slowing down, satisfying the limitation requirement.
As a performance measure for the simulation, we consider the maximum force applied in a collision during the simulation. This is the element of the ISO standards that is not negated by the behaviour design or the design of the body itself, and thus the part that remains to show the robot is safe to operate.
This data can also inform the design of the robot and its behaviour, as the simulation can test various form factors and navigation algorithms. In the end, the simulation results assist design iteration and ultimately inform us about the viability of the robot in crowds.
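As a minimal sketch of the performance measure just described, the snippet below checks logged collision-force samples against the 150 N limit. The function names, and the idea of a flat list of per-sample forces, are our own assumptions about what the simulation's logs might look like:

```python
# Hypothetical check of logged collision forces against the 150 N contact
# force limit from ISO/TS 15066. The flat list of per-sample forces is an
# assumption about the log format, not the simulation's actual API.

ISO_FORCE_LIMIT_N = 150.0

def max_collision_force(samples_newton):
    """Maximum force observed across all logged collision samples."""
    return max(samples_newton, default=0.0)

def is_safe(samples_newton):
    """True if no sample exceeded the ISO contact-force limit."""
    return max_collision_force(samples_newton) <= ISO_FORCE_LIMIT_N
```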
===Why a simulation?===
Testing which techniques have an impact requires a setting with enough people to form a crowd, which can be controlled precisely enough to eliminate outside or 'luck' factors. The performance needs to be a function of measurable starting conditions and the behaviour of the robot. When using a real robot, we would need to work iteratively, altering the appearance and workings of the robot after each experiment to cover different scenarios. This would require re-building the robot each time, which is something we simply do not have time for. Additionally, obtaining a large enough crowd (think of more than 100 students) on such short notice would be tricky. Using a real-world crowd (by going to the buildings in between lectures) would present the most accurate situation, but is neither controllable nor reproducible. There is also the ethical dilemma of testing a potentially hazardous robot in a real crowd, and logistically, organizing a controlled experiment with a crowd of students is not an option.
==Simulation: situation analysis==
In the real world, the robot would guide a blind person through the Atlas building to a goal. This situation can broadly be dissected as:
- Performance measure: the maximum force applied during a collision with a person, which may not exceed 150 Newton.
- Environment: dynamic, partially unknown interior rooms, designed for human navigation.
- Actuators: wheels.
- Sensors: LIDAR and camera, abstracted to general-purpose vision and environment-mapping sensors; they are assumed to be limited in range and accuracy, but capable of deducing depth, position, and dynamic or static obstacles.
The environment is assumed to be:
- Partially Observable
- Stochastic
- Competitive and Collaborative (humans aid each other in navigation, but are also their own obstacles)
- Multi-agent
- Dynamic
- Sequential
- Unknown
==Considered simulation design variants==
Simulating the robot may take various shapes, each with its own advantages. When considering the type of simulation to make, we considered the following aspects:
Environment Model:
- Mathematical: Building a model of the environment, purely based on mathematical expressions of the real world.
- Geometrical: Building a 3d version of the environment, using a 3d virtual representation of the environment.
- 2D: The environment does not consider depth
- 3D: The environment does consider depth
Robot Agent:
- Global awareness: The robot model has access to all information across the entire environment.
- Sensory awareness: Observing the Simulated environment with virtual (imperfect) sensors. The robot only has access to the observed information.
- Mechanics simulation: The detail at which the robot's body is modelled. Factors include whether the precise shape is considered, the accuracy of actuators and other systems, and delay between command and response.
Crowd Behaviour Model:
- Boid: boids are a common method of simulating flocking and herd behaviour in animals (particularly birds and fish)
- Social Forces: The desire to approach a goal and avoid and follow the crowd is captured in vectors, which determine the velocity of each agent in the crowd.
==Simulation: Crowd implementation==
To test the robot's capabilities in crowds through a simulation, the simulation must include a realistic model of how crowds behave. In the 1970s, Henderson already related a macro view of crowds to fluid dynamics with great success[29]. For the local interactions the robot would experience in real life, however, this macro view is not realistic enough. Therefore, we have to use a more micro-level description of crowds. We came across the social force model created by D. Helbing and P. Molnár[30]. This model is well acclaimed, and even though it has its drawbacks (a full stop of pedestrians, for example, does not work well in the model), we have decided to use the original formulation.
The social force model is a physical description of pedestrian behaviour: it models pedestrians as point masses with physical forces acting upon them. Each pedestrian experiences a few different forces, which will be shortly explained.
First, there is a driving force. This force models the internal desire of a pedestrian to go somewhere; it is represented by a direction and the pedestrian's desired walking speed. The desired walking speed used is the one the paper suggests, namely a normally distributed random variable with a mean of 1.34 m/s and a standard deviation of 0.26 m/s. The direction is calculated using Unity's NavMesh, which generates paths through the environment given a start and an end.
Second, every pedestrian experiences a repulsive force generated by other pedestrians. These repulsive forces build on the fact that humans want to keep enough distance from each other and instinctively take into account the step size of others. This is calculated by creating an ellipse as big as the step the other pedestrian is taking. Depending on this ellipse, a force is derived which grows exponentially the closer you are to the other pedestrian and points away from them; this is called the territorial effect. This is done for every pedestrian in the vicinity.
Third, there is another repulsive force from walls and obstacles. This one is far simpler, as it can be described by a force that grows exponentially the closer you get to an obstacle and points away from it.
Finally, there is an attractive force. This force can be used for multiple things: friends whom you would want to walk closer to, or interesting objects or people in the vicinity. It decreases over time as people lose interest; however, this force is not applied in our model. Both the repulsive and attractive forces are weighted depending on whether the object applying the force is inside the field of vision of a pedestrian.
The net force applied to a pedestrian is the summation of all these forces and is applied as an acceleration, where the maximum attainable speed of a pedestrian is capped by its desired speed. For performance reasons, most of this calculation is done in parallel on the GPU; because of this, a trade-off was made. For the repulsive force generated by walls, only the closest object is taken into account, since passing all objects to the GPU creates too much overhead for the CPU loading the data onto it. Had everything been handled by the CPU, however, the number of simulated people would have been too small to form a crowd.
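To make the force terms concrete, here is a deliberately simplified 2D sketch of the model described above. It uses a circular repulsion term instead of the paper's step-size ellipse, omits the field-of-view weighting and wall/attraction terms, and uses illustrative parameter values; only the desired-speed distribution (mean 1.34 m/s, standard deviation 0.26 m/s) is taken from the text:

```python
# Simplified 2D sketch of the social force model: driving force plus a
# circular exponential repulsion (the paper uses a step-size ellipse).
# TAU, A and B are illustrative values, not the paper's calibration.
import math
import random

V0_MEAN, V0_STD = 1.34, 0.26   # desired walking speed distribution [m/s]
TAU = 0.5                      # relaxation time of the driving force [s]
A, B = 2.0, 0.3                # repulsion strength [m/s^2] and range [m]

def sample_desired_speed(rng=random):
    return max(0.1, rng.gauss(V0_MEAN, V0_STD))

def driving_force(pos, vel, goal, v0):
    """Acceleration steering the current velocity toward v0 * e_goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return ((v0 * dx / d - vel[0]) / TAU, (v0 * dy / d - vel[1]) / TAU)

def repulsive_force(pos, other):
    """Exponentially growing push away from another pedestrian."""
    dx, dy = pos[0] - other[0], pos[1] - other[1]
    d = math.hypot(dx, dy) or 1e-6
    mag = A * math.exp(-d / B)
    return (mag * dx / d, mag * dy / d)

def step(pos, vel, goal, others, v0, dt=0.05):
    """One Euler step; the attained speed is capped at the desired speed."""
    fx, fy = driving_force(pos, vel, goal, v0)
    for o in others:
        rx, ry = repulsive_force(pos, o)
        fx, fy = fx + rx, fy + ry
    vx, vy = vel[0] + fx * dt, vel[1] + fy * dt
    speed = math.hypot(vx, vy)
    if speed > v0:
        vx, vy = vx * v0 / speed, vy * v0 / speed
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)
```

A pedestrian placed at the origin with a goal straight ahead accelerates to its desired speed and walks there in a straight line; a neighbour standing in the way bends the trajectory around them.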
==Simulation: Robot agent==
The robot agent was implemented in Unity. The body of the robot was created by importing the CAD model into Blender and then into Unity. A mesh collider was added to this model to make collisions more precise. Attaching a rigid body to the robot agent allowed it to interact with its environment and follow the laws of physics (or at least the physics of the Unity engine). The behaviour of the robot was implemented as follows:
===Map of the environment===
One of our base assumptions was that the robot has a map of the environment it is in, with landmarks placed. Thus, it knows how the base environment is structured (according to the floor plan, for example): it knows where the walls are, as well as the points of interest which are the goals to which it will guide people. This was implemented in the simulation via Unity's NavMesh, which allows us to create a mesh of the environment, dividing the space into places where the robot can and cannot move. Using the default path-finding algorithm of NavMesh, the robot agent calculates a path over this mesh, moving through the environment while keeping the map overlay in mind. The only issue with this approach is that the algorithm used for pathfinding is A*, which calculates the shortest path to the goal, and the shortest path is not always the best path overall.
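For readers unfamiliar with it, the pathfinding NavMesh performs is essentially A* search. The sketch below shows the same idea on a plain grid (a stand-in of our own for illustration; the real NavMesh searches a mesh of walkable polygons, not a grid):

```python
# Grid-based A* as an illustrative stand-in for Unity's NavMesh search
# (NavMesh searches a polygon mesh rather than a grid).
import heapq

def astar(grid, start, goal):
    """grid: set of walkable (x, y) cells. Returns a shortest path as a
    list of cells, or None if the goal is unreachable."""
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic
    frontier = [(h(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and nxt not in seen:
                heapq.heappush(
                    frontier,
                    (cost + 1 + h(nxt, goal), cost + 1, nxt, path + [nxt]))
    return None
```

As noted above, A* returns a shortest path, which for a guided person is not necessarily the most comfortable one.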
===Sensing the environment===
Our robot agent is supposed to use a combination of a LiDAR, a camera and a thermal camera to recognize obstacles in its path that are not in the built-in map; in other words, dynamic obstacles. Our report describes how one could detect dynamic obstacles by using point clouds to build a map of the close environment around the robot and combining that map with the thermal camera vision to detect humans. Due to constraints, the simulation instead makes use of Unity's raycast functionality, which allows us to cast beams from our agent; using multiple such raycasts, we emulate a 2D LiDAR. Using this LiDAR as the main sensor, we created two versions. The first version has better obstacle avoidance and overall smoothness of movement. Based on whether a beam hits an object tagged as an "undiscovered human" or "undiscovered obstacle", the tag is converted to discovered, which carves a space around the object on the mesh; this makes the agent move around the object if the path it must take passes near the obstacle. This version has some limitations, however. Due to the implementation of NavMesh and the movement AI in Unity, it does not follow the regular laws of physics, so the robot could not interact with its environment correctly. We therefore created a second version.
The second version uses NavMesh to calculate a path much like the first version, but the agent moves differently. Rather than depending on the navigation AI, it uses a movement function of the rigid-body component to traverse the environment along that path. This allows the agent to interact physically with its environment. Obstacle detection and avoidance are also done differently: rather than carving out the mesh, we use three sets of beams (left, right and front), and based on where an obstacle is detected, the agent reacts by slightly deviating from its path. The issue with this version, however, was that the movements of the robot were not smooth: while it could interact better with its environment, its movements when turning, for example, were not realistic. That is why we used the first version for the macro simulation.
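The raycast-based "2D LiDAR" can be illustrated with a small standalone sketch: a fan of rays is cast from the robot and the nearest hit distance per ray is recorded. The circular obstacles and all parameter values are our own illustrative assumptions; the real implementation uses Unity's raycasts against tagged colliders:

```python
# Standalone illustration of the raycast "2D LiDAR": cast a fan of rays
# and record the nearest hit per ray. Circular obstacles and parameter
# values are illustrative assumptions, not the Unity implementation.
import math

def ray_circle_distance(origin, angle, center, radius):
    """Distance along a ray to a circle, or None if the ray misses."""
    dx, dy = math.cos(angle), math.sin(angle)
    ox, oy = center[0] - origin[0], center[1] - origin[1]
    t = ox * dx + oy * dy                      # projection onto the ray
    if t < 0:
        return None                            # circle is behind the origin
    closest2 = (ox - t * dx) ** 2 + (oy - t * dy) ** 2
    if closest2 > radius * radius:
        return None                            # ray passes beside the circle
    return t - math.sqrt(radius * radius - closest2)

def lidar_scan(origin, obstacles, n_rays=36, max_range=10.0):
    """One scan: a hit distance per ray, capped at max_range.
    obstacles is a list of ((cx, cy), radius) circles."""
    scan = []
    for i in range(n_rays):
        ang = 2.0 * math.pi * i / n_rays
        hits = [d for c, r in obstacles
                if (d := ray_circle_distance(origin, ang, c, r)) is not None]
        scan.append(min(min(hits, default=max_range), max_range))
    return scan
```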
Finally, it must be noted that the follow-and-bump behaviour in the implementation of the robot has some issues: the robot sometimes follows when it should not, and sometimes does not follow in moments where such behaviour would be the most efficient. The reason for these issues is that it is difficult to determine which person would be an ideal candidate to follow. Our implementation depends on the direction the agent is looking in, as well as the rotation of the humans around the robot: if both the robot and a human have the same rotation, that human is seen as a potential candidate. While the idea seems good on paper, in certain situations, for example when the robot is turning around corners or making small adjustments to its path, it will not be looking at the final goal. This means it may start following a person going in the wrong direction, granted there is no other option but to initiate the follow behaviour (when obstacles are detected that prevent the robot from moving left, right, and forwards). This leads the robot to sometimes take inefficient paths.
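The fragile part of this heuristic is the heading comparison itself. A minimal sketch (the 20-degree tolerance is an assumption of ours, not the value used in the simulation):

```python
# Minimal sketch of the heading-alignment follow heuristic; the 20-degree
# tolerance is our own assumption, not the simulation's actual value.

def is_follow_candidate(robot_heading_deg: float,
                        human_heading_deg: float,
                        tol_deg: float = 20.0) -> bool:
    """A human is a follow candidate if their heading roughly matches the
    robot's current heading (which, while cornering, may differ from the
    direction of the final goal; this is the failure mode noted above)."""
    diff = (human_heading_deg - robot_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= tol_deg
```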
==Simulation: Environment==
The environment is a 3D geometry-based replica of the first floor of the Atlas building in terms of large collision parameters. It was constructed by tracing the edges of a floorplan of Atlas, provided by the Real Estate department, with collision objects.
After the model was constructed, it was re-scaled in the Unity engine to match the dimensions of the Atlas building. It should be noted that not all elements of the floorplan are accurate, as the layout of Atlas changes frequently to accommodate events.
The model has various abstractions to accommodate constraints of the simulation. Entryways have been blocked off to prevent the crowd from walking outside the defined perimeter, and doors are considered closed. The stairs have also been omitted, or remodelled to be impassable, as we do not consider other floors of the Atlas building in this simulation. Only the lower portion of this floor is considered, as no walking crowd will collide with anything higher than 2 metres.
==Simulation: Results==
===Parameters===
To obtain the results, the simulation was run with the robot starting at the north side of Atlas and moving towards a goal on the opposing south side. The crowd was set up to contain 1500 agents, which is the maximum number of people the ground floor of Atlas is designed for, according to the Real Estate department.
===Expected results===
The expected result is for the social force model to generate a crowd that is typical of a very busy day in Atlas. With this comes:
- The generation of dense 'streams' of agents moving along similar paths from goal to goal.
- The existence of sparse and dense pockets of space, where some areas are more heavily congested.
We do not expect the social force model to generate agents that are stationary near goals (such as real students buying a drink, creating congestion around a coffee machine), as the model is focused on the movement of pedestrians.
In order to behave safely in accordance with the ISO 10218-2:2011 and ISO/TS 15066:2016 requirements[27][28], we expect the robot to:
- Avoid collisions in sparsely populated areas and follow its own path.
- Follow crowd agents to prevent collisions in adequately dense areas, where there is still enough space to avoid agents, but not enough to find its own path.
- Follow its own path when the currently followed agent deviates too much from the optimal path.
- Bump into crowd agents when there is insufficient space to avoid them.
- When bumping, keep the force minimal: the robot should ensure a relative velocity low enough not to cause pain or major discomfort.
===Results===
- We observed that central spaces, such as the centre of the main hall, are indeed very calm. The crowd that formed here was very sparse, and as such the robot could use standard avoidance and pathfinding algorithms (A* in this case) to avoid the agents of the crowd and reach the goal without a single collision.
- We also observed that congested areas tend to form around hallway entries and narrower spaces. Here the crowd becomes very dense, with agents bumping into each other or narrowly avoiding each other.
- We observed that the robot generally crosses a stream of densely packed agents, rather than encountering a stream that moves in the same direction as the robot, which it could follow. While doing so, it does attempt to avoid agents or reduce impact.
- We observed that the robot does indeed bump into agents that are in the way, but it is hard to definitively state the robot uses bumping as a last resort.
- We observed that the robot bumps through congested areas, instead of avoiding them, if its path requires it to get through such an area.
A video of a single iteration: https://www.youtube.com/watch?v=YAjKelmA9mM
==Conclusions==
We observed that the crowd generated by the social force model was indeed indicative of a typical crowd in Atlas. This caused the problem that the crowd, although representative, does not comply with the assumptions the robot makes in order to navigate a dense crowd. As the scope of the described behaviour ends at these assumptions, the implemented behaviour of the robot simply does not generate adequate results in terms of safety and performance outside of them. The behaviour described earlier assumes a laminar flow of people to navigate, and the streams that occur in the Atlas setting are often only partially laminar. Especially when streams cross, and around congested chokepoints, this assumption simply does not hold. Additional implementation work would be required to deal with non-laminar or generally omni-directional crowd flows.
We conclude that this is the reason why the robot does not always tend to follow flows and avoid bumps: the scenarios we chose to focus the behaviour of this project on are mixed with other scenarios, such as crossing a crowd, that are not explicitly considered. As a result, the robot resorts to its basic non-crowd routine of attempting to follow the most efficient path, which effectively bypasses the behaviour we wish to test the safety of. We can therefore only conclude from this simulation that simplistic pathfinding behaviour with obstacle avoidance is sufficient to generate safe behaviour for navigating sparsely populated areas of Atlas.
To show the safety of the behaviour itself, we therefore decided to create more focussed environments that force compliance with the crowd assumptions the robot makes.
==Simulation: Micro-simulations==
To test the safety of the robot's behaviour implementation, we created specific scenarios which are better suited to showcase the intended behaviour of the robot, and which together cover a large subset of the problems the robot can solve. These scenarios were created after running the social force simulation and are controlled instances of situations the robot agent encountered during its navigation in that simulation.
The advantage of these scenarios is that they are altered to force compliance with the robot's assumptions about the crowd, as described in the scenario and behaviour sections of this wiki. As a consequence, the robot shows the behaviour that is described, and thus the safety of the robot in situations encountered in the Atlas-representative model can be tested with the correct behaviour of the robot.
Each scenario was run a total of 10 times: 5 times to observe the robot behaviour, and another 5 times to obtain force measurements during collisions that may occur in the scenarios. In each iteration, the parameters are exactly the same.
===Micro scenario - sudden stop===
This micro scenario focuses on our second scenario, "Stalled lead", in which the robot is following a person who suddenly stops. A row of persons is placed on each side of the robot to prevent it from sidestepping its lead, forcing it to slow its pace. The robot slows down and eventually bumps into the person until they move.
A video of the scenario in development is shown here: https://www.youtube.com/watch?v=rcPF2ZiYqlw
{| class="wikitable"
!Duration [frames]
!Impulse magnitude [N·s]
!Number of collisions
!Average force [N]
|-
|44||67.0125||8||91.3807
|-
|38||56.0443||6||88.4910
|-
|41||62.1002||8||90.8783
|-
|41||63.2706||8||92.5911
|-
|40||59.3228||6||88.9842
|}
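As a sanity check on the table above, the average-force column follows directly from the measured impulse and the collision duration, at the 60 simulation frames per second mentioned in the results section. A minimal Python sketch of this relation:

```python
# Sketch: relating the measured impulse to the average-force column,
# assuming 60 simulation frames per second (as stated in the results section).
FRAMES_PER_SECOND = 60

def average_force(impulse_ns: float, duration_frames: int) -> float:
    """Average force [N] = impulse [N·s] / collision duration [s]."""
    return impulse_ns / (duration_frames / FRAMES_PER_SECOND)

# First row of the table: 67.0125 N·s over 44 frames (~0.73 s)
print(round(average_force(67.0125, 44), 4))  # → 91.3807
```

The same computation reproduces the other rows of the table, e.g. 56.0443 N·s over 38 frames gives 88.4910 N.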
===Micro scenario - intersecting agents===
During this scenario, the robot is following a person while another person crosses the space between the robot and its lead. The robot shows it is capable of detecting the crossing person in time, and reacts by slowing to a near halt to allow the crossing person to pass in front of it. When the person has passed, the robot accelerates, and we observe that it returns to the same following distance as before.

The robot is now indeed capable of avoiding collision instead of immediately resorting to bumping: none of the 10 iterations run on this scenario resulted in a collision.
A video of the scenario in development is shown here: https://youtu.be/J_IOsJ16ifs
===Micro scenario - Results===
During the above scenarios, a script attached to the robot measured the number of collision events, the longest duration in frames (where 60 frames are 1 second) of the collision events, and the largest impulse measured during the collisions.

The evaluation script shows that for 5 separate iterations of the first scenario, the average force applied is well below the 150 Newton threshold. It should be noted that the script yields the largest impulse and the longest duration of a collision that occurred during each run. The number of collision events is computed using the convex rigid body of the robot mesh, which means the reported count is likely an overestimate: the convex hull encapsulates space below the whiskers that is not actually occupied by the robot.
===Conclusion===
The micro-simulations show that, if the assumptions on the robot behaviour are met, the total average force applied during the simulation is below the 150 N threshold laid out in the ISO standard. In addition, the simulation shows that the behaviour of the robot is successful in avoiding contact in crowds unless required, satisfying the force limitation requirements in the ISO standard. We thus conclude that the proposed behaviour in this concept is safe, provided the crowd behaviour is captured by the scenarios previously discussed in this document.
==Conclusion==
===Project findings===
Our answer to the research question 'How should robots socially navigate through unidirectional pedestrian crowds while guiding visually impaired users?' takes the form of the various behavioural descriptions provided under the scenarios. The micro-simulations show that it is safe to act in accordance with at least some of these behavioural rules. The simulations should, however, not be seen as definitive proof, because Unity's physics engine lacks any kind of material simulation. To verify the safety claims made in this project, it would be best to run actual material simulations to find exact pressures. Furthermore, most of the behaviour has not been tested.

Overall, this behaviour has its uses: a navigation method that does not rely on perfect information allows the robot to neglect some observations, simplifying the sensors that are necessary, and makes the robot more robust to small changes. For example, a non-living entity will not change how the robot behaves.
===Future research===
The behaviour as described in the scenarios should be implemented in a more advanced simulation. This can be done in a discrete manner (a rule-based agent) or a more flexible manner (a utility-based or learning agent, for which the descriptions would act more like guidelines).

The acceptance of the design by crowds and users should also be verified; this is a point which was lacking in this research. César López mentioned that this can be designed for using established research as a guideline, but is ultimately verified with a physical prototype and a survey designed for such research.
The design could also be made more detailed by adding any of the assumed working pieces mentioned in the problem scoping, including behaviour for different kinds of dense crowds:
* Localization of the guide
* Identification of obstacles or other persons
* Navigation in sparse crowds
* Navigation in dense crowds
* Overarching strategic planning (e.g., navigating between multiple floors or buildings)
* Interaction with infrastructure (e.g., doors, elevators, stairs, etc.)
* Effective communication with the user (e.g., the user being able to set a goal for the guide)
Any of these behavioural changes or additions would require some kind of transitional system to switch between them. López mentioned that this can be done by selecting the behavioural model for which all conditions are met; implementing a general navigation method is also a good way to make sure the guide always has something to fall back on.
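As an illustration of such a transitional system, the fallback rule López described could be sketched as follows. The behaviour names and the conditions here are purely hypothetical, not taken from any actual implementation:

```python
# Hypothetical sketch of the transitional system described above: pick the
# first behaviour whose preconditions all hold, and fall back to general
# navigation otherwise. Behaviour names and conditions are illustrative.
def select_behaviour(observations: dict) -> str:
    behaviours = [
        ("dense_crowd_following", [lambda o: o["crowd_density"] > 0.5,
                                   lambda o: o["flow_is_laminar"]]),
        ("sparse_crowd_navigation", [lambda o: o["crowd_density"] > 0.0]),
    ]
    for name, conditions in behaviours:
        if all(cond(observations) for cond in conditions):
            return name
    return "general_navigation"  # always-available fallback

print(select_behaviour({"crowd_density": 0.0, "flow_is_laminar": False}))
# → general_navigation
```

The point of the design is the last line: because the fallback has no preconditions, the guide always has a valid behaviour to execute.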
Finally, the risks and hazards of this design, such as mechanical failure, should be worked out in even more detail.
===Project evaluation===
First, it is important to note that what is presented in this report is not a full 8 weeks of work by 6 students. This is due to the change of subject after 2 weeks, and the further 2 weeks it took to narrow the problem statement down enough to work on, leaving a final 4 weeks in which a lot of work was done. During these remaining weeks, after the second meeting with López, it became clear that the scenarios that had been worked out were too extensive and fell outside the scope of the project: walking along a unidirectional crowd.

After the final presentation there was a final meeting with César López in which the end result was evaluated; some of the main points are discussed here.

For this type of research, safety is usually taken care of in the design process, before development, by using predetermined safety standards for such products. Due to time constraints, only a small safety study was done alongside the making of the simulation. At the moment there is no in-depth safety analysis in which possible hazards are identified and their risks and consequences determined. The main focus of the design is instead based on research into what might work when navigating a robot through a crowd.

Furthermore, the simulation that was designed should have been more constrained from the beginning to fit the chosen problem. This again shows how the scoping of the research question should have been done earlier in the project, which would have allowed the assumptions for the behaviour to be met. Making a simulation with clear assumptions that are met allows the behaviour of the design to be formed more intelligently, in a more iterative process, instead of with the current methods.
==Appendix==
Code:
The code for the simulation can be found on the following GitHub page: https://github.com/JJellie/VirtualCrowdSim

Below, some papers used in the research for the guide robot are summarized. These papers mostly cover the state of the art of the hardware and software of guide robots, and of crowd navigation. These summaries can be read to get a deeper understanding of the state of the art.
===Literature Research===
{| class="wikitable"
!Paper Title
!Reference
!Reader
|-
|Modelling an accelerometer for robot position estimation||[31]||Jelmer S
|-
|An introduction to inertial navigation||[32]||Jelmer S
|-
|Position estimation for mobile robot using in-plane 3-axis IMU and active beacon||[33]||Jelmer S
|-
|Stepper motors: fundamentals, applications and design||[34]||Joaquim
|-
|Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities||[35]||Jelmer L
|-
|Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization||[36]||Jelmer L
|-
|Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry||[37]||Jelmer L
|-
|Optical 3D laser measurement system for navigation of autonomous mobile robot||[38]||Boril
|-
|A mobile robot based system for fully automated thermal 3D mapping||[39]||Boril
|-
|A review of 3D reconstruction techniques in civil engineering and their applications||[40]||Boril
|-
|2D LiDAR and Camera Fusion in 3D Modeling of Indoor Environment||[41]||Boril
|-
|A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR||[42]||Jelmer L
|-
|An information-based exploration strategy for environment mapping with mobile robots||[43]||Jelmer S
|-
|Mobile robot localization using landmarks||[44]||Jelmer S
|-
|The Fuzzy Control Approach for a Quadruped Robot Guide Dog||[12]||Wouter
|-
|Design of a Portable Indoor Guide Robot for Blind People||[13]||Wouter
|-
|Guiding visually impaired people in the exhibition||[45]||Joaquim
|-
|CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People||[11]||Boril
|-
|Tour-Guide Robot||[46]||Boril
|-
|Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques||[10]||Boril
|}
====Modelling an accelerometer for robot position estimation====
The paper discusses the need for high-precision models of location and rotation sensors in specific robot and imaging use-cases, specifically highlighting SLAM systems (Simultaneous Localization and Mapping systems).
It highlights sensors that we may also need: " In this system the orientation data rely on inertial sensors. Magnetometer, accelerometer and gyroscope placed on a single board are used to determine the actual rotation of an object. "
It mentions that, in order to derive position data from acceleration, the signal needs to be doubly integrated, which tends to yield great inaccuracy.
A drawback is that the robot needs to stop after a short time (to re-calibrate) when using double integration, to minimize error accumulation: "Double integration of an acceleration error of 0.1g would mean a position error of more than 350 m at the end of the test".
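The quoted figure can be illustrated with the quadratic growth of double-integrated bias: a constant acceleration error a produces a position error of 0.5·a·t² after time t. A small sketch (the paper does not state its test duration; the time value below is chosen for illustration only):

```python
# Sketch: a constant accelerometer bias, integrated twice, gives a position
# error of 0.5 * a_err * t^2, i.e. quadratic growth with time.
G = 9.81  # standard gravity, m/s^2

def position_error(bias_g: float, seconds: float) -> float:
    """Position error [m] from a constant bias (in units of g) after t seconds."""
    return 0.5 * (bias_g * G) * seconds ** 2

# A 0.1 g bias reaches ~350 m of error after roughly 27 s of integration:
print(round(position_error(0.1, 26.7)))  # → 350
```

This is why the paper's re-calibration stops are needed: the error is small for short intervals but grows without bound.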
An issue in modelling the sensors is that rotation is measured by gravity, which is not influenced by, for example, yaw, and which becomes more complicated under linear acceleration. The paper modelled acceleration and rotation according to various lengthy mathematical equations and matrices, and applied noise and other real-world modifiers to the generated data.
It notably uses Cartesian and homogeneous coordinates to separate and combine the different components of the final model, such as rotation and translation. These components are shown in matrix form and are derived from the specifications of real-world sensors, known and common effects, and mathematical derivations of the latter two.
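The idea of combining rotation and translation through homogeneous coordinates can be shown with a minimal 2D sketch (the paper's actual model is 3D and includes sensor noise; this is only the coordinate trick itself):

```python
# Sketch: a rigid transform (rotation + translation) as a single homogeneous
# matrix, so the two components can be separated or composed by multiplication.
import math

def homogeneous_2d(theta: float, tx: float, ty: float):
    """2D rigid transform as a 3x3 homogeneous matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0, 1]]

def apply(T, point):
    x, y = point
    p = (x, y, 1.0)  # lift the point into homogeneous coordinates
    return tuple(sum(T[i][j] * p[j] for j in range(3)) for i in range(2))

# Rotate (1, 0) by 90 degrees, then translate by (2, 3): lands at (2, 4).
T = homogeneous_2d(math.pi / 2, 2.0, 3.0)
print(apply(T, (1.0, 0.0)))
```

Because both components live in one matrix, chaining sensor frames reduces to matrix multiplication, which is what makes the representation convenient for sensor models.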
The proposed model can be used to test code for our robot's position computations.
====An introduction to inertial navigation====
This paper (a report) is meant as a guide to determining positional and other navigation data from inertial sensors like gyroscopes and accelerometers, and IMUs in general.
It starts by explaining the inner workings of a general IMU and gives an overview of an algorithm used to determine position from the sensors' readings using integration, showing what the intermediate values represent using diagrams.
It then proceeds to discuss various types of gyroscopes, their ways of measuring rotation (such as light interference), and the resulting effects on measurements, which are neatly summarized in equations and tables. It takes a similar approach for linear acceleration measurement devices.
In the latter half of the paper, concepts and methods relevant to processing the introduced signals are explained, and most importantly it is discussed how to partially account for some of the errors of such sensors. It starts by explaining how to account for noise using Allan variance, and shows how this affects the values from a gyroscope.
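The Allan variance mentioned here can be sketched in its simplest (non-overlapping) form: average the signal in bins of a given length, then take half the mean squared difference of successive bin averages. For pure white noise the Allan variance falls as the bin size grows, which is how the noise term of a gyroscope is identified:

```python
# Sketch of the non-overlapping Allan variance used to characterise sensor
# noise: bin the signal, then halve the mean squared difference of
# consecutive bin averages.
import random

def allan_variance(samples, bin_size):
    n_bins = len(samples) // bin_size
    means = [sum(samples[i * bin_size:(i + 1) * bin_size]) / bin_size
             for i in range(n_bins)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n_bins - 1)]
    return sum(diffs) / (2 * (n_bins - 1))

# For white noise, larger averaging windows give a smaller Allan variance:
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(allan_variance(noise, 10) > allan_variance(noise, 1000))  # → True
```

Real analyses plot the Allan variance over many bin sizes on a log-log scale and read the different error terms (white noise, bias instability, random walk) off the slopes.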
Next, the paper introduces the theory behind tracking orientation, velocity, and position. It discusses how errors in previous steps propagate through the process, resulting in the infamous accumulation of inaccuracy that plagues such systems.
Lastly, it shows how to simulate data from the earlier discussed sensors. Notably, though, the previously summarized paper already discusses a more accurate and more recent algorithm (building on this one).
====Position estimation for mobile robot using in-plane 3-axis IMU and active beacon====
The paper highlights 2 types of position determination: absolute (does not depend on the previous location) and relative (does depend on the previous location). It goes on to highlight advantages and disadvantages of several location determination systems, and then proposes a navigation system that mitigates as many of their flaws as possible.
The paper continues by describing the sensors used to construct the in-plane 3-axis IMU: an x/y accelerometer and a z-axis gyroscope.
Then, the active beacon system (ABS) is described. It consists of 4 beacons mounted to the ceiling and 2 ultrasonic sensors attached to the robot; the technique essentially uses radio-frequency triangulation to determine the absolute position of the robot. The last sensor described is an odometer, which needs no further explanation.
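The core of such absolute positioning from beacon ranges can be illustrated with a minimal 2D trilateration sketch. This is an illustration only; the paper's actual system works with ceiling-mounted beacons in 3D and far more elaborate filtering:

```python
# Sketch: 2D trilateration. Subtracting one range equation
# (x-xi)^2 + (y-yi)^2 = ri^2 from the others cancels the quadratic terms
# and leaves a small linear system for (x, y).
def trilaterate(beacons, distances):
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # assumes the beacons are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Robot at (1, 2), three beacons at known positions:
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [((1 - bx) ** 2 + (2 - by) ** 2) ** 0.5 for bx, by in beacons]
print(trilaterate(beacons, dists))  # → (1.0, 2.0)
```

With noisy ranges the same linear system is typically solved in a least-squares sense and fed into a filter, which is where the paper's Kalman filtering comes in.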
Then, the paper discusses the model used to represent the system in code. Most notably, the system is somewhat easier to understand because the in-plane measurements restrict much of the complexity of the robot's position to 2 dimensions. The paper also discusses the filtering and processing techniques used, such as a Kalman filter to combat noise and drift. The final processing pipeline discussed is immensely complex due to the inclusion of bounce, collision, and beacon-failure handling.
Lastly, the paper discusses the results of tests on the accuracy of the system, which showed a very accurate system, even when a beacon is lost.
====Stepper motors: fundamentals, applications and design====
This book goes over what stepper motors are, variations of stepper motors, and their make-up. Furthermore, it goes in depth on how they are controlled.
====Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities====
According to the authors, advances in visual-inertial odometry (VIO), the process of determining the pose and velocity (state) of an agent using camera input, have opened up a range of applications such as AR and drone navigation. Most VIO systems use point clouds and, to provide real-time estimates of the agent's state, create sparse maps of the surroundings using power-heavy GPU operations. In this paper the authors propose a method to incrementally create a 3D mesh from the VIO optimization while bounding memory and computational power.
The authors' approach is to create a 2D Delaunay triangulation from tracked keypoints and then project it into 3D. This projection can have issues where points are close in 2D but not in 3D, which is solved with geometric filters. Some algorithms update a mesh every frame, but the authors maintain a mesh over multiple frames to reduce computational complexity, capture more of the scene, and capture structural regularities. Using the triangular faces of the mesh, they are able to extract geometry non-iteratively.
In the next part of the paper, they discuss how to solve the optimization problem derived from the previously mentioned specifications.
Finally, the authors share benchmarking results on the EuRoC dataset, which are promising: in environments with regularities like walls and floors, the method performs optimally. The pipeline proposed in this paper provides increased accuracy at the cost of some calculation time.
====Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization====
In the robotics community, visual and inertial cues have long been fused using filtering; however, this requires linearity, while non-linear optimization for visual SLAM increases quality and performance and reduces computational complexity.
The contributions the authors claim are: constructing a pose graph without expressing global pose uncertainty, providing a fully probabilistic derivation of IMU error terms, and developing both hardware and software for accurate real-time SLAM.
The paper describes in high detail how the optimization objectives were reached and how the non-linear SLAM can be integrated with the IMU using a chi-square test instead of a RANSAC computation.
Finally, they show the results of a test with their developed prototype, which show that tightly integrating the IMU with a visual SLAM system really does improve performance, decreasing the deviation from the ground truth to close to zero percent after 90 m of distance travelled.
====Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry====
The authors of this paper propose an algorithm that fuses feature tracks from any number of cameras together with IMU measurements in a single optimization process, handles feature tracking on cameras with overlapping fields of view, and includes a subroutine that selects the best landmarks for optimization to reduce computational time; they also present results from extensive testing.
First the authors give the optimization objective, after which they give the factor-graph formulation with the residuals and covariances of the IMU and visual factors. Then they explain their approach to cross-camera feature tracking: a feature's location is projected from one camera into the other using either stereo-camera depth or an IMU estimate, and then refined by matching it to the closest image feature in the target camera by Euclidean distance. After this, feature selection is explained: a Jacobian matrix is computed, and a submatrix is found that best preserves the spectral distribution.
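The refinement step just described, matching a projected feature location to the closest detected feature by Euclidean distance, can be sketched as a nearest-neighbour lookup. This is illustrative only; the actual pipeline also relies on stereo/IMU projection and outlier handling:

```python
# Sketch: match a feature location projected into another camera's image
# to the closest detected feature in that image, by Euclidean distance.
def nearest_feature(projected, detected_features):
    def dist2(p, q):
        # squared distance is enough for comparison, and avoids a sqrt
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(detected_features, key=lambda f: dist2(projected, f))

# Hypothetical detected feature locations (pixel coordinates):
features = [(10.0, 12.0), (100.0, 40.0), (55.0, 60.0)]
print(nearest_feature((54.0, 58.5), features))  # → (55.0, 60.0)
```

In practice such a match would be gated by a maximum distance threshold so that features with no true correspondence are rejected rather than matched to the nearest stranger.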
Finally, experimental results show that their system is closer to the ground truth than other similar systems.
====Optical 3D laser measurement system for navigation of autonomous mobile robot====
This paper presents an autonomous mobile robot which, using a 3D laser navigation system, can detect and avoid obstacles on its path to a goal. The paper starts by describing the navigation system, TVS, in high detail. The system uses a rotatable laser and a scanning aperture to form laser light triangles from the light reflected off an obstacle. Using this method, the authors were able to obtain the information necessary to calculate 3D coordinates. For the robot base, the authors used the Pioneer 3-AT, a four-wheel, four-motor skid-steer robotics platform.
After this, the authors go in depth on how the robot avoids obstacles. Via optical encoders on the wheels and a 3-axis accelerometer, the robot keeps track of its travelled distance and orientation. Via IR sensors, the robot detects obstacles at a certain distance in front of it, after which it performs a TVS scan to avoid the obstacle. The trajectory the robot follows to avoid the obstacle is calculated using 50 points in the space in front of it, which form a curve that the robot then follows. Thus, after start-up the robot calculates an initial trajectory to the goal location and recalculates it whenever it encounters an obstacle. Finally, the authors go over their results from simulating this robot in Matlab and analyse its performance.
====A mobile robot based system for fully automated thermal 3D mapping====
This paper showcases a fully autonomous robot which can create 3D thermal models of rooms. The authors begin by describing the components the robot uses, and how the 3D sensor (a Riegl VZ-400 laser scanner from terrestrial laser scanning) and the thermal camera (an Optris PI160) are mutually calibrated. Both are mounted on top of the robot, together with a Logitech QuickCam Pro 9000 webcam. After the 3D data is acquired, it is merged with the thermal and digital images via geometric camera calibration. After that the authors explain the sensor placement. The paper's approach to the memory-intensive issue of 3D planning is to combine 2D and 3D planning: the robot starts off using only 2D measurements; once it detects an enclosed space, however, it switches to 3D NBV (next best view) planning.

The 2D NBV algorithm starts off with a blank map and explores based on the initial scan, where all inputs are range values parallel to the floor, distributed over the 360-degree field of view. A grid map is used to store the static and dynamic obstacle information, and a polygonal representation of the environment stores the environment edges (walls, obstacles). This NBV process is composed of three consecutive steps: vectorization (obtaining line segments from input range data), creation of the exploration polygon, and selection of the NBV sensor position (choosing the next goal). The room detection is grounded in the detection of closed spaces in the 2D map of the environment. Finally, the authors showcase the results of their experiments with the robot: 2D and 3D thermal maps of building floors, the 3D reconstruction of which is done using the Marching Cubes algorithm.
====A review of 3D reconstruction techniques in civil engineering and their applications====
This paper presents and reviews techniques to create 3D reconstructions of objects from the outputs of data collection equipment. First the authors researched the currently most used equipment for acquiring 3D data: laser scanners (LiDAR), monocular and binocular cameras, and video cameras, which is also the equipment the paper focuses on. From this they classify two categories of camera-based 3D reconstruction: point-based and line-based. Furthermore, 3D reconstruction is divided into two steps in the paper: generating point clouds and processing those point clouds. For monocular images, generating the point cloud consists of the following steps:
* Feature extraction: gaining feature points which reflect the initial structure of the scene, using feature point detectors and feature point descriptors.
* Feature matching: matching the feature points of each image pair.
* Camera motion estimation: finding the camera parameters of each image.
* Sparse 3D reconstruction: computing the 3D location of points from the feature points and camera parameters via the triangulation algorithm, generating a point cloud.
* Model parameter correction: correcting the camera parameters of each image, leading to precise 3D locations of the points in the point cloud.
* Absolute scale recovery: determining the absolute scale of the sparse point cloud using the dimensions/points of absolute scale in it.
* Dense 3D reconstruction: using all of the above to generate a dense point cloud.
For stereo images, the camera motion estimation and absolute scale recovery steps are skipped, and the camera instead needs to be calibrated before feature extraction. After this the authors explain how to generate point clouds from video images.
In the techniques for processing the data, the authors showcase a couple of algorithms. For point cloud processing they use ICP; for mesh reconstruction, PSR. Point cloud segmentation algorithms are divided into two categories: feature-based segmentation (region growth and clustering, k-means clustering) and model-based segmentation (Hough transform and RANSAC). After this the authors go in depth on applications of 3D reconstruction in civil engineering, such as reconstructing construction sites and the pipelines of MEP systems. Finally, the authors go over the issues and challenges of 3D reconstruction.
====2D LiDAR and Camera Fusion in 3D Modelling of Indoor Environment====
This paper goes over how to effectively fuse data from multiple sensors in order to create a 3D model. An entry-level camera is used for colour and texture information, while a 2D LiDAR is used as the range sensor. To calibrate the correspondences between the camera and the LiDAR, a planar checkerboard pattern is used to extract corners from the camera image and from the intensity image of the 2D LiDAR; the authors thus rely on 2D-2D correspondences. A pinhole camera model is applied to project 3D point clouds onto 2D planes, and RANSAC is used to estimate the point-to-point correspondence. Using transformation matrices, the authors match the colour images of the digital camera with the intensity images. By aligning 3D colour point clouds from different locations, the authors generate the 3D model of the environment. Via a WidowX turret servo, the 2D LiDAR is moved in the vertical direction to obtain a 180-degree horizontal field of view. The digital camera rotates in both vertical and horizontal directions to generate panoramas by stitching series of images. In the third paragraph the authors go over how they calibrated the two image sources: a fiducial target is used to determine the rigid transformation between the camera images and the 3D point cloud, RANSAC is used to reject outliers during the calibration process, and a checkerboard with 7x9 squares is employed to find correspondences between the LiDAR and the camera. Finally, the authors go over their results.
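The pinhole camera model used here to project 3D points onto 2D planes can be sketched in a few lines. The intrinsic parameters below (focal lengths and principal point) are made-up placeholders, not values from the paper:

```python
# Sketch: pinhole projection of a 3D point (camera coordinates) onto the
# image plane: u = fx * X/Z + cx, v = fy * Y/Z + cy.
def project(point3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    x, y, z = point3d
    return (fx * x / z + cx, fy * y / z + cy)

# A point 2 m ahead and 0.5 m to the right lands right of the image centre:
print(project((0.5, 0.0, 2.0)))  # → (445.0, 240.0)
```

Calibration, as described in the paper, is exactly the process of estimating these intrinsics (plus distortion) and the rigid transform between the LiDAR and camera frames.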
====A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR====
This paper is a review of multiple SLAM systems whose main vision component is a 3D LiDAR integrated with other sensors. LiDAR, camera, and IMU are the 3 most used components, and all have their advantages and disadvantages. The paper discusses LiDAR-IMU coupled systems and visual-LiDAR-IMU coupled systems, both tightly and loosely coupled.
Most loosely coupled systems are based on the original LOAM algorithm by J. Zhang et al.; these systems are relatively new in that the paper by Zhang is from 2014, and there have been many advancements in this technology since. The LiDAR-IMU systems often use the IMU to increase the accuracy of the LiDAR measurements, and new developments involve speeding up the ICP algorithm, which combines point clouds, with clever tricks and/or GPU acceleration. The LiDAR-visual-IMU systems use the complementary properties of LiDAR and cameras: LiDAR needs textured environments while vision sensors lack the ability to perceive depth, so the cameras are used for feature tracking and, together with the LiDAR data, allow for more accurate pose estimation.
In contrast to the speed and low computational complexity of loosely coupled systems, tightly coupled systems sacrifice some of this for greater accuracy. One of the main points of these systems is a derivation of the error term and pre-integration formula for the IMU, which can be used to increase the accuracy of the IMU measurements by estimating the IMU bias and noise. In LiDAR-IMU systems this derivation is used for removing distortion in LiDAR scans, for optimizing both measurements, and for many different approaches to coupling the 2 devices for greater accuracy and computation speed. The LiDAR-visual-IMU systems use the strong correlation between images and point clouds to produce more accurate pose estimation.
The authors then run performance comparisons on SLAM datasets, where most recent SLAM systems appear to estimate pose very close to the ground truth, even over distances of several hundred meters.
====An information-based exploration strategy for environment mapping with mobile robots====
This paper proposes a mathematically oriented way of mapping environments. Based on relative entropy, the authors evaluate a method to produce a planar map of an environment, using a laser range finder to generate local point-based maps that are compiled into a global map of the environment. Notably, the paper also discusses how to localize the robot in the produced global map.
The generated map is a continuous curve that represents the boundary between navigable space and obstacles. The curve is defined by a large set of control points obtained from the range finder. In the proposed method, the robot generates and moves to a set of observation points, at which it takes a 360-degree snapshot of the environment using the range finder, yielding a set of points a specified number of degrees apart at some distance from the sensor. The measured points form a local map, which is also characterised by the given uncertainty of the measurements. Each local map is then integrated into the global map (the combination of all local maps), which is used to determine the next observation point and the position of the robot in global space.
The researchers go on to describe how the quality of the proposal is measured, namely by the distance travelled and the uncertainty of the map. The uncertainty is a function of the uncertainty in the robot's current position and the accuracy of the range finder. The robot has a pre-computed expected position of each point and a post-measurement position of each point, which are then evaluated through relative entropy to compute the increment of the point information. This and similar equations for the robot's position data are used to select the optimal points for observing the environment. Lastly, the points of each observation point are combined into one map using the robot's position data.
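The relative entropy (Kullback-Leibler divergence) at the heart of this evaluation can be illustrated for the simplest case of two 1D Gaussians; the paper's formulation is more general, but the interpretation is the same: a measurement that tightens the distribution carries positive information:

```python
# Sketch: KL divergence between two 1D Gaussians N(mu0, sigma0^2) and
# N(mu1, sigma1^2). Zero iff the distributions coincide, positive otherwise.
import math

def kl_gaussian(mu0, sigma0, mu1, sigma1):
    return (math.log(sigma1 / sigma0)
            + (sigma0 ** 2 + (mu0 - mu1) ** 2) / (2 * sigma1 ** 2)
            - 0.5)

# A measurement that halves the uncertainty of a point yields positive
# information gain relative to the prior:
print(kl_gaussian(0.0, 0.5, 0.0, 1.0) > 0.0)  # → True
```

Summing such per-point information gains (and trading them off against distance travelled) is what lets an exploration strategy pick the most informative next observation point.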
====Mobile Robot Localization Using Landmarks====
The paper discusses a method to determine a robot's position using landmarks as reference points. This is a more absolute system than purely inertia-based localization. The paper assumes that the robot can identify landmarks and measure their position relative to each other. Like other papers, it highlights the method's importance due to the error accumulation of relative methods.
It highlights the robot's capability to find landmarks, associate landmarks with points on a map, and use this data to compute its position.
It uses triangulation between 3 landmarks to find its position with low error. The paper also discusses how to re-identify landmarks that were misjudged with new data. The robot takes 2 images (using a reflective ball to create a 360-degree image) and solves the correspondence problem (identifying an object from 2 angles) to find its location. In the paper, the technique is tested in an office environment.
The paper discusses how to perform triangulation using an external coordinate system and the localisation of the robot: the vectors to the landmarks are compared, and using their angles and magnitudes the position can be computed. Next, the paper discusses the same technique adjusted for noisy data. It uses least squares to derive a usable estimate, evaluating the robot's rotation relative to at least 2 landmarks, and then evaluates the expected distribution of the angle error and of the position on each axis to correct for the noise, using the method described above.
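The noise-free core of such bearing-based triangulation can be sketched as a small least-squares problem: with absolute bearings to known landmarks, the robot lies on a line through each landmark, and fitting those line constraints recovers its position. This is an illustration of the principle only, not the paper's exact method:

```python
# Sketch: least-squares triangulation from absolute bearings to landmarks.
# Each bearing t to a landmark (lx, ly) constrains the robot to the line
# sin(t)*(x - lx) - cos(t)*(y - ly) = 0; we solve the 2x2 normal equations.
import math

def locate(landmarks, bearings):
    sxx = sxy = syy = bx = by = 0.0
    for (lx, ly), t in zip(landmarks, bearings):
        a, b = math.sin(t), -math.cos(t)
        c = a * lx + b * ly
        sxx += a * a; sxy += a * b; syy += b * b
        bx += a * c;  by += b * c
    det = sxx * syy - sxy * sxy  # assumes bearings are not all parallel
    return ((bx * syy - by * sxy) / det, (sxx * by - sxy * bx) / det)

# Robot at (1, 1); exact bearings measured toward three known landmarks:
lms = [(4.0, 1.0), (1.0, 5.0), (-2.0, 1.0)]
brs = [math.atan2(ly - 1.0, lx - 1.0) for lx, ly in lms]
print(locate(lms, brs))  # ≈ (1.0, 1.0)
```

With noisy bearings, each constraint is only approximately satisfied, and the same least-squares solution (weighted by the expected angle-error distribution) yields the corrected estimate the paper derives.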
The Fuzzy Control Approach for a Quadruped Robot Guide Dog [47]
This paper essentially presents a robotic guide dog: think of Spot from Boston Dynamics with a leash, trained to guide blind people. An advantage is that Spot has proven capable of walking stairs, so such a robot should be fast. A drawback is that its low viewpoint makes it hard to guide blind people.
The paper also presents a 'fuzzy' control process that keeps variations in road surface from affecting the dog's gait. The rest of the paper shows how this controller can be designed; it does not cover how to guide a blind person.
The authors conclude that their fuzzy algorithm made the dog's walk noticeably smoother.
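For illustration, a toy fuzzy controller in the same spirit might look like this. The membership functions, the rule base, and the pitch-error input are hypothetical; they are not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_gait_correction(pitch_error):
    """Map a body-pitch error (radians) to a gait correction using a
    tiny rule base, defuzzified by a weighted average of rule outputs."""
    rules = [  # (membership degree, crisp output of the rule)
        (tri(pitch_error, -0.4, -0.2, 0.0), -1.0),  # leaning back -> shorten step
        (tri(pitch_error, -0.2,  0.0, 0.2),  0.0),  # level -> no change
        (tri(pitch_error,  0.0,  0.2, 0.4),  1.0),  # leaning forward -> lengthen step
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

The appeal of this structure is that overlapping membership functions blend the rules smoothly, which is what produces the smoother gait the authors report.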
Design of a Portable Indoor Guide Robot for Blind People
This design approaches the guide-dog replacement differently, not as a quadruped robot, and is mainly aimed at indoor use. The paper also did some research on what blind people need; a survey it conducted found, for example, that 90% of respondents worry about obstacles in the air while travelling. The design is essentially a motorized walker fitted with sensors.
The robot is foldable, with an unfolded height of 700 mm, and the mechanical design is explained in detail. It has no real stair-climbing capability.
The authors conclude that the robot performed well: a low-cost, easy-to-carry blind-guide robot with strong perception.
Guiding visually impaired people in the exhibition
This paper talks about a robotic guide used to help (partially) blind people navigate an exhibition (a noisy, crowded (4 square meters/person), unfamiliar environment). These people are often faced with the challenge of maintaining spatial orientation; ‘the ability to establish awareness of space position relative to landmarks in the surrounding environment’. The paper proposes that supporting functional independence of these people can thus be achieved by ‘providing references and sorts of landmarks to enhance awareness of the surroundings’.
The technology used by this paper to achieve this is a handheld device capable of radio-frequency localization. To prepare the environment, an RFID sensor was placed roughly every 300 square meters (about a 17x17 m area) at points of interest, services, and major areas. The paper does not go into the details of how the localization is done, but an educated guess would be that the guiding devices carried by the guided persons are scanned by these fixed sensors, which then communicate to calculate the position of the guided person. Keep in mind this exhibition took place in 2006; the achieved resolution was 5 meters (the minimal distance between distinguishable tags).
The interface of the device makes use of hardware buttons, which they find a solution suited for visually impaired people. Apart from standard navigation and audio control buttons, the device was also equipped with a button which gives quick access to an emergency number.
In this particular use-case the device guided people using an event-system which would ask the user if they wanted to hear a description of their environment. This event would trigger when the handheld device would recognize signals from local sensors. This description would include:
- an extended title
- the description of the point of interest
- one or more extended descriptions
- descriptions to invite and spatially guide the user near the featured flowers and plants.
The device would also describe near points of interest such as crossroads, entrances, exits, restaurants, toilets etc. such that the user can create their own mental map of their surroundings allowing them to build and follow their own path; being unconstrained by the predefined path.
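The event system described above could be sketched as follows. All names here, such as `PointOfInterest` and `ask_user`, are hypothetical, since the paper does not detail the implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PointOfInterest:
    tag_id: str
    title: str
    description: str
    extended: list = field(default_factory=list)  # extended descriptions

def on_tag_detected(tag_id, poi_index, announced, ask_user):
    """Event handler sketch: when the handheld recognizes a local RFID
    tag, offer the user a spoken description of that point of interest.
    `ask_user` is a hypothetical yes/no audio-prompt callback; returns
    the list of messages to read aloud, or None."""
    poi = poi_index.get(tag_id)
    if poi is None or tag_id in announced:
        return None  # unknown tag, or already described this visit
    announced.add(tag_id)
    if ask_user(f"You are near {poi.title}. Hear a description?"):
        return [poi.title, poi.description, *poi.extended]
    return None
```

Keeping the prompt opt-in mirrors the paper's design choice: the user stays in control of when information is read out, rather than being interrupted at every tag.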
To overcome noise, the user was provided with headphones. Another problem was that some users were frustrated by the device's silence when they were not at a point of interest; this was solved by adding a message stating that no point of interest was nearby.
The device was recognized by the visually impaired users to allow them a large degree of freedom which traditional (fixed) guides do not.
The authors end by noting that the experience would likely be significantly improved with better localization technology.
CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People
This paper covers the design of an autonomous navigation robot for blind people in unfamiliar environments, together with the results of a user study done for this product. The robot uses a floorplan with relevant points of interest, a LiDAR, and a stereo camera with convolutional neural networks for localisation, path planning, and obstacle avoidance.
Design
The robot moves as a differential-steered system, with motors controlled by a RoboClaw controller, and allows users to manually push or pull it. It uses a LiDAR and a stereo camera (ZED) and is implemented with ROS (Robot Operating System). It is shaped like a suitcase so that it can blend in with the environment; like a guide dog, it is held on the user's left side and stands slightly in front of the user, which allows the robot to protect the user from collisions.
Mapping - the robot relies on a floorplan map with the locations of points of interest. The environment is mapped beforehand via the LiDAR, which is placed on the frontal edge of the robot.
Localisation - using wheel odometry and LiDAR scanning, the robot estimates its current location by comparing the real-time scan against the previously generated map using the Adaptive Monte Carlo Localisation (AMCL) package of ROS. In addition, odometry information can be computed from the LiDAR and stereo camera.
Path planning - a path on the LiDAR map is planned from the user's starting point to the destination. To avoid obstacles and navigate a dynamic environment, local low-level pathing is implemented using the ROS navigation packages. A custom algorithm also lets the robot account for the space occupied by both itself and the user in its pathfinding.
The robot additionally provides haptic feedback: the authors use vibro-tactile feedback (different vibration locations and patterns) on the handle to convey the robot's intent to the user, and buttons on the handle let the user change the robot's speed. After this explanation, the paper goes over the conducted user study and its results.
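The Monte Carlo localisation step that AMCL performs can be sketched as a basic particle filter. This is a toy version: `expected_scan` stands in for ray-casting on the floorplan map, and in the usage below it simply returns the particle position, so the "scan" measures position directly; the noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, odom, scan, expected_scan, sigma=0.3):
    """One Monte Carlo localization step: diffuse particles by the
    odometry increment, weight each by how well the measured scan
    matches the scan predicted from the map at that particle, then
    resample proportionally to weight."""
    # Motion update: apply odometry with additive noise.
    particles = particles + odom + rng.normal(0.0, 0.05, particles.shape)
    # Measurement update: Gaussian likelihood of the scan.
    weights = np.array([
        np.exp(-np.sum((scan - expected_scan(p)) ** 2) / (2 * sigma ** 2))
        for p in particles])
    weights /= weights.sum()
    # Resample: particles in likely places survive and multiply.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

Iterating this step makes an initially spread-out particle cloud collapse around the true pose, which is how AMCL recovers a position estimate from wheel odometry plus LiDAR scans.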
Tour-Guide Robot
This paper introduces a tour-guide robot built on Kinect technology. The robot follows tourists wherever they go, avoiding obstacles and providing information. The paper begins by naming previous implementations of such tour-guide robots: Rhino, Minerva, Asimo, Tawabo, the Toyota tour guide robot, and Skycall. The Kinect is used to recognise gestures, spoken commands, and faces; its main parts are an RGB camera, a 3D depth-sensing system, and a multi-array microphone. The robot's platform has ultrasonic sensors to detect obstacles. RFID is used to detect RFID cards around the museum so the robot can correctly identify an item and play the corresponding audio file. The base robot platform is Eddie.
This paper reviews existing autonomous campus and tour-guiding robots. SLAM is the most-often used technique: building a map of the environment and guiding the robot to the goal position. Common techniques for robot navigation include human-machine interfaces, speech synthesis, obstacle avoidance, and 3D mapping. ROS is a popular open-source framework for operating autonomous robots, providing services designed for a heterogeneous computer cluster. SLAM is achieved via laser scanners (LiDAR) or RGB-D cameras.
The paper names some popular such robots: TurtleBot2, a low-cost, ROS-enabled autonomous robot using a Microsoft Kinect (RGB-D) camera; TurtleBot3, its upgraded version, which uses LiDAR instead; the Pepper robot, a wheeled service robot used for assisting people in public places like malls, museums, and hotels; and REEM-C, a ROS-enabled autonomous humanoid robot using an RGB-D camera for 3D mapping. The paper contains useful tables with information about these robots, as well as popular ROS computing platforms and mapping sensors.
One cited work proposes using LiDAR measurements of the road surface to detect road boundaries, determining the existence of curbs with a multiple-model method. Another proposes a Kinect v2 sensor rather than range finders such as 2D LiDAR, since it can create dense and robust maps of the environment; it is based on the time-of-flight measurement principle and can be used outdoors. That work also introduces noise models for the Kinect v2 for calibration in both the axial and lateral directions, taking measurement distance, angle, and sunlight incidence into account. As an example of a tour-guide robot, the paper presents Nao, which provides tours of a laboratory and is more focused on human interaction, performing and detecting gestures. Finally, there is NTU-1, an autonomous tour-guide robot for the campus of the National Taiwan University.
It is a big robot, weighing around 80 kg, with a two-wheel differential drive actuated by a brushless DC motor. It uses multiple sensing technologies such as DGPS, dead reckoning, and a digital compass, which are all fused by way of Extended Kalman Filtering. For obstacle avoidance and shortest-path planning, 12 ultrasonic sensors are used, allowing the robot to detect objects within a range of 3 meters. Another robot explored in the paper is an intelligent robot for guiding the visually impaired in urban environments; it uses two laser range finders, GPS, a camera, and a compass. Other touring robots explored in the paper are ASKA, Urbano, Indigo, LeBlanc, Konard, and Suse.
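The sensor fusion NTU-1 performs can be illustrated with a stripped-down Kalman filter cycle fusing dead-reckoning increments with absolute position fixes. The real robot uses an Extended Kalman Filter over a richer state (position, heading, etc.), so this scalar, linear version is only a sketch; the noise variances are placeholders.

```python
def kf_fuse(x, P, u, z, Q=0.1, R=1.0):
    """One predict/update cycle of a linear Kalman filter.
    x, P : position estimate and its variance (scalar state)
    u    : dead-reckoning (odometry) increment since the last cycle
    z    : absolute position fix (e.g. a DGPS reading)
    Q, R : process and measurement noise variances (placeholders)."""
    # Predict: integrate the odometry increment; uncertainty grows.
    x, P = x + u, P + Q
    # Update: blend in the absolute fix, weighted by the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P
```

The key behaviour is that dead reckoning alone makes `P` grow without bound, while each absolute fix shrinks it back, which is exactly why the paper fuses both rather than relying on either source alone.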
- ↑ Romlay, M. R. M., Toha, S. F., Ibrahim, A. M., & Venkat, I. (2021). Methodologies and evaluation of electronic travel aids for the visually impaired people: a review. Bulletin of Electrical Engineering and Informatics, 10(3), 1747–1758. https://doi.org/10.11591/eei.v10i3.3055
- ↑ Guiding visually impaired people in the exhibition (researchgate.net)
- ↑ What are the problems that the visually impaired face with the white cane? (n.d.). Quora. https://www.quora.com/What-are-the-problems-that-the-visually-impaired-face-with-the-white-cane
- ↑ Healthdirect Australia. (n.d.). Guide dogs. healthdirect. https://www.healthdirect.gov.au/guide-dogs#:~:text=Guide%20dogs%20help%20people%20who,city%20centres%20to%20quiet%20parks.
- ↑ What A Guide Dog Does. (n.d.). Guide Dogs Site. https://www.guidedogs.org.uk/getting-support/guide-dogs/what-a-guide-dog-does/
- ↑ Guide Dogs Vs. White Canes: The Comprehensive Comparison – Clovernook. (2020, 18 September). https://clovernook.org/2020/09/18/guide-dogs-vs-white-canes-the-comprehensive-comparison/
- ↑ Guide Dog Etiquette: What you should and shouldn’t do – Clovernook. (2020, 10 September). https://clovernook.org/2020/09/10/guide-dog-etiquette/
- ↑ Guide Dogs for the Blind. (2020, 1 July). Guide Dog Training. https://www.guidedogs.com/meet-gdb/dog-programs/guide-dog-training#:~:text=Guide%20dogs%20take%20their%20cues,they%20are%20at%20all%20times.
- ↑ Guide Dogs for the Blind. (2020b, July 1). Guide Dog Training. https://www.guidedogs.com/meet-gdb/dog-programs/guide-dog-training#:~:text=Guide%20dogs%20take%20their%20cues,they%20are%20at%20all%20times.
- ↑ 12.0 12.1 The Fuzzy Control Approach for a Quadruped Robot Guide Dog. https://link.springer.com/article/10.1007/s40815-020-01046-x
- ↑ 13.0 13.1 https://ieeexplore.ieee.org/document/9536077
- ↑ https://youtu.be/mh5L3l_7FqE
- ↑ 17.0 17.1 Mavrogiannis, C., Baldini, F., Wang, A., Zhao, D., Trautman, P., Steinfeld, A., & Oh, J. (2021). Core challenges of social robot navigation: A survey. arXiv preprint arXiv:2103.05668.
- ↑ Helbing, D., Buzna, L., Johansson, A., & Werner, T. (2005). Self-Organized Pedestrian Crowd Dynamics: Experiments, Simulations, and Design Solutions. Transportation Science, 39(1), 1–24. https://doi.org/10.1287/trsc.1040.0108
- ↑ Country - The International Agency for the Prevention of Blindness (iapb.org)
- ↑ 20.0 20.1 Salvini, P., Paez-Granados, D. & Billard, A. Safety Concerns Emerging from Robots Navigating in Crowded Pedestrian Areas. Int J of Soc Robotics 14, 441–462 (2022). https://doi.org/10.1007/s12369-021-00796-4
- ↑ 21.0 21.1 21.2 CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People (acm.org)
- ↑ ANTHROPOMETRY AND BIOMECHANICS. (n.d.). https://msis.jsc.nasa.gov/sections/section03.htm
- ↑ WHO. (n.d.). ASSISTIVE PRODUCT SPECIFICATION FOR PROCUREMENT. At who.int. https://www.who.int/docs/default-source/assistive-technology-2/aps/vision/aps24-white-canes-oc-use.pdf?sfvrsn=5993e0dc_2
- ↑ dog-harnesses-store.co.uk. (n.d.). Best Guide Dog Harnesses in UK for Mobility Assistance. https://www.dog-harnesses-store.co.uk/guide-dog-harness-uk-c-101/#descSub
- ↑ Using contact-based inducement for efficient navigation in a congested environment. (2015, August 1). IEEE Conference Publication | IEEE Xplore. https://ieeexplore.ieee.org/document/7333673
- ↑ Trautman, P., Ma, J., Murray, R. M., & Krause, A. (2015). Robot navigation in dense human crowds: Statistical models and experimental studies of human–robot cooperation. The International Journal of Robotics Research, 34(3), 335-356.
- ↑ 27.0 27.1 27.2 ISO 15066:2016(EN) Robots and robotic devices — Collaborative robots, International Organization for Standardization. https://www.iso.org/standard/62996.html, 2016
- ↑ 28.0 28.1 28.2 ISO 10218-2:2011 Robots and robotic devices — Safety requirements for industrial robots — Part 2: Robot systems and integration, International Organization for Standardization, https://www.iso.org/standard/41571.html, 2011-07
- ↑ Henderson LF. The statistics of crowd fluids. Nature. 1971 Feb 5;229(5284):381-3. doi: 10.1038/229381a0. PMID: 16059256.
- ↑ Helbing, D., & Molnar, P. (1995). Social force model for pedestrian dynamics. Physical review, 51(5), 4282–4286. https://doi.org/10.1103/physreve.51.4282
- ↑ Z. Kowalczuk and T. Merta, "Modelling an accelerometer for robot position estimation," 2014 19th International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 2014, pp. 909-914, doi: 10.1109/MMAR.2014.6957478.
- ↑ Woodman, O. J. (2007). An introduction to inertial navigation (No. UCAM-CL-TR-696). University of Cambridge, Computer Laboratory.
- ↑ T. Lee, J. Shin and D. Cho, "Position estimation for mobile robot using in-plane 3-axis IMU and active beacon," 2009 IEEE International Symposium on Industrial Electronics, Seoul, Korea (South), 2009, pp. 1956-1961, doi: 10.1109/ISIE.2009.5214363.
- ↑ Athani, V. V. (1997). Stepper motors: fundamentals, applications and design. New Age International.
- ↑ https://arxiv.org/pdf/1903.01067v2.pdf
- ↑ http://www.roboticsproceedings.org/rss09/p37.pdf
- ↑ https://www.robots.ox.ac.uk/~mobile/drs/Papers/2022RAL_zhang.pdf
- ↑ Básaca-Preciado, L. C., Sergiyenko, O. Y., Rodríguez-Quinonez, J. C., García, X., Tyrsa, V. V., Rivas-Lopez, M., Hernandez-Balbuena, D., Mercorelli, P., Podrygalo, M., Gurko, A., Tabakova, I., & Starostenko, O. (2013). Optical 3D laser measurement system for navigation of autonomous mobile robot. https://www.sciencedirect.com/science/article/pii/S0143816613002480
- ↑ Dorit Borrmann, Andreas Nüchter, Marija Ðakulović, Ivan Maurović, Ivan Petrović, Dinko Osmanković, Jasmin Velagić, A mobile robot based system for fully automated thermal 3D mapping (2014), https://www.sciencedirect.com/science/article/pii/S1474034614000408
- ↑ Ma, Z., & Liu, S. (2018). A review of 3D reconstruction techniques in civil engineering and their applications. https://www.sciencedirect.com/science/article/pii/S1474034617304275
- ↑ Juan Li, Xiang He, Jia L, 2D LiDAR and camera fusion in 3D modeling of indoor environment (2015), https://ieeexplore.ieee.org/document/7443100
- ↑ https://www.mdpi.com/2072-4292/14/12/2835
- ↑ Francesco Amigoni, Vincenzo Caglioti, An information-based exploration strategy for environment mapping with mobile robots, Robotics and Autonomous Systems, Volume 58, Issue 5, 2010, Pages 684-699, ISSN 0921-8890, https://doi.org/10.1016/j.robot.2009.11.005. (https://www.sciencedirect.com/science/article/pii/S0921889009002024)
- ↑ M. Betke and L. Gurvits, "Mobile robot localization using landmarks," in IEEE Transactions on Robotics and Automation, vol. 13, no. 2, pp. 251-263, April 1997, doi: 10.1109/70.563647.
- ↑ Bellotti, F., Berta, R., De Gloria, A., & Margarone, M. (2006). Guiding visually impaired people in the exhibition. Mobile Guide, 6, 1-6.
- ↑ Asraa Al-Wazzan , Farah Al-Ali, Rawan Al-Farhan , Mohammed El-Abd, Tour-Guide Robot (2016), https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7462397
- ↑ The Fuzzy Control Approach for a Quadruped Robot Guide Dog. SpringerLink.