PRE2022 3 Group5

From Control Systems Technology Group
!Role
|-
|Vincent van Haaren||1626736
|Human Interaction Specialist
|-
|}


==Project Idea==
<s>The project idea we settled on is designing a crawler robot to autonomously create 3D maps of difficult-to-traverse environments, so humans can plan routes through small unknown spaces.</s>

Our project concept is a guiding robot capable of helping users find rooms and locations. The concept will be designed with the university environment and students in mind. Compared to existing concepts, the points we will try to improve on are stair-climbing capability and speed, so that the robot does not slow its users down.


==Project planning==
{| class="wikitable"
|+
!Week
!Description
|-
|1
|Group formation
|-
|2
|Tasks:
*Create/do user survey study
*Research stair climbing mechanisms and create a list of hardware
*Expand upon mapping technologies for this purpose (finding/navigating the environment) and create a list of hardware
*Do some research on existing concepts and research for guiding robots (try to find previous projects)
Goals at the end of the week:
*Prototype components have a detailed plan (sketch) and a bill of materials
*Order components/find and reserve them in robotica-lab
|-
|Break
|Carnaval break
|-
|3
|Monday: split into sub-teams; work started on prototypes for LIDAR, locomotion and navigation
|-
|4
|Thursday: start of integration of all prototypes into the robot demonstrator
|-
|5
|Thursday: first iteration of the robot prototype done [MILESTONE]
|-
|6
|Buffer week - expected troubles with integration
|-
|7
|Environment & user testing started [MILESTONE]
|-
|8
|Iteration upon the design based on test results
|-
|9
|Monday: final prototype done [MILESTONE] & presentation
|-
|10
|Examination
|}

==Introduction==
In this project we have been allowed to pursue a self-defined project. Of course, the focus should be on USE: User, Society, and Enterprise. Our chosen project is the design of a product. Taking inspiration from our personal experiences, we chose to find a solution to the navigation problems we encounter in the campus buildings of the TU/e. After some research on the topic and contact with the TU/e Real Estate department, we found out that guidance robots for people with a visual impairment were in demand. This was thus chosen as our topic. More specifically, the problem statement is: ‘Visually impaired people have ineffective means of navigating through the, at times, confusing pathways of campus buildings.’ When researching state-of-the-art electronic travel aids (ETAs), we found three distinct categories of solutions: robotic navigation aids, smartphone solutions and wearable attachments. Their pros and cons are described in the table below:
{| class="wikitable"
|+
!Types of ETA
!Implementation
!Advantages
!Negatives
|-
|Robotic Navigation Aids
|Smart cane
|Offers portability and can be used as a normal white cane should the electronics cease to function
|Needs to be compact and lightweight. Lacks obstacle information because of its restricted sensing ability; offers little information for wayfinding and navigation purposes, as that requires bigger and bulkier hardware
|-
|Robotic Navigation Aids
|Robotic guide dog/mobile robot
|The system gives room for larger hardware, as it does not require the user to carry it
|Complicated mechanics while manoeuvring through stairs and terrain
|-
|Robotic Navigation Aids
|Robotic wheelchair
|Suitable for the elderly and people with a physical limitation; provides navigation and mobility assistance for elderly visually impaired people who cannot walk on their own, the multi-handicapped, or people who have more than one disabling condition
|Safety remains an issue, as user mobility fully depends on the robotic wheelchair; road-crossing and stair climbing are difficult circumstances where the reliability of the wheelchair is of extreme necessity
|-
|Smartphone solutions
|Android apps: maps, image processing
|Mobility/portability. No load or invasive factor, as the only device is the smartphone
|The system depends on the sensors available on the smartphone. It may communicate with an outer sensor such as a beacon or an external server, but that limits the usage to indoors; it requires a certain orientation for image processing or an internet signal for online maps
|-
|Wearable Attachments
|Eyeglasses, glove, belt, headgear, backpack
|Gives a natural appearance to the visually impaired user when navigating outdoors
|Too much attention is required, giving a cognitive load to the user. These devices are intrusive, as they cover the ears and involve the use of the hands; users are burdened with the system’s weight. Requires an extended period of training
|}
Sourced from: <ref>Romlay, M. R. M., Toha, S. F., Ibrahim, A. M., & Venkat, I. (2021). Methodologies and evaluation of electronic travel aids for the visually impaired people: a review. ''Bulletin of Electrical Engineering and Informatics'', ''10''(3), 1747–1758. <nowiki>https://doi.org/10.11591/eei.v10i3.3055</nowiki></ref>

Furthermore, another state-of-the-art solution for guiding devices was found: a device which uses electronic waypoints installed in the building to localise the user and relay directions and information about the surroundings<ref>Bellotti, F., Berta, R., De Gloria, A., & Margarone, M. (2006). Guiding visually impaired people in the exhibition. ''Mobile Guide'', ''6'', 1-6.</ref>.

A previous attempt was made at the TU/e (our case study) to use this method, but because it required infrastructure to be installed in all the buildings in which it would work, it was never implemented. Therefore, we have decided to discard all solutions that require such infrastructure.

Wearable attachments have been discarded as they are inherently invasive, meaning the user has to equip them themselves. Furthermore, larger attachments with many sensors are ruled out by weight limits, and wearing such a device during extended meetings is impractical. Any such device also requires some prior knowledge of how to operate it. For all these reasons, we have chosen not to pursue wearable attachments.

We have decided against smartphone solutions because it would be difficult to make a one-size-fits-all solution due to differing phones and sensors. A slightly more biased reason is that half of our group members are not at all adept at creating such applications and have no interest in the field. We also worried that we would struggle to create a practical app due to the limitations of phone hardware.

The robotic wheelchair was decided against due to its invasive nature and concerns for the user’s autonomy. Furthermore, this solution would be very bulky, which makes it unsuited for crowded spaces. Moreover, the user base will most likely consist of otherwise able-bodied students who do not need such support and might feel uncomfortable using such a device.

A smart cane is not well suited to guiding the user, as its small form factor and weight requirement would make inside-out localisation difficult.

The mobile platform guide robot has a few problems besides its price. The most important one is that it has trouble navigating stairs and rough terrain. Luckily, the robot will (for now) only be operating indoors in TU/e buildings. The presented use case of the TU/e campus has walk bridges connecting buildings and elevators in (almost) all buildings, which mitigates most of the solution’s downsides. These factors make it the perfect place to implement such a guidance robot.

In summary, we chose a robotic guide due to its user accessibility and potential for future improvements. It is a good way for people (with visual impairment or not) to be navigated through buildings.


==State of the Art==
===Literature Research===
{| class="wikitable"
|+Overview
!Paper Title
!Reference
!Reader
|-
|Modelling an accelerometer for robot position estimation
|<ref>Z. Kowalczuk and T. Merta, "Modelling an accelerometer for robot position estimation," 2014 19th International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 2014, pp. 909-914, doi: 10.1109/MMAR.2014.6957478.</ref>
|Jelmer S
|-
|An introduction to inertial navigation
|<ref>Woodman, O. J. (2007). ''An introduction to inertial navigation'' (No. UCAM-CL-TR-696). University of Cambridge, Computer Laboratory.</ref>
|Jelmer S
|-
|Position estimation for mobile robot using in-plane 3-axis IMU and active beacon
|<ref>T. Lee, J. Shin and D. Cho, "Position estimation for mobile robot using in-plane 3-axis IMU and active beacon," 2009 IEEE International Symposium on Industrial Electronics, Seoul, Korea (South), 2009, pp. 1956-1961, doi: 10.1109/ISIE.2009.5214363.</ref>
|Jelmer S
|-
|Mapping and localization module in a mobile robot for insulating building crawl spaces
|<ref>Mapping and localization module in a mobile robot for insulating building crawl spaces. (n.d.). https://www.sciencedirect.com/science/article/pii/S0926580517306726</ref>
|Jelmer L
|-
|A review of locomotion mechanisms of urban search and rescue robot
|<ref>Wang, Z. and Gu, H. (2007), "A review of locomotion mechanisms of urban search and rescue robot", ''Industrial Robot'', Vol. 34 No. 5, pp. 400-411. <nowiki>https://doi.org/10.1108/01439910710774403</nowiki></ref>
|Joaquim
|-
|Variable Geometry Tracked Vehicle, description, model and behavior
|<ref>Jean-Luc Paillat, Philippe Lucidarme, Laurent Hardouin. Variable Geometry Tracked Vehicle, description, model and behavior. Mecatronics, 2008, Le Grand Bornand, France. pp.21-23. ffhal-03430328</ref>
|Joaquim
|-
|Stepper motors: fundamentals, applications and design
|<ref>Athani, V. V. (1997). ''Stepper motors: fundamentals, applications and design''. New Age International.<br /></ref>
|Joaquim
|-
|Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities
|<ref>https://arxiv.org/pdf/1903.01067v2.pdf</ref>
|Jelmer L
|-
|Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization
|<ref><nowiki>http://www.roboticsproceedings.org/rss09/p37.pdf</nowiki></ref>
|Jelmer L
|-
|Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry
|<ref>https://www.robots.ox.ac.uk/~mobile/drs/Papers/2022RAL_zhang.pdf</ref>
|Jelmer L
|-
|Optical 3D laser measurement system for navigation of autonomous mobile robot
|<ref>Luis C. Básaca-Preciado, Oleg Yu. Sergiyenko, Julio C. Rodríguez-Quinonez, Xochitl García, Vera V. Tyrsa, Moises Rivas-Lopez, Daniel Hernandez-Balbuena, Paolo Mercorelli, Mikhail Podrygalo, Alexander Gurko, Irina Tabakova, Oleg Starostenko (2013), Optical 3D laser measurement system for navigation of autonomous mobile robot, https://www.sciencedirect.com/science/article/pii/S0143816613002480</ref>
|Boril
|-
|Design and basic experiments of a transformable wheel-track robot with self-adaptive mobile mechanism
|<ref>Design and basic experiments of a transformable wheel-track robot with self-adaptive mobile mechanism, IEEE Conference Publication, IEEE Xplore.</ref>
|Wouter
|-
|Rough terrain motion planning for actuated, Tracked robots
|<ref>Rough Terrain Motion Planning for Actuated, Tracked Robots, SpringerLink.</ref>
|Wouter
|-
|Realization of a Modular Reconfigurable Robot for Rough Terrain
|<ref>Realization of a Modular Reconfigurable Robot for Rough Terrain, IEEE Xplore.</ref>
|Wouter
|-
|A mobile robot based system for fully automated thermal 3D mapping
|<ref> Dorit Borrmann, Andreas Nüchter, Marija Ðakulović, Ivan Maurović, Ivan Petrović, Dinko Osmanković, Jasmin Velagić,  A mobile robot based system for fully automated thermal 3D mapping (2014), https://www.sciencedirect.com/science/article/pii/S1474034614000408 </ref>
|Boril
|-
|A review of 3D reconstruction techniques in civil engineering and their applications
|<ref>Zhiliang Ma, Shilong Liu, A review of 3D reconstruction techniques in civil engineering and their applications (2018), https://www.sciencedirect.com/science/article/pii/S1474034617304275?casa_token=Bv6W7b-GeUAAAAAA:nGuyojclQld2SMnIeHougCByarFJX7eu049kMp_IWrnU5e8ljX9RMao-U4vs6cB3nREk8JP3qIA</ref>
|Boril
|-
|Analysis and optimization of geometry of 3D printer part cooling fan duct
|<ref>Analysis and optimization of geometry of 3D printer part cooling fan duct, ScienceDirect.</ref>
|Wouter
|-
|2D LiDAR and Camera Fusion in 3D Modeling of Indoor Environment
|<ref> Juan Li, Xiang He, Jia L,  2D LiDAR and camera fusion in 3D modeling of indoor environment (2015), https://ieeexplore.ieee.org/document/7443100 </ref>
|Boril
|-
|A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR
|<ref>https://www.mdpi.com/2072-4292/14/12/2835</ref>
|Jelmer L
|-
|An information-based exploration strategy for environment mapping with mobile robots
|<ref>Francesco Amigoni, Vincenzo Caglioti,
An information-based exploration strategy for environment mapping with mobile robots,
Robotics and Autonomous Systems,
Volume 58, Issue 5,
2010,
Pages 684-699,
ISSN 0921-8890,
<nowiki>https://doi.org/10.1016/j.robot.2009.11.005</nowiki>.
(<nowiki>https://www.sciencedirect.com/science/article/pii/S0921889009002024</nowiki>)</ref>
|Jelmer S
|-
|Mobile robot localization using landmarks
|<ref>M. Betke and L. Gurvits, "Mobile robot localization using landmarks," in IEEE Transactions on Robotics and Automation, vol. 13, no. 2, pp. 251-263, April 1997, doi: 10.1109/70.563647.</ref>
|Jelmer S
|-
|A Staircase and Slope Accessing Reconfigurable Cleaning Robot and its Validation
|<ref>https://ieeexplore.ieee.org/document/9714003</ref>
|Wouter
|-
|The Fuzzy Control Approach for a Quadruped Robot Guide Dog
|<ref>https://link.springer.com/article/10.1007/s40815-020-01046-x?utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot</ref>
|Wouter
|-
|Design of a Portable Indoor Guide Robot for Blind People
|<ref>https://ieeexplore.ieee.org/document/9536077</ref>
|Wouter
|-
|Guiding visually impaired people in the exhibition
|<ref>Bellotti, F., Berta, R., De Gloria, A., & Margarone, M. (2006). Guiding visually impaired people in the exhibition. ''Mobile Guide'', ''6'', 1-6.</ref>
|Joaquim
|-
|CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People
|<ref> João Guerreiro, Daisuke Sato, Saki Asakawa, Huixu Dong, Kris M. Kitani, Chieko Asakawa, Designing and Evaluating an Autonomous Navigation Robot for Blind People (2019), https://dl.acm.org/doi/pdf/10.1145/3308561.3353771 </ref>
|Boril
|-
|Tour-Guide Robot
|<ref> Asraa Al-Wazzan , Farah Al-Ali, Rawan Al-Farhan , Mohammed El-Abd, Tour-Guide Robot  (2016), https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7462397 </ref>
|Boril
|-
|Dynamics and stability analysis on stairs climbing of wheel–track mobile robot
|<ref name=":0">Gao, X., Cui, D., Guo, W., Mu, Y., & Li, B. (2017). Dynamics and stability analysis on stairs climbing of wheel–track mobile robot. ''International Journal of Advanced Robotic Systems'', ''14''(4), 1729881417720783.<br /></ref>
|Joaquim
|-
|Research on Dynamics and Stability in the Stairs-climbing of a Tracked Mobile Robot
|<ref>Tao, W., Ou, Y., & Feng, H. (2012). Research on Dynamics and Stability in the Stairs-climbing of a Tracked Mobile Robot. ''International Journal of Advanced Robotic Systems'', ''9''(4), 146.</ref>
|Joaquim
|-
|Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques
|<ref>Debajyoti Bose, Karthi Mohan, Meera CS, Monika Yadav and Devender K. Saini, Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques (2021), https://www.tandfonline.com/doi/epdf/10.1080/14484846.2021.2023266?needAccess=true&role=button</ref>
|Boril
|}


====Modelling an accelerometer for robot position estimation====
It is commonly known that the most common tools used by visually impaired people are the white cane and the guide dog. The white cane is used to navigate and identify. With its help these people get tactile information about their environment, allowing the visually impaired to explore their surroundings and detect obstacles. However, the use of this can be cumbersome, as it can get stuck in cracks, or tiny spaces. Its efficiency is also limited in the event of bad weather conditions or a crowd.<ref>''What are the problems that the visually impaired face with the white cane?'' (n.d.). Quora. <nowiki>https://www.quora.com/What-are-the-problems-that-the-visually-impaired-face-with-the-white-cane</nowiki></ref> The guide dog on the other hand can guide the user through familiar paths, while also avoiding obstacles. They can also assist with locating steps, curbs and even elevator buttons. They can also keep their user centred, when crossing sidewalks for example.<ref>Healthdirect Australia. (n.d.). ''Guide dogs''. healthdirect. <nowiki>https://www.healthdirect.gov.au/guide-dogs#:~:text=Guide%20dogs%20help%20people%20who,city%20centres%20to%20quiet%20parks</nowiki>.</ref><ref>''What A Guide Dog Does''. (n.d.). Guide Dogs Site. <nowiki>https://www.guidedogs.org.uk/getting-support/guide-dogs/what-a-guide-dog-does/</nowiki></ref> There are a couple of issues with guide dogs however. They can only work for 6 to 8 years and have a very high cost of training.<ref>''Guide Dogs Vs. White Canes: The Comprehensive Comparison – Clovernook''. (2020, 18 September). <nowiki>https://clovernook.org/2020/09/18/guide-dogs-vs-white-canes-the-comprehensive-comparison/</nowiki></ref> They also require constant work on maintaining that training. The dog can also get sick. Another potential issue is bystanders that pet or take interest in the dog while it is working, which is a detriment to the handler.<ref>''Guide Dog Etiquette: What you should and shouldn’t do – Clovernook''. 
(2020, 10 September). <nowiki>https://clovernook.org/2020/09/10/guide-dog-etiquette/</nowiki></ref>


The paper discusses the need for high-precision models of location and rotation sensors in specific robot and imaging use-cases, specifically highlighting SLAM systems (Simultaneous Localization and Mapping systems).
None of these tools can efficiently assist the person in navigating to a specific landmark in an unknown environment.<ref>Guide Dogs for the Blind. (2020, 1 July). ''Guide Dog Training''. <nowiki>https://www.guidedogs.com/meet-gdb/dog-programs/guide-dog-training#:~:text=Guide%20dogs%20take%20their%20cues,they%20are%20at%20all%20times</nowiki>.</ref> That is why currently a human assistant is preferred/needed to perform such a task, for example when walking in a museum.<ref>Guide Dogs for the Blind. (2020b, July 1). ''Guide Dog Training''. <nowiki>https://www.guidedogs.com/meet-gdb/dog-programs/guide-dog-training#:~:text=Guide%20dogs%20take%20their%20cues,they%20are%20at%20all%20times</nowiki>.</ref> In regards to the technological means there is currently no robot that is capable of efficiently performing such a task, especially is the environment is a crowded building. However, there are multiple robots that have implemented parts of this function. In the following paragraph we have divided them into their own sections for ease of reading.


It highlight sensors that we may also need:
===Tour-guide robots===
" In this system the orientation data rely on inertial sensors. Magnetometer, accelerometer and gyroscope placed on a single board are used to determine the actual rotation of an object. "
We first begin with the tour-guide robots. These robots are used in places such as museums, university campuses, workplaces and more. The objective of these robots is to guide a user to a destination. Once at the destination these robots will most often relay information about the object, exhibition or room of the destination. In terms of implementation, these robots use a predefined map of the environment, where digital beacons are placed to mark the landmarks and points of interest. These robots also often make use of ways to detect and avoid obstacles such as using laser scanners (such as LiDAR), RGB cameras, kinetic cameras or sonars. This research paper <ref name="Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques"> Debajyoti Bosea, Karthi Mohanb, Meera CSc, Monika Yadavc and Devender K. Saini, Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques  (2021), https://www.tandfonline.com/doi/epdf/10.1080/14484846.2021.2023266?needAccess=true&role=button </ref> goes in depth on the advances in this field in the recent 20 years, the most notable of which are "Cate", "Konard and Suse". As our goal is to guide visually impaired people throughout the TU/e campus, this field of robotics is of upmost interest for the navigation system of a guidance robot.


It mentions that, in order to derive position data from acceleration, it needs to be doubly integrated, which tents to yield great inaccuracy.
====Aid technology for the visually impaired====
This section is split into two. First, we cover guidance robots for the visually impaired, after which we cover other technological aids that have been created for this user group.
=====Guidance robots=====
Guidance robots for the visually impaired are very similar to the tour guide robots. They often use much the same technology to navigate through the environment (predefined map with landmarks and obstacle detection and avoidance). What differentiates these robots from the tour-guide robots is the adaptation of the shape and functionality of the robots to better suit the needs of the visually impaired. The robots have handles, or leashes, which the visually impaired can hold, much the same as a guide dog or a white cane. As the user cannot see, the designs incorporate ways of communicating the intent of the robot to the user as well as ways of guiding the user around obstacles together with the robot. Examples of such designs are the Cabot<ref name="CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People"> João Guerreiro, Daisuke Sato, Saki Asakawa, Huixu Dong, Kris M. Kitani, Chieko Asakawa, Designing and Evaluating an Autonomous Navigation Robot for Blind People (2019), https://dl.acm.org/doi/pdf/10.1145/3308561.3353771 </ref>- a suitcase shaped robot, that stands in front of the user. It uses a LiDAR to analyse its environment and incorporates haptic feedback to inform the user of its intended movement pattern. Another possible design is the quadruped robot guide dog<ref name="The Fuzzy Control Approach for a Quadruped Robot Guide Dog">https://link.springer.com/article/10.1007/s40815-020-01046-x utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot</ref>, which based on Spot could be used as a robotic guide dog, given some adjustments. Finally there is also this design of a portable indoor guide robot<ref name="Design of a Portable Indoor Guide Robot for Blind People">https://ieeexplore.ieee.org/document/9536077</ref>, which is a low-cost guidance robot, which also alerts the user of obstacles in the air.


drawback: the robot needs to stop after a short time (to re-calibrate) when using double-integration to minimize error-accumulation:
====Crowd-navigation robots====
“Double integration of an acceleration error of 0.1g would mean a position error of more than 350 m at the end of the test”.
As our design has the objective of guiding the user through a university campus it is reasonable to expect that there will be crowds of students at certain times of the day. For our design to be helpful, it needs to handle such situations in an efficient way. Thus, we took inspiration from the minor field of crowd-navigation of robotics. The goal of these robots is exactly that- enabling the robot to continue moving through a crowd, rather than freeze up, every time there is an obstacle in front of it. Some relevant research are these papers "Unfreezing the Robot: Navigation in Dense, Interacting Crowds"<ref name="Unfreezing the Robot: Navigation in Dense, Interacting Crowds">Unfreezing the Robot: Navigation in Dense, Interacting Crowds, Peter Trautman and Andreas Krause, 2010 https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5654369&casa_token=3UPVOvK4kjwAAAAA:IjkyGh3f-uh_x-01jDPtspxLX--eSCBTrZEGTwtVEXc8hU9D2oLLEuOCTCz6OdGHWmy76bX3JA&tag=1</ref>, a robot that can navigate crowds with deep reinforcement learning<ref name="Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning">Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning, Changan Chen, Yuejiang Liu, Sven Kreiss and Alexandre Alahi, 2019, https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8794134&casa_token=neBCeEpBndIAAAAA:wZuGoZYF-YCscI-kJGi5ljIIGkUFpzejSTaxySxytUbIUKeV4sUZze6lZN32gw2DmKwbw-G6ZA</ref>.
==User scenarios==
To get a better feeling of the problem, and the possible solutions two user scenarios are made that show the impact of the guide robot on visually impaired people that want to move through unknown crowded spaces. The design mentioned in these stories are both not what we ended up making, but the intended goal is the same; these stories and the solution we ended up making both try to expand the navigational tools a guidance robot has in crowded spaces. It is important to note that some parts of the robot here described fall out of the scope of the exact problem solved.


An issue in modelling the sensors is that rotation is measured by gravity, which is not influenced by for example yaw, and gets more complicated under linear acceleration.
===Physical contact through crowded spaces===
The paper modelled acceleration, and rotation according to various lengthy math equations and matrices, and applied noise and other real-word modifiers to the generated data.
Jack is partially sighted and can see only a small part of what is in front of him. He has recently been helping out fellow students with their field tests which tests a robot guide. Last month he worked with a robot called Visior which helps steer him through his surroundings. Visior is a robot which is inspired and shares its physical features with CaBot.


It notably uses cartesian and homogeneous coordinates in order to seperate and combine different components of their final model, such as rotation and translation. These components are shown in matrix form and are derived from specification of real-world sensors, known and common effects, and mathematical derivations of the latter two.
When Jack used Visior to get to the library to pick up a print request he had to pass through a mediumly-crowded Atlas building since there was an event going on. This went mostly as expected; not too fast and having to stop semi-periodically because of people walking or stopping in front of Visior. The robot was strictly disallowed to purposely make physical contact with other humans. Jack knows this so he learned to step up in these situations and try to kindly ask for the people in front to make way. This used to happen less when he used his white cane since people would easily identify him and his needs. After Jack arrived at printing room in MetaForum he picked up his print request. He handily put his batch of paper on top of his guiding robot, so he didn’t have to carry it himself.  


The proposed model can be used to test code for our robot's position computations.
On his way back he almost fell over his guiding robot when it suddenly stopped when a hurried student ran by. Luckily, he did not get hurt. When Jack came home after this errand he crashed on his couch after an exhausting trip of anticipating the robot’s quirky behaviour.


====An introduction to inertial navigation====
The next day the researchers and developers of Visior came to ask about his experiences. Jack told them about his experience with Visior and their trip to the library. The developers thanked him for his feedback and started working on improving Visior.
This paper (as report) is meant to be a guide towards determining positional and other navigation data from interia based sensors like gyroscopes, accelerometers and IMU's in general.  


It starts by explaining the inner workings of a general IMU, and gives an overview of an algorithm used to determine position from said sensors' readings using integration, showing what intermitted values represent using pictograms.
This week they came back with the now new and improved Visior-robot. This version has been installed with a softer exterior and now rides in front of Jack instead of by his side. The developers have made it capable of safely bumping into people without causing harm. They also made it capable of communicating with Jack if it thinks it might have to stop suddenly to make Jack a bit more at ease when traveling together.


It then proceeds to discuss various types of gyroscopes, their ways of measuring rotation (such as light inference), and resulting effects on measurements, which are neatly summarized in equations and tables. It takes a similar for Linear acceleration measurement devices.
The next day Jack used it to again make a trip to the printing space in MetaForum to compare the experience. When passing through the crowded Atlas again (there somehow always seems to be an event there) he was pleasantly surprised. He found it easier to trust Visior now that it was able to communicate the points in the trip where Visior thought they might have to stop or bump into other pedestrians. For example, when they came across a slightly more crowded space Visior had guided Jack to walk alongside a flow of other pedestrians. Jack was made aware of the slightly unknown nature of their surroundings by Visior. Then when a student suddenly tried to cross their path without looking, Visior had unfortunately bumped into their side. Visior gradually slowed their pace down to a halt. Jack obviously felt the bump but was easily able to stay stable due to the prior warning and the less drastic decrease in speed. The student who was now naturally aware of the something moving in their blind spot immediately stepped out of the way and looked at Jack and Visior; seeing the sticker stating that Jack was visually impaired. Jack asked them if they were alright, to which they responded with saying they were fine after which they both went on their way. After picking up his print he went back home. On his way back he had to pass through the small bridge between MetaForum and Atlas in which a group of people were now talking, blocking a large part of the walking space. Visior guided Jack to a small traversable path open besides the group; taking the risk that the person there would slightly move and come onto their path. Visior and Jack could luckily squeeze by without any trouble and their way back home was further uneventful.


In the latter half the paper, concepts and methods relevant to processing the introduced signals are explained, and most importantly it is discussed how to partially account for some of the errors of such sensors. It starts by explaining how to account for noise using allan variance, and shows how this effects the values from a gyroscope.
When the Developers of Visior came back the next day to check up on him Jack told them the experience was leagues better then before. He told them he found walking with Visior less exhausting than it had been before and found the behaviour of it more human-like making it easier to work with.


Next, the paper introduces the theory behind tracking orientation, velocity and position. It talks about how errors in previous steps propagate through the process, resulting in the infamously dangerous accumulation of inaccuracy that plagues such systems.
===Familiar guidance advantage===
Meet Mark from Croatia.
He is a minor student following Mathematics courses and lives on (or near) campus.
Mark is severely near-sighted; having been born with the condition, he has never seen very well. Mark is optimistic but chaotic.
Mark enjoys his studies and likes playing piano.


Lastly, it shows how to simulate data from the earlier discussed sensors. Notably, though, the previously discussed paper already presents a more accurate and recent algorithm (building on this one).
Notable details:
Mark makes use of a white cane and audio-visual aids to assist with his near-sightedness.
He just transferred to TU/e for a minor and doesn't know many people yet; he will only be here for the short duration of the minor. He has a service dog at home, but does not have the resources, time, or connections to provide for it here, so he left it at home.


====Position estimation for mobile robot using in-plane 3-axis IMU and active beacon====
Indoors, Mark finds it hard to use his cane because of crowded hallways, and he dislikes apologizing when hitting someone with his cane or being an inconvenience to his fellow students. Mark can read and write English fine, but still feels the language barrier.
The paper highlights 2 types of position determination: absolute (does not depend on the previous location) and relative (does depend on the previous location). It goes on to highlight the advantages and disadvantages of several location determination systems, and then proposes a navigation system that mitigates these flaws as much as possible.


The paper continues by describing the sensors used to construct the in-plane 3-axis IMU:
In a world without our robot, Mark might have to navigate like this:
*an x/y accelerometer
Mark has just arrived for his 2nd day of lectures and will be going to the wide lecture hall Atlas -0.820. Mark again managed to walk to Atlas (as we will not be tackling exterior navigation), and uses his cane and experience to navigate the stairs and revolving door of Atlas, using it to determine the speed and size of the revolving element to get in, and to determine the position of the doors and the opening<ref>https://youtu.be/mh5L3l_7FqE</ref>.
*a z-axis gyroscope


Then, the ABS (active beacon system) is described. It consists of 4 beacons mounted to the ceiling and 2 ultrasonic sensors attached to the robot; the technique essentially uses radio-frequency triangulation to determine the absolute position of the robot. The last sensor described is an odometer, which needs no further explanation.
Once inside, he is greeted by a fellow student who noticed him navigating the door. Mark had already started concealing the use of his cane, as he doesn't like the attention, and so the university staff hadn't noticed him. Luckily, his fellow student is more than willing to help him get to his lecture hall. Unfortunately, the student is not well versed in guiding the visually impaired, and it has gotten busy with students changing rooms.


Then, the paper discusses the model used to represent the system in code. Most notably, the system is somewhat easier to understand because the in-plane measurements restrict much of the complexity of the robot's position to 2 dimensions. The paper also discusses the filtering and processing techniques used, such as a Kalman filter to combat noise and drift. The final processing pipeline discussed is immensely complex due to the inclusion of bounce, collision, and beacon-failure handling.
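The core of the filtering idea can be shown with a minimal 1D Kalman filter: fuse a drifting odometry prediction with noisy absolute beacon fixes. All numbers below are our own assumptions, far simpler than the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

x_est, p = 0.0, 1.0        # state estimate (position, m) and its variance
q, r = 0.05, 0.5           # assumed process and measurement noise variances
true_x = 0.0

errors = []
for step in range(200):
    true_x += 0.1                          # robot moves 0.1 m per step
    # Predict: propagate the estimate with odometry (here: the nominal motion).
    x_est += 0.1
    p += q
    # Update: correct with a noisy beacon measurement of absolute position.
    z = true_x + rng.normal(0.0, np.sqrt(r))
    k = p / (p + r)                        # Kalman gain
    x_est += k * (z - x_est)
    p *= (1 - k)
    errors.append(abs(x_est - true_x))
```

The filtered estimate tracks the true position noticeably tighter than the raw beacon fixes, whose standard deviation is sqrt(0.5) ≈ 0.71 m.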
Mark is dragged along to the lecture hall by his fellow student, bumping into other students who don't notice he cannot see them, as his guide hastily pulls him past. Mark almost loses his balance when his guide slips past some other students, narrowly avoiding a trashcan while dragging Mark by his arm. Mark didn't see the trashcan, which is not at eye level, and collides with its metal frame while trying to copy his guide's movement to dodge the other students. He is luckily unharmed and manages to follow his guide again, until he is finally able to sit in the lecture hall, ready to listen for another day.


Lastly, the paper discusses the results of their tests on the accuracy of the system, which showed a very accurate system, even when a beacon is lost.
The next day a student sees Mark struggling with the door and shows Mark a guide robot. The robot is given the task of getting Mark to the lecture hall he needs to be in. It starts moving and communicates its intended speed and direction through the feedback in the handle. As a result, Mark can anticipate the route the robot will take, similar to how a human guide would apply force to Mark's hand to change direction.


====Mapping and localization module in a mobile robot for insulating building crawl spaces====
The robot has reached the crowd of students moving through the busy part of Atlas. Its primary objective is to get Mark through this crowd, and even though many students notice the robot passing through, it still uses clear audio indications to warn students that it will be moving through, and notifies Mark through the handle that it is entering an alternate mode. Mark notices and becomes alert, as he also feels that the robot reduces the number of turns it makes, navigating through the crowd along the most straightforward route it can take. Mark likes this: it makes it easy for him to follow the robot, and for others to avoid them.
This paper describes a possible use case of the system we are trying to develop. According to studies referenced by the authors, the crawl spaces in many European buildings can be a key factor in the heat loss of houses. A good solution would therefore be to insulate below the floor to increase the energy efficiency of these buildings. However, this is a daunting task, since it requires opening up the entire floor and applying rolls of insulation. The authors therefore propose a robotic vehicle that can autonomously drive around the voids between floors and spray on foam insulation. Human-operated forms of this product already exist, but the authors suggest an autonomous vehicle can save time and costs. According to the authors, a big difficulty for the simultaneous localization and mapping (SLAM) problem in underfloor environments is the presence of dust, sand, poor illumination, and shadows, which makes the mapping very complex.


A proposed way to solve the complex mapping problem is to use camera and laser vision combined to create accurate maps of the environment. The authors also describe the 3 reference frames of the robot: the robot frame, the laser frame, and the camera frame. The laser provides a distance, and with known angles 3D points can be created, which can then be transformed into the robot frame. The paper also describes a way of mapping the colour data of the camera onto the points.
Still, a sleepy student bumps into the robot as it is crossing. Luckily the robot is designed to contact other students: its rounded shape, enclosed wheels (and other moving parts), and softened bumpers prevent harm. The robot does, however, slightly reduce its pace and makes an audible noise to let the sleepy student know they touched the robot too hard. Mark also notices the collision, partially because the bump makes the robot shake a little and lose a bit of pace, but mainly because his handle clearly and alarmingly notifies him. Mark also knows the robot will still continue, as the feedback of the handle indicates to him that it is not stopping.


The authors continue to explain how the point clouds generated from different locations can be fit together into a single point cloud with an iterative closest point (ICP) algorithm. The point clouds generated by the laser are too dense for good performance of the ICP algorithm. Therefore, the algorithm is divided into 3 steps: point selection, registration, and validation. During point selection the number of points is drastically reduced by downsampling and removing the floor and ceiling. Registration is done by running an existing ICP algorithm on different rotations of the environment. This ICP algorithm returns a transformation matrix that forms the relation between two poses, and the one that maximizes an optimization function is considered the best. The validation step checks whether the proposed alignment of the clouds is considered good enough. Finally, the pose is calculated depending on the results of the previous 3 steps.
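The registration step at the heart of ICP can be sketched with the standard SVD-based rigid alignment of matched points (this is the textbook Kabsch procedure on 2D points, not the authors' exact algorithm):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t so that dst ≈ R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)         # cross-covariance of centred points
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                    # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst_c - r @ src_c
    return r, t

# A downsampled 2D "scan", rotated 30 degrees and shifted; with perfect
# correspondences a single registration step recovers the motion exactly.
theta = np.deg2rad(30)
r_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(2).random((50, 2))
dst = src @ r_true.T + np.array([1.0, -0.5])

r_est, t_est = best_rigid_transform(src, dst)
```

Full ICP simply alternates this step with re-matching each point to its current nearest neighbour, which is why the point-selection step above matters so much for performance.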
After the robot gets through the crowd, it makes it to the lecture hall. It parks just in front of the door and tells Mark to extend his free hand slightly above hip level, telling him they have arrived at a closed door that opens towards them, swinging to his right, similar to how a guide would, so Mark can grab the door handle and, with support of the robot, open the door. The robot precedes Mark slowly into the space; it goes a bit too fast though, and Mark applies force to the handle, pulling it slightly in his direction. The robot notices this and waits for Mark.


Lastly, the paper discusses the results of some experiments, which show very promising results in building a map of the environment.
After they enter the lecture hall, the robot asks the lecturer to guide Mark to an empty seat (and may provide instructions on how to do so). When Mark is seated, the robot returns to its spot near the entrance, waiting for the next person.


====A review of locomotion mechanisms of urban search and rescue robot====
==Problem statement==
This paper investigates and compiles different locomotion methods for urban search and rescue robots. These include:
The previous problem statement was quickly found to be too broad. During the research into the state of the art it was found that the problem statement consists of a plethora of sub-problems which all have to work in tandem to create a functional solution. For this reason, it is important to scope the problem as much as possible to create a manageable project. Throughout research on the topic of guidance robots the following problems were identified:


=====Tracks=====
*Localization of the guide
A subgroup of track-based robots are variable geometry tracked (VGT) vehicles. These robots are able to change the shape of their tracks to anything from flat, to triangle-shaped, to bent tracks, which is useful for traversing irregular terrain. Some VGT vehicles which use a single pair of tracks are able to loosen the tension on the track to allow it to adjust its morphology to the terrain (e.g. allow the track to completely cover a half-sphere surface). An example of such a vehicle can be seen below.
*Identification of obstacles or other persons
[[File:VGTV.png|center|thumb|Single tracked variable geometry tracked vehicle 2B2P<ref>Paillat, Jean-Luc & Com, Jlpaillat@gmail & Lucidarme, Philippe & Hardouin, Laurent. (2008). Variable Geometry Tracked Vehicle (VGTV) prototype: conception, capability and problems. </ref>]]
*Navigation in sparse crowds
*Navigation in dense crowds
*Overarching strategic planning (e.g. navigating between multiple floors or buildings)
*Interaction with infrastructure (e.g. Doors, elevators, stairs, etc.)
*Effective communication with the user (e.g. user being able to set a goal for the guide)


We decided to focus on ‘Navigation of guidance robots in dense crowds on TU/e campus’. This was chosen because such a ‘skill’ (an ability the guide can perform) is necessary for navigation on campus. Typical scenarios in which such a skill would be useful for a typical student are on-campus events, navigating in and out of crowded lecture rooms, or simply a crowded bridge or hallway. Besides its necessity, it is also an active field of study without a clear final solution yet<ref name=":3">Mavrogiannis, C., Baldini, F., Wang, A., Zhao, D., Trautman, P., Steinfeld, A., & Oh, J. (2021). Core challenges of social robot navigation: A survey. ''arXiv preprint arXiv:2103.05668''.</ref>. Mavrogiannis et al.<ref name=":3" /> define the task of social navigation as ‘to efficiently reach a goal while abiding by social rules/norms’.


There also exist track-based robots which make use of multiple tracks on each side, such as the one illustrated below. It is a very robust system, making use of its smaller ‘flipper’ tracks to get over higher obstacles.
A reformulation of our problem statement thus results in the following research question: ‘How should robots socially navigate through crowded pedestrian spaces while guiding visually impaired users?’
[[File:Two-UGV-with-xed-shape-models.png|center|thumb|VGTV Packbot manufactured by Irobot. Can be seen to have an extra pair of 'flipper' tracks<ref>Paillat, Jean-Luc & Com, Jlpaillat@gmail & Lucidarme, Philippe & Hardouin, Laurent. (2008). Variable Geometry Tracked Vehicle (VGTV) prototype: conception, capability and problems. </ref>]]
<br />


=====Wheels=====
To work on this problem, the remaining functions of the previous list are assumed to be working.
The paper also describes multiple wheel-based systems, one of which is a hybrid of wheels and legs working like a sort of human-pulled rickshaw. This system, however, is complicated, since it needs to continuously map its environment and adjust its actions accordingly.


Furthermore, the paper details a wheel-based robot capable of directly grasping a human's arm and carrying them to safety.
==='''Scoping the problem'''===
At this time the first meeting with assistant professor César López was held. Mr. López is part of the Control Systems Technology group of the TU/e and focuses on designing navigation and control algorithms for robots operating in a semi-open world. The most important recommendation from our meeting was that the navigation should be split up even further and that a more defined crowd should be used to define the guide’s behaviour. He laid out that different crowds have different qualities. These crowds can roughly be split up into chaotic crowds, where there is no exact order and behaviour is less predictable (e.g., an airport where everyone needs to go in different directions), and structured crowds, where behaviour ''is'' predictable, such as crowds found walking in a hallway. The simplest structured crowd is one where all people walk in a single direction. This kind of behaviour is also described in a paper by Helbing et al.<ref>Helbing, D., Buzna, L., Johansson, A., & Werner, T. (2005). Self-Organized Pedestrian Crowd Dynamics: Experiments, Simulations, and Design Solutions. ''Transportation Science'', ''39''(1), 1–24. <nowiki>https://doi.org/10.1287/trsc.1040.0108</nowiki></ref>, which amongst other things describes crowd dynamics. The same paper also describes how a crowd with only 2 opposing walking directions self-organizes into two side-by-side opposing ‘streams’ of people.
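The self-organization Helbing et al. describe emerges from simple per-pedestrian rules. A minimal social-force sketch (all parameter values here are illustrative, not taken from the paper): each pedestrian is driven toward a desired velocity and exponentially repelled by nearby pedestrians.

```python
import numpy as np

def social_force(pos, vel, desired_vel, neighbours, tau=0.5, a=2.0, b=0.3):
    """Net force on one pedestrian; tau, a, b are illustrative parameters."""
    drive = (desired_vel - vel) / tau            # relax toward desired velocity
    repulsion = np.zeros(2)
    for other in neighbours:
        diff = pos - other
        dist = np.linalg.norm(diff)
        # Repulsion decays exponentially with distance, pointing away from the other.
        repulsion += a * np.exp(-dist / b) * diff / dist
    return drive + repulsion

# A pedestrian walking right, slower than desired, with someone ahead and slightly
# to the left of their path: the net force slows the approach and pushes them aside.
f = social_force(pos=np.array([0.0, 0.0]),
                 vel=np.array([1.0, 0.0]),
                 desired_vel=np.array([1.3, 0.0]),
                 neighbours=[np.array([0.5, 0.1])])
```

Summing such forces over many agents, and integrating, is what produces the lane formation described above; a guide robot navigating alongside a unidirectional crowd can be modelled as one more agent in the same framework.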


A vehicle using a rover-like configuration, as shown below, is also discussed. The front wheel makes use of a spring to ensure contact, and the middle wheels are mounted on bogies to allow the vehicle to passively adhere to the surface. This kind of setup can traverse obstacles as large as 2 times its wheel diameter.
López then expanded on this finding by saying that the robot, in this crowd, could roughly be in 3 distinct scenarios: the robot could walk along with a unidirectional crowd, it could walk in the opposite direction of a unidirectional crowd, or it could walk perpendicular to the unidirectional crowd. All of these have an application when navigating the university. López recommended that our research focus on only 1 of these scenarios, since they all need different behavioural models unless a general navigation method is found.
[[File:Rover.png|center|thumb|Robot designed for search and rescue work. Uses a bogie system to adhere and traverse rough terrain<ref>Wang, Z. and Gu, H. (2007), "A review of locomotion mechanisms of urban search and rescue robot", ''Industrial Robot'', Vol. 34 No. 5, pp. 400-411. <nowiki>https://doi.org/10.1108/01439910710774403</nowiki></ref>]]
<br />


=====Gap creeping robots=====
To summarize, for the guide to efficiently navigate tight spaces like hallways, or to a lesser extent doorways, it is required to be able to navigate dense crowds which behave in a unidirectional manner. In navigating such a crowd, different approaches can be taken depending on the walking direction of the crowd and the guide.
I took the liberty of skipping this section, since it was mainly focused on robots purely able to move through pipes, vents, etc., which is not applicable for our purposes.


=====Serpentine robots=====
On López’s recommendation, it was decided to narrow the behavioural research down to only walking along with a unidirectional crowd, since this is the most standard case.
The first robot explored is a mechanical arm with multiple degrees of freedom, which is capable of holding small objects with its front end and also has a small camera attached there. Being a mechanical arm, it is not truly capable of locomotion, but it still has its uses in rescue work, where environments are fragile and sometimes small, and a small cross-section could help. The robot is controlled using wires which run throughout the body and are actuated at its base.


=====Leg based systems=====
To conclude this section, the final research question is defined as ‘How should robots socially navigate through unidirectional pedestrian crowds while guiding visually impaired users?’.<br />
The paper describes a few leg-based designs, the first of which was created for rescue work after the Chernobyl disaster. This robot spans almost 1 metre and is able to climb vertically using the suction cups on its 8 legs. While doing so, it is able to carry up to 25 kg of load. It can also handle transitions between horizontal and vertical terrain, and it is able to traverse concave surfaces with a minimum radius of 0.5 metres.
==USE analysis of the crowd navigation technology==
This section will discuss the relevance and the impact of a safe crowd-navigating guidance robot on users, society at large, and enterprises.


=====Conclusion=====
===Users===
This paper concludes by evaluating all the prior robots and their real-life application in search and rescue work. This is not directly relevant for autonomous crawl-space scanning, except that it may indicate why none of the prior robots would be suitable for search and rescue work, due to the unstructured environment and the limitations of their autonomous workings.
The robot has a number of possible users, but for this design two types are distinguished:


====Variable Geometry Tracked Vehicle, description, model and behavior====
*The visually impaired handler of the robot
This paper presents a prototype of an unmanned ground variable geometry tracked vehicle (VGTV) called B2P2. The remote-controlled vehicle was designed to venture through unstructured environments with rough terrain. The robot is able to adapt its shape to increase its clearing capabilities. Unlike traditional tracked vehicles, the tracks are actively controlled, which allows it to more easily clear some obstacles.
*The other persons participating in the crowd


The paper starts by stating that robots capable of traversing dangerous environments are useful, particularly ones which are able to clear a wide variety of obstacles. It states that to pass through small passages a modest form factor is preferable. B2P2 is a tracked vehicle making use of an actuated chassis, as seen in a prior image.
In the Netherlands around 2.7% of the population has severe vision loss, including blindness<ref>Country - The International Agency for the Prevention of Blindness (iapb.org)</ref>. That is over 400 thousand people who, in a new environment where only a room number is given, do not know which route to walk. There are aids such as a guide dog or a cane, but those ensure blind people do not collide with the environment rather than guiding them to an unknown location in new surroundings. So, a device is needed that guides visually impaired people to a location on campus they have never been, such as meeting room MetaForum 5.199. To guide them to this meeting room, navigation through crowds is needed.


The paper states that localization of the centre of gravity is useful for overcoming obstacles. However, since the shape of the robot isn’t fixed, a model of the robot is necessary to find the centre of gravity as a function of its actuators. Furthermore, the paper explains how the robot's geometry is controlled, which consists of the angle at the middle axis and the tension-keeping actuator between the middle and last axes. Both are explained using a closed-loop control diagram.
As mentioned, modern robots have a freezing problem when walking into crowds, which is not optimal when walking within the sometimes-dense crowds on the TU/e campus. That is why nudging, and sometimes bumping, is needed. The challenge here is to guide the handler as smoothly as possible while occasionally nudging and bumping into third persons.


Approaching the end of the paper, multiple obstacles and the associated clearing strategies are discussed. I would suggest skimming through the paper to view these, as they use multiple images. To keep it brief, the authors discuss how to clear a curb, a staircase, and a bumper. The takeaway is that being able to un-tension the track gives the vehicle more control over its centre of gravity and allows it to increase friction on protruding ground elements (avoiding the typical seesaw-like motion associated with more rigid track-based vehicles).
As the plan was to design a physical robot with inspiration taken from the CaBot, a lot of inspiration is taken from their user research for visually impaired people. On top of that, research has been done into guide dogs and their ways of guiding.


In my own opinion these obstacles highlight 2 problems:
For third persons around the robot and handler, research has been done mainly focused on the touching and nudging aspect of the robot, to see what reactions a touching robot may elicit, the safety of this concept, and the ethics of robotic touch.


*The desired tension on the tracks is extremely situationally dependent. There are 3 situations in which it could be desirable to lower the tension: first, if it allows the robot to avoid flipping over by controlling its centre of gravity (seen in the curb example); secondly, it could allow the robot to more smoothly traverse pointy obstacles (e.g. stairs, as shown in the example); thirdly, having less tension in the tracks could allow the robot to increase traction by increasing its contact area with the ground. This context-dependent tension requirement makes it seem to me that fully autonomous control is a complex problem which most likely falls outside the scope of our application and this course.


*The second problem is that releasing tension could allow the tracks to derail. This problem, however, could be partially remedied by adding some guide rails/wheels at the front and back, which would confine the problem to only the middle wheels.
Secondary users include institutions that provide the robot for visually impaired people to navigate through their buildings, such as universities, government buildings, shopping malls, offices, or museums.


The last thing I want to note is that if the sensor used to map the room were attached to the second axis, it would be possible to alter the sensor’s altitude to create different viewpoints.
===Society===
====Stepper motors: fundamentals, applications and design====
As mentioned above, 2.7% of the population suffers from severe vision loss; however, there are many more benefits to a robot that can safely and quickly navigate through a crowd. Any robot that has a mobile function in society will at some point encounter a crowd, whether that is a dense or sparse crowd, or simply people blocking an entry or hallway. Consider robots that work in social services such as restaurants, delivery robots, or guide robots for people other than the visually impaired, for example at museums or shopping malls.
This book goes over what stepper motors are, the variations of stepper motors, and their make-up. Furthermore, it goes in depth on how they are controlled.


====Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities====
For these robots, it is important that they can safely traverse crowds in the quickest way possible. The solution investigated and presented here is a good step in that direction. Of course, each of these robots would need a different design in order to properly execute its function, but the strength lies in the social algorithm, where the robot moves through a crowd in a different way than robots do now.
According to the authors, advances in visual-inertial odometry (VIO), the process of determining the pose and velocity (state) of an agent using the input of cameras, have opened up a range of applications such as AR and drone navigation. Most VIO systems use point clouds, and to provide real-time estimates of the agent’s state they create sparse maps of the surroundings using power-heavy GPU operations. In the paper the authors propose a method to incrementally create a 3D mesh from the VIO optimization while bounding memory use and computational power.


The authors' approach is to create a 2D Delaunay triangulation from tracked keypoints and then project it into 3D. This projection can have issues where points are close in 2D but not in 3D, which is solved with geometric filters. Some algorithms update a mesh every frame, but the authors try to maintain a mesh over multiple frames to reduce computational complexity, capture more of the scene, and capture structural regularities. Using the triangular faces of the mesh, they are able to extract geometry non-iteratively.
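The 2D-to-3D mesh construction can be sketched with an off-the-shelf Delaunay triangulation (using scipy; the keypoints and depth values below are hypothetical, and this omits the geometric filtering step):

```python
import numpy as np
from scipy.spatial import Delaunay

# Triangulate tracked keypoints in the 2D image plane.
rng = np.random.default_rng(3)
keypoints_2d = rng.random((30, 2))          # tracked feature locations in the image
tri = Delaunay(keypoints_2d)                # 2D Delaunay triangulation

# Pretend each keypoint also has an estimated depth from VIO (hypothetical values);
# lifting the 2D vertices with their depths yields a 3D mesh with identical faces.
depths = 1.0 + rng.random(30)
vertices_3d = np.column_stack([keypoints_2d, depths])
faces = tri.simplices                       # (n_triangles, 3) vertex indices
```

The geometric filters the paper mentions would then discard faces whose vertices are adjacent in 2D but far apart in depth, since those triangles span a depth discontinuity rather than a real surface.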
Specifically for visually impaired people, such navigation helps with their accessibility and inclusivity in society. Implementing a robot such as this will allow them to be a more integral part of society without having to rely on other people.


In the next part of the paper they discuss solving the optimization problem derived from the previously mentioned specifications.
===Enterprise===
For enterprises that might employ these robots there are two advantages. The use of the robot will enable visually impaired customers to have better access to any services the companies might provide. In addition, they will have a competitive advantage over competitors that do not provide such a robot or service. For example, a shopping mall would improve its accessibility, which would translate into more customers, whereas government buildings would improve general satisfaction.


Finally, the authors share some benchmarking results on the EuRoC dataset, which are promising: in environments with regularities like walls and floors the method performs optimally. The pipeline proposed in this paper provides increased accuracy at the cost of some calculation time.
Specifically for universities such as the TU/e, in addition to attracting more students, it improves their public image by showing the effort to make higher education possible and easier for all people. An advantage over other solutions, such as a human guide, is that no new employees need to be trained. No big infrastructure changes, such as extra cameras or sensors throughout the building, are needed, unlike for other types of robots or navigators. And lastly, there is no issue of a failing connection with, for example, a smartphone.
====Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization====
In the robotics community, visual and inertial cues have long been fused using filtering; however, this requires linearity, while non-linear optimization for visual SLAM increases quality and performance and reduces computational complexity.


The contributions the authors claim to bring are: constructing a pose graph without expressing global pose uncertainty, providing a fully probabilistic derivation of IMU error terms, and developing both hardware and software for accurate real-time SLAM.
==Project Requirements, Preferences, and Constraints==


The paper describes in high detail how the optimization objectives were reached and how the non-linear SLAM can be integrated with the IMU using a chi-square test instead of a RANSAC computation.
===Creating RPC criteria===


Finally, they show results of a test with their developed prototype, which shows that tightly integrating the IMU with a visual SLAM system really improves performance and decreases the deviation from the ground truth to close to zero percent after 90 m of distance travelled.
===='''Setting requirements'''====
====Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry====
The most important thing in building a robot operating in public spaces is to make it complete its tasks in a safe manner, harming neither bystanders nor the user. Most hazards in robot-human (or human-robot) interactions in pedestrian spaces derive from physical contact<ref name=":1">Salvini, P., Paez-Granados, D. & Billard, A. Safety Concerns Emerging from Robots Navigating in Crowded Pedestrian Areas. ''Int J of Soc Robotics'' 14, 441–462 (2022). <nowiki>https://doi.org/10.1007/s12369-021-00796-4</nowiki></ref>. This problem is even more present when working in crowded spaces, where physical contact is impractical or impossible to avoid. Therefore, the robot has to be made physically safe: typical touches, swipes, and collisions are made non-hazardous. The term ‘physically safe’ will be abbreviated to ‘touch safe’ to make its meaning more apparent.
The authors of this paper propose an algorithm that fuses feature tracks from any number of cameras together with IMU measurements into a single optimization process, handles feature tracking on cameras with overlapping FOVs, and includes a subroutine that selects the best landmarks for optimization, reducing computational time; they also present results from extensive testing.


First the authors give the optimization objective, after which they give the factor-graph formulation with residuals and covariances of the IMU and visual factors. Then they explain how they approach cross-camera feature tracking: the feature's location in one camera is projected into the other using either stereo camera depth or an IMU estimate, and the projection is then refined by matching it to the closest image feature (by Euclidean distance) in the target camera. After this it is explained how feature selection is done: a Jacobian matrix is computed, and then a submatrix is found that best preserves the spectral distribution.
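The landmark-budgeting idea can be approximated with a greedy selection over rows of the Jacobian, scoring each candidate subset by the log-determinant of its information matrix (our simplification of the spectral-preservation criterion; all names and values below are illustrative):

```python
import numpy as np

# 40 candidate landmarks, each contributing one (here: 1-row) block of a
# Jacobian over a 6-dimensional state; values are random for illustration.
rng = np.random.default_rng(4)
jacobian = rng.standard_normal((40, 6))

def greedy_select(j, budget):
    """Greedily pick `budget` rows maximizing logdet(J_sub^T J_sub)."""
    chosen = []
    for _ in range(budget):
        best, best_score = None, -np.inf
        for i in range(len(j)):
            if i in chosen:
                continue
            sub = j[chosen + [i]]
            # Regularize so the determinant is defined while few rows are chosen.
            score = np.linalg.slogdet(sub.T @ sub + 1e-6 * np.eye(6))[1]
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

selected = greedy_select(jacobian, budget=10)
```

This greedy log-det criterion favours landmarks that constrain directions of the state not yet well observed, which is the intuition behind keeping the submatrix spectrally close to the full Jacobian.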
If the robot somehow exhibits unsafe behaviour, the user should be able to easily stop it with an emergency stop. Because the robot is able to make physical contact and apply substantial force, it becomes even more paramount that rogue behaviour is easily stopped if it occurs.


Finally, experimental results show that their system is closer to the ground truth than other similar systems.
When interacting with the user, the robot should make them feel safe and thus allow trust in the robot. If the user does not feel safe, they cannot trust the robot and might become unnecessarily anxious or stressed, with the result that the user may avoid its services. Besides this, the user might display unpredictable or counter-productive behaviour, e.g., walking excessively slowly or not following the robot. To this end, the robot should be able to communicate its intent to the user so that they won’t have to be on edge all the time.


====Optical 3D laser measurement system for navigation of autonomous mobile robot====
For the robot to be viable in practice there are some restrictions, like making the robot relatively cheap, since the budget is not unlimited and competing solutions like human guides exist for a set price; too large a price would make robot guides obsolete. Our use case also has restrictions on infrastructural modifications to the campus buildings of the TU/e, as a previous solution was rejected for this reason; installing waypoints all over the buildings was too much of an investment.
This paper presents an autonomous mobile robot which, using a 3D laser navigation system, can detect and avoid obstacles on its path to a goal. The paper starts by describing the navigation system (TVS) in high detail. The system uses a rotatable laser and a scanning aperture to form laser-light triangles from the light reflected off an obstacle. Using this method the authors were able to obtain the information necessary to calculate 3D coordinates. For the robot base, the authors used the Pioneer 3-AT, a four-wheel, four-motor skid-steer robotics platform.


After this the authors go in depth on how the robot avoids obstacles. Using optical encoders on the wheels and a 3-axis accelerometer, the robot keeps track of its travelled distance and orientation. With IR sensors the robot can detect obstacles a certain distance in front of it, after which it performs a TVS scan to avoid the obstacle. The trajectory the robot follows to avoid the obstacle is calculated using 50 points in the space in front of it, which are used to form a curve that the robot then follows. Thus, after the robot starts up, it calculates an initial trajectory to the goal location and recalculates the trajectory whenever it encounters an obstacle.
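The curve-forming step could be sketched by fitting a smooth polynomial through sampled avoidance points; the sample values, polynomial degree, and function name are illustrative assumptions, since the paper's exact curve construction is not given here:

```python
import numpy as np

def avoidance_curve(points, degree=3):
    """Fit a smooth curve through sampled avoidance points (Nx2 array),
    analogous to the paper's curve through 50 points ahead of the robot."""
    pts = np.asarray(points, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
    return np.poly1d(coeffs)

# Hypothetical sampled points: lateral offset peaks near an obstacle at x = 2.5 m
xs = np.linspace(0.0, 5.0, 50)
ys = 0.5 * np.exp(-((xs - 2.5) ** 2))
curve = avoidance_curve(np.column_stack([xs, ys]))
print(float(curve(2.5)))  # lateral offset the fitted curve gives near the obstacle
```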
===='''Setting preferences'''====
Finally the authors go over their results from simulating this robot in MATLAB and analyse its performance.
The robot should not slow down its user when avoidable, so an average speed of 1 m/s (the average walking speed of visually impaired users<ref name=":2" />) would be a good goal.


<s>====Design and basic experiments of a transformable wheel-track robot with self-adaptive mobile mechanism====
For the robot to reach its goal efficiently it should avoid stopping for people. Further reasons to avoid stopping are that a constant walking speed requires less mental strain from the user, and that stopping in pedestrian spaces creates hazards, such as surprising and hitting the person behind the user<ref name=":1" />.
This paper shows the design process of a fully mechanical track powered by one servo. The mechanism is quite complicated, but the design process offers a lot of information on designing for rough terrain. The paper focuses on efficiency so that fewer motors are needed, which is important because the robot can then be smaller.


The robot has an adaptive drive system: when moving over rough terrain, its mobile mechanism obtains constraint-force information directly instead of using sensors. This information can be used to move efficiently by changing locomotion mode.
===='''Setting constraints'''====
[[File:T1f1.gif|center|thumb| Design diagram delivered by the paper <ref>Design and basic experiments of a transformable wheel-track robot with self-adaptive mobile mechanism | IEEE Conference Publication | IEEE Xplore</ref>]]
For the robot to operate in our specified use case it should be able to navigate campus. This involves navigating narrow walk bridges as well as wide-open spaces with different walking routes. Interaction with elevators or stairs will not be a focus of this research.


[[File:T1f2.gif|center|thumb|The concept design with wheel and track on each side<ref>Design and basic experiments of a transformable wheel-track robot with self-adaptive mobile mechanism | IEEE Conference Publication | IEEE Xplore</ref>]]
===<u>RPC-list</u>===


It is composed of a transformable track and a drive-wheel mechanism, with the following modes:
====Requirements====
[[File:T1f3.gif|center|thumb|The three different ways in which the track can drive and will switch to based on mechanisms<ref>Design and basic experiments of a transformable wheel-track robot with self-adaptive mobile mechanism | IEEE Conference Publication | IEEE Xplore</ref>]]


After this they mainly present the mechanical details of how their design works, with formulas and CAD models.
*Safety
**Touch proof
**Does not harm bystanders or the user
**Installed emergency stop
*User feedback/interaction
**Should give feedback about intentions to user
**Robot must be able to receive feedback and information from user
**Handler should feel safe based on interaction with robot
*Implementable
**Relatively cheap
**No infrastructural changes in buildings


They performed three experiments:
====Preferences====
*Moving on even and uneven roads
*Overcoming obstacles by track
*Overcoming obstacles of different heights by wheels
The robot was able to overcome obstacles of 120 mm height while itself having a maximum height of 146 mm.


=====Conclusion=====
*1 m/s (3.6 km/h) walking speed should be reached<ref name=":2">CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People (acm.org)</ref>
Basic experiments have proven that the robot is adaptable over different terrain. The full mechanical design shows promise for our work and goals.
*Does not stop for people unnecessarily


====Rough terrain motion planning for actuated, Tracked robots====
====Constraints====
This paper proposes two-step path planning for moving over rough terrain: first, consider the robot's operating limits to find a quick initial path; then refine the segments identified as passing through rough areas.


[[File:T2f1.webp|center|thumb|Schematic overview on how path planning for rough terrain can be achieved<ref>Rough Terrain Motion Planning for Actuated, Tracked Robots | SpringerLink</ref>]]
*Environment (TU/e campus)
**Narrow walk bridges/hallways
**Big open spaces


The terrain is perceived using a camera with image processing. An obstacle cannot be overcome if it is too high or its inclination is too steep. The first path search uses a roughness quantification, mainly based on different weights, to prefer less risky routes. The more detailed planning is done by splitting paths into segments with flat spots and rough spots, after which a formula combining environmental risk with system safety assigns each segment a weight.
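The roughness-weighted preference can be illustrated with a simple segment-cost function; the 0.0–1.0 roughness scale and the weight factor are illustrative assumptions, not values from the paper:

```python
def segment_cost(length_m, roughness, risk_weight=4.0):
    """Cost of a path segment: length inflated by a terrain-roughness
    penalty, so the planner prefers longer but less risky routes.
    roughness is a hypothetical 0.0 (flat) to 1.0 (impassable) scale."""
    if roughness >= 1.0:          # impassable: too high or too steep
        return float("inf")
    return length_m * (1.0 + risk_weight * roughness)

flat_route = segment_cost(12.0, 0.05)   # longer but smooth
rough_route = segment_cost(9.0, 0.6)    # shorter but risky
print(flat_route < rough_route)         # True: the smooth detour wins
```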
==The solutions==
In this section the worked-out solution to the problem statement is given. The solution consists of a physical and a behavioural description of the robot. These two factors influence each other: the design has an impact on how the robot should behave while socially navigating through a crowd, while the way it navigates through a crowd determines the specific requirements of the design. Together these give a clear answer to the research question of how a robot with this specific design should socially navigate through a unidirectional crowd while guiding visually impaired users.


Furthermore, the paper provides a roadmap planner (based on A*) and an RRT* planner.
This chapter consists of a detailed explanation of the physical design of the robot. The robot is designed to adhere as closely as possible to the RPC-list. After the design is defined, the corresponding behaviour will be defined using scenarios. These scenarios are used to explain the behaviour we want to see and expect. In a broader sense, this should demonstrate how the method of navigation can be utilised to navigate effectively and safely through dense crowds.


====Realization of a Modular Reconfigurable Robot for Rough Terrain====
===Design===
This paper addresses rough terrain by using multiple modular reconfigurable robots: a robot built from multiple modules that can be disconnected from each other. Using different modules lets the robot perform different tasks better. It looks like this:
In this chapter the design of the robot model is documented. The main focuses of the design are safety and the communication of nudging to the visually impaired handler and to third persons.
[[File:T3f1.png|center|thumb|Different modules connected<ref>IEEE Xplore Full-Text PDF:</ref>]]


It can be used for steps like this:
For the design of the robot the main inspiration is the CaBot<ref name="CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People" />. This is essentially a suitcase design (a rectangular box on four motorized wheels) with all its hardware inside the box. Interestingly, it also has a vision sensor on its handle, for a higher perspective. This design is rather simple, and the easy flat terrain on the TU/e campus should be no problem for the wheels. The CaBot excels at guiding people to a new location but does not work through crowds. With safety in mind, the body design has been altered for nudging and bumping into people, and the handle design has been revamped for better communication to the user.
[[File:T3f2.png|center|thumb|How a robot with multiple modules would climb the stairs<ref>IEEE Xplore Full-Text PDF:</ref>]]


The joint between the modules can move and rotate in almost all directions, which enables the robot to traverse many kinds of terrain.
[[File:T3f3.png|center|thumb|This picture show the possibilities of the joint quite nicely<ref>IEEE Xplore Full-Text PDF:</ref>]]</s>


====A mobile robot based system for fully automated thermal 3D mapping====
====Handle design====
This paper showcases a fully autonomous robot which can create 3D thermal models of rooms. The authors begin by describing the robot's components, as well as how the 3D sensor (a Riegl VZ-400 laser scanner from terrestrial laser scanning) and the thermal camera (an Optris PI160) are mutually calibrated. Both are mounted on top of the robot, together with a Logitech QuickCam Pro 9000 webcam. After acquiring the 3D data, it is merged with the thermal and digital images via geometric camera calibration. After that the authors explain the sensor placement. The paper's approach to the memory-intensive issue of 3D planning is to combine 2D and 3D planning: the robot starts off using only 2D measurements, but once it detects an enclosed space it switches to 3D NBV (next best view) planning.
[[File:Guide arm front.png|thumb|Front view of the arm design of the guide robot to which the guided can grab on. The speed switch can be seen on the left. The settings are denoted using written numbers instead of braille because of limitations of the CAD software.]]
The 2D NBV algorithm starts off with a blank map and explores based on the initial scan, where all inputs are range values parallel to the floor, distributed over a 360-degree field of view. A grid map stores the static and dynamic obstacle information, while a polygonal representation of the environment stores the environment edges (walls, obstacles). The NBV process is composed of three consecutive steps: vectorization (obtaining line segments from input range data), creation of the exploration polygon, and selection of the NBV sensor position, i.e., choosing the next goal. Room detection is grounded in the detection of closed spaces in the 2D map of the environment. Finally the authors showcase the results of their experiments with the robot: 2D and 3D thermal maps of building floors, with the 3D reconstruction done using the Marching Cubes algorithm.
[[File:Guide arm side.png|alt=Back view of the arm design of the guide robot to which the guided can grab on. Interface utilities have not been added yet.|thumb|Back view of the arm design of the guide robot to which the guided can grab on. The upper arm, connecting to the hand-hold, has a suspension mechanism and a hinge.]]
As the robot's behaviour is focused on traversing crowds of people, one important function is part of it: how to communicate direction to its user? Any audible direction will quickly interfere with the sounds of the surroundings, which can result in missing the entire message or cause confusion. Although a headset might allow clearer communication, this is still not ideal. Therefore, the easiest way to provide feedback to the user is through the handle. The robot has a few functions that it needs to communicate to the user, or that need to be controllable by the user:


====A review of 3D reconstruction techniques in civil engineering and their applications====
*Speed
This paper presents and reviews techniques to create 3D reconstructions of objects from the outputs of data-collection equipment. First the authors surveyed the currently most used equipment for acquiring 3D data: laser scanners (LiDAR), monocular and binocular cameras, and video cameras, which is also the equipment the paper focuses on. From this they define two categories of camera-based 3D reconstruction: point-based and line-based. Furthermore, 3D reconstruction is divided into two steps in the paper: generating point clouds and processing those point clouds. For generating the point clouds:
**Setting a faster or slower speed
For monocular images: feature extraction, feature matching, camera motion estimation, sparse 3D reconstruction, model parameter correction, absolute scale recovery, and dense 3D reconstruction.
**Communicating slowing down or accelerating
Feature extraction obtains feature points, which reflect the initial structure of the scene; the algorithms used for this are feature-point detectors and feature-point descriptors.
**Emergency stop
Feature matching matches the feature points of each image pair. Camera motion estimation is used to find the camera parameters of each image. The sparse 3D reconstruction step computes the 3D locations of points using the feature points and camera parameters, generating a point cloud; this is done via the triangulation algorithm. The model parameter correction step then corrects the camera parameters of each image, leading to precise 3D locations of points in the point cloud.
Absolute scale recovery aims to determine the absolute scale of the sparse point cloud using the dimensions/points of absolute scale within it. Finally, all of the above is used to generate a dense point cloud.
For stereo images, the camera motion estimation and absolute scale recovery steps are skipped; instead the camera must be calibrated before feature extraction.
After this the authors explain how to generate point clouds from video images.
For the techniques for processing data, the authors showcase several algorithms. For point-cloud processing they use ICP; for mesh reconstruction, PSR. The point-cloud segmentation algorithms are divided into two categories: feature-based segmentation (region growth and clustering, K-means clustering) and model-based segmentation (Hough transform and RANSAC). After this the authors go in depth on applications of 3D reconstruction in civil engineering, such as reconstructing construction sites and reconstructing the pipelines of MEP systems.
Finally the authors go over the issues and challenges of 3D reconstruction.
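The triangulation step named above can be sketched with a linear (DLT-style) solver that recovers a 3D point from its projections in two views; the camera matrices and point values below are illustrative assumptions, not from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its 2D
    projections x1, x2 in two views with 3x4 camera matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)           # null-space vector of A
    X = Vt[-1]
    return X[:3] / X[3]                   # de-homogenize

# Two hypothetical cameras: identity pose, and a 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                        # normalized projection, view 1
x2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]    # normalized projection, view 2
print(np.round(triangulate(P1, P2, x1, x2), 3))    # recovers [0.5 0.2 4. ]
```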


====Analysis and optimization of geometry of 3D printer part cooling fan duct====
*Direction
This paper researched fan ducts for 3D printers and how to optimise them for maximum airflow. Of course we are not making a 3D printer, but the principles of airflow are largely the same. The paper analyses the duct based on inlet angle, outlet angle, and throat length, and concludes that an optimised inlet angle of 40 degrees and outlet angle of 20 degrees with a 3 mm throat length yield 23% more airflow.
**Turning left
**Turning right


Importantly, the fan used in this research was 27 mm, which seems feasible for a crawler that is as small as possible.
All of these functions can be placed inside the handle, while designing for minimal strain on the user's active control. The average breadth of a male adult hand is 8.9 cm<ref>''ANTHROPOMETRY AND BIOMECHANICS''. (n.d.). <nowiki>https://msis.jsc.nasa.gov/sections/section03.htm</nowiki></ref>, which means that the handle needs to be big enough for people to hold on to while also incorporating the different sensors and actuators. For white canes, the WHO<ref>WHO. (n.d.). ASSISTIVE PRODUCT SPECIFICATION FOR PROCUREMENT. At ''who.int''. <nowiki>https://www.who.int/docs/default-source/assistive-technology-2/aps/vision/aps24-white-canes-oc-use.pdf?sfvrsn=5993e0dc_2</nowiki></ref> has presented a draft product specification in which the handle should have a diameter of 2.5 cm, which will be used for the handle of the robot as well. Since the robot functions similarly to a guide dog, the handle will have a design similar to the harnesses used for guide dogs, meaning a perpendicular, although not curved, handle that will stop in place if released.<ref>dog-harnesses-store.co.uk. (n.d.). ''Best Guide Dog Harnesses in UK for Mobility Assistance''. <nowiki>https://www.dog-harnesses-store.co.uk/guide-dog-harness-uk-c-101/#descSub</nowiki></ref> To comfortably accommodate the controls and sensors described below, the total size of the handle will be 20 cm.


The results were processed in ANSYS 2021 R1 CFD. The outlet was optimised first, then the throat length, and finally the inlet, because the outlet has the biggest impact according to their prior research.
The handle, which is connected to the robot, will provide automatic directional cues without additional sensors or actuators. This simplifies the robot and makes it act more like a guide dog. As for speed, three systems would be implemented: the emergency stop, feedback about the acceleration and deceleration of the robot, and the speed control of the user. The emergency stop can be a simple sensor in the handle that detects whether the handle is currently being held; if not, the robot will automatically stop moving and stay in place. The speed can be regulated via a switch-like control, visible on the CAD render on the right. When walking with a guide dog, the selected walking speed for visually impaired people is about 1 m/s<ref name=":2" />, so with five settings (0 m/s, 0.5 m/s, 0.75 m/s, 1.0 m/s and 1.25 m/s) the user can set their own speed preference. To give feedback about the current setting, the different numbers will be detailed in braille. Furthermore, changing settings will encounter some resistance and a tactile 'click' instead of being a smooth transition. The user can, at any time, use their thumb or any other finger to quickly check the position of the switch and determine the speed setting. The 'click' provides extra security that the speed will not be accidentally adjusted without the user being aware of it. Additionally, the settings will only affect the actual walking speed after a short delay, giving the user time to revert any changes.
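The detent-plus-delay behaviour described above can be sketched as a small state machine; the delay value, detent count mapping, and class name are illustrative assumptions:

```python
# Five detent settings (m/s) and a confirmation delay before a change
# takes effect, so an accidental bump can be reverted in time.
SPEEDS = [0.0, 0.5, 0.75, 1.0, 1.25]

class SpeedSwitch:
    def __init__(self, delay_s=2.0):
        self.delay_s = delay_s
        self.active = 3          # detent index currently driving the robot (1.0 m/s)
        self.pending = 3
        self.changed_at = None

    def click(self, index, t):
        """User moves the switch to detent `index` at time t (seconds)."""
        self.pending = index
        self.changed_at = t

    def speed(self, t):
        """Speed actually commanded at time t: pending setting is applied
        only once the confirmation delay has elapsed."""
        if self.changed_at is not None and t - self.changed_at >= self.delay_s:
            self.active = self.pending
            self.changed_at = None
        return SPEEDS[self.active]

sw = SpeedSwitch()
sw.click(4, t=0.0)       # user selects the 1.25 m/s detent
print(sw.speed(1.0))     # 1.0  -> delay not yet elapsed
print(sw.speed(3.0))     # 1.25 -> change applied after the delay
```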
[[File:T4f2.jpg|center|thumb|graph for outlet angle to flowrate of air<ref>Analysis and optimization of geometry of 3D printer part cooling fan duct - ScienceDirect</ref>]]
[[File:T4f3.jpg|center|thumb|Throat length influence on flowrate<ref>Analysis and optimization of geometry of 3D printer part cooling fan duct - ScienceDirect</ref>]]
[[File:T4f1.jpg|center|thumb|Flowrate to inflow angle<ref>Analysis and optimization of geometry of 3D printer part cooling fan duct - ScienceDirect</ref>]]


====2D LiDAR and Camera Fusion in 3D Modelling of Indoor Environment====
Lastly, the robot might, for whatever reason, have to slow down while walking through the crowd: for obstacles, for other people, or to go properly with the flow of the crowd. Since this falls outside the speed setting, the user must be made aware of the robot's actions. A simple piezo haptic actuator can do the trick: placed in the middle of the handle, it will be easily detected. A code for slowing down, for example a pulsating rhythm, and a code for speeding up, a continuous vibration, will convey the actions of the robot. Of course, this is in addition to the physical pull the user feels through the handle via the arm; but because trust is so important in human-robot interactions, this additional feedback from the robot increases the confidence of the user.
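The two haptic codes can be sketched as on/off sample patterns for the actuator; the sample rate and pulse timing are illustrative assumptions:

```python
def haptic_pattern(event, duration_s=1.0, step_s=0.1):
    """Return actuator on/off samples (1 = vibrate) for the given event:
    a pulsating rhythm for slowing down, continuous vibration for speeding up."""
    n = int(duration_s / step_s)
    if event == "slow_down":
        return [1 if (i // 2) % 2 == 0 else 0 for i in range(n)]  # 2-on / 2-off pulses
    if event == "speed_up":
        return [1] * n                                            # continuous
    return [0] * n                                                # idle: no vibration

print(haptic_pattern("slow_down"))  # [1, 1, 0, 0, 1, 1, 0, 0, 1, 1]
print(haptic_pattern("speed_up"))   # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```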
This paper goes over how to effectively fuse data from multiple sensors to create a 3D model. An entry-level camera is used for color and texture information, while a 2D LiDAR is used as the range sensor. To calibrate the correspondences between the camera and the LiDAR, a planar checkerboard pattern is used to extract corners from the camera image and from the intensity image of the 2D LiDAR; thus the authors rely on 2D-2D correspondences. A pinhole camera model is applied to project 3D point clouds to 2D planes. RANSAC is used to estimate the point-to-point correspondence. Using transformation matrices, the authors match the color images of the digital camera with the intensity images. By aligning 3D color point clouds from different locations, the authors generate the 3D model of the environment. Via a WidowX turret servo, the 2D LiDAR is moved vertically for a 180-degree horizontal field of view. The digital camera rotates in both vertical and horizontal directions to generate panoramas by stitching series of images. In the third paragraph the authors go over how they calibrated the two image sources: to determine the rigid transformation between camera images and the 3D point cloud a fiducial target is used, RANSAC is used to reject outliers during the calibration process, and a checkerboard with 7x9 squares is employed to find correspondences between LiDAR and camera. Finally the authors go over their results.
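The pinhole projection the paper applies to map 3D points onto the 2D image plane can be sketched as follows; the intrinsic values (focal lengths, principal point) are illustrative assumptions:

```python
import numpy as np

# Hypothetical intrinsic matrix: fx = fy = 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates using
    the pinhole model: u = K X, then divide by depth."""
    pts = np.asarray(points_3d, dtype=float)
    uvw = (K @ pts.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# A point on the optical axis lands on the principal point
print(project([[0.0, 0.0, 2.0], [0.5, -0.2, 2.0]]))
# [[320. 240.]
#  [445. 190.]]
```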


<br />
<br />
====A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR====
[[File:Arm design sketches.jpg|alt=3 sketches of different designs of an arm for a guidance robot|thumb|3 sketches of different designs of an arm for a guidance robot]]
This paper reviews multiple SLAM systems whose main vision component is a 3D LiDAR integrated with other sensors. LiDAR, camera, and IMU are the three most used components, and each has its advantages and disadvantages. The paper discusses LiDAR-IMU coupled systems and visual-LiDAR-IMU coupled systems, both tightly and loosely coupled.
====Arm design====
 
Multiple designs were considered. The arm connects the handle to the body; it is important here that the handle height can be changed. One thing added in the name of safety was suspension, so that the movements of the robot would not jerk the arm of the guided if it were to suddenly change speed, for example when bumping or nudging. Most design iterations focused on how to integrate the suspension.
 
The first design was a straight pole from the robot body to the guided arm (as can be seen in the top sketch in the figure to the right). A problem we could see was that if the robot were to stop suddenly, it would push the arm slightly up instead of compressing the suspension. To solve this problem a joint was introduced in the middle of the arm (as can be seen in the middle sketch in the figure to the right). An alternative solution was to have the suspension only act horizontally and internalize it (as can be seen in the bottom sketch). This would allow the pole to have the same design as the first sketch without compromising on the suspension behaviour. Another plus would be that the pole would be marginally lighter due to this suspension being moved inwards.
 
We have chosen the second design as it had the intended suspension behaviour while remaining as simple as possible. This allows the mechanism to be constructed from mostly off-the-shelf parts, reducing the cost.
 
====Body Design====
For the body three main designs were considered: a square, a cylindrical form, and a cylinder whose diameter changes over its height. The square was immediately ruled out because its sharp corners make it decidedly not touch-safe. The more cylindrical shapes can more easily slide through a crowd and have less chance of hitting people hard (they allow for a sliding motion instead of a head-on collision). This left the choice between a normal cylinder, a cylinder wide at the bottom, and a cylinder wide at the top.


Most loosely coupled systems are based on the original LOAM algorithm by J. Zhang et al.; these systems are relatively new in the sense that Zhang's paper is from 2014, but there have been many advancements in this technology. The LiDAR-IMU systems often use the IMU to increase the accuracy of the LiDAR measurements, and new developments involve speeding up the ICP algorithm for combining point clouds with clever tricks and/or GPU acceleration. The LiDAR-visual-IMU systems use the complementary properties of LiDAR and cameras: vision sensors need textured environments and lack the ability to perceive depth directly, while LiDAR provides depth regardless of texture; thus the cameras are used for feature tracking and, together with the LiDAR data, allow for more accurate pose estimation.
A bottom-heavy design would help with balance: if the robot bumps into someone, it hits at the lowest point, meaning more stability. However, it may surprise people when it hits, as they might not notice the wide bottom. This is where the wide top outperforms, as it hits people around the waist/lower-back area, where a collision is more easily noticed. Furthermore, this is a more effective place to nudge people to get them out of the way (a lower hit might instead make people lift their leg rather than step aside). A drawback is that the robot is touched higher up and tips over more easily. That is why the design combines the best of both worlds: the body has its biggest diameter low down, with a big bumper so it does not tip over, and 'whiskers' of a soft, compressible foam material on top at the front to softly touch, or nudge, people if they are in the way. Research has shown that touch by a robot elicits the same response in humans as touch by humans<ref>''Using contact-based inducement for efficient navigation in a congested environment''. (2015, August 1). IEEE Conference Publication | IEEE Xplore. <nowiki>https://ieeexplore.ieee.org/document/7333673</nowiki></ref>. The material for the rest of the body is a plastic, so as not to be too hard.
[[File:Guide full.png|center|thumb|This cad design shows the oval body shape of the design. It has its biggest diameter at 30 cm high, and whiskers at 120 cm from the ground.]]
The pole on top of the body has two functions.


In contrast to the speed and low computational complexity of loosely coupled systems, tightly coupled systems sacrifice some of this for greater accuracy. One of the main points of these systems is the derivation of the error term and pre-integration formula for the IMU, which can be used to increase the accuracy of the IMU measurements by estimating the IMU bias and noise. For LiDAR-IMU systems this derivation is used for removing distortion in LiDAR scans and optimizing both measurements, with many different approaches to coupling the two devices for greater accuracy and computation speed. The LiDAR-visual-IMU systems use the strong correlation between images and point clouds to produce more accurate pose estimation.
*Visibility
*Sensors


The authors then perform comparisons on SLAM datasets, where most recent SLAM systems estimate pose very close to the ground truth, even over distances of several hundred meters.
The pole is 100 cm long, making the whole guide robot stand 220 cm tall. This helps the sensors, which get a better overview of the crowd from a higher point of view. The height also helps with noticeability in dense crowds: at eye level the robot will still be visible even when the lower body is (partially) obscured.


====An information-based exploration strategy for environment mapping with mobile robots====
===Behavioural description===
This paper proposes a mathematically oriented way of mapping environments. Based on relative entropy, the authors evaluate a mathematical way to produce a planar map of an environment, using a laser range finder to generate local point-based maps that are compiled to a global map of the environment. Notably the paper also discusses how to localize the robot in the produced global map.
The behavioural description will concern behaviour in a crowd with a singular, uniform walking direction. As mentioned before, the expected behaviour will be described using scenarios. These will first describe the standard scenario, after which two special cases are discussed. Furthermore, it will be briefly discussed how this behaviour might also benefit other crowd types or behaviour. The purpose of the behaviour is to make the robot guide someone efficiently to reach a goal while abiding by social rules/norms.  


The generated map is a continuous curve that represents the boundary between navigable spaces and obstacles. The curve is defined by a large set of control points which are obtained from the range finder. The proposed method involves the robot generating and moving to a set of observation points, at which it takes a 360 degree snapshot of the environment using the range finder, finding a set of points several specified degrees apart, with some distance from the sensor. The measured points form a local map, which is also characterised by the given uncertainty of the measurements. Each local map is then integrated into the global map (a combination of all local maps), which is then used to determine the next observation point and position of the robot in global space.
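The local-map step above (a 360-degree range snapshot converted to points, then placed in the global frame using the robot's pose) can be sketched as follows; the angular spacing and range values are illustrative assumptions:

```python
import math

def local_to_global(ranges, robot_x, robot_y, robot_heading):
    """Convert evenly spaced 360-degree range readings into global (x, y)
    points, using the robot's pose to place the local map in the global frame."""
    n = len(ranges)
    points = []
    for i, r in enumerate(ranges):
        theta = robot_heading + 2 * math.pi * i / n   # bearing of reading i
        points.append((robot_x + r * math.cos(theta),
                       robot_y + r * math.sin(theta)))
    return points

# Four readings at 90-degree spacing from a robot at (1, 1) facing +x
pts = local_to_global([2.0, 1.0, 2.0, 1.0], 1.0, 1.0, 0.0)
print([(round(x, 2), round(y, 2)) for x, y in pts])
# [(3.0, 1.0), (1.0, 2.0), (-1.0, 1.0), (1.0, 0.0)]
```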
It is important to note that joining, and leaving these crowds require different behaviour (like sparse crowd navigation). These are thus not considered to fall inside of the scope of the research question.


The researchers go on to describe how the quality of the proposal is measured, namely in distance travelled and map uncertainty. The uncertainty is a function of the uncertainty in the robot's current position and the accuracy of the range finder. The robot has a pre-computed expected position and a post-measurement position for each point, which are evaluated through relative entropy to compute the increment of point information. This and similar equations for the robot's position data are used to select the optimal points for observing the environment.
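The relative-entropy comparison between expected and post-measurement point distributions can be illustrated with 1D Gaussians; this simplified scalar form and its values are assumptions for illustration, not the paper's full formulation:

```python
import math

def kl_gauss(mu0, var0, mu1, var1):
    """KL(N(mu0, var0) || N(mu1, var1)) for 1D Gaussians: the relative
    entropy between a post-measurement and a predicted point distribution."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1
                  - 1.0 + math.log(var1 / var0))

# A measurement that sharply reduces a point's variance carries more
# information than one that barely changes it.
big_gain = kl_gauss(0.0, 0.04, 0.0, 1.0)   # prior var 1.0 -> posterior 0.04
small_gain = kl_gauss(0.0, 0.9, 0.0, 1.0)  # prior var 1.0 -> posterior 0.9
print(big_gain > small_gain)  # True
```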
First, the standard navigation method will be discussed and how it functions in most scenarios.
Lastly, the points from each observation point are combined into one map using the robot's position data.
====The standard scenario====
López suggested that to navigate, the guide should check where it ''can'' walk, not where it cannot. He also suggested following a lead of some kind could make navigation in unidirectional crowds easier. These traits have been used to define the standard scenario.


====Mobile Robot Localization Using Landmarks====
In this scenario the robot uses its LIDAR technology to follow a moving point cloud (i.e., the lead) in front of it. This point cloud could be one person or even a whole group. Regardless of this, the point cloud will always indicate the end of the guide's free walking space (space where nothing else stands in its way). It can thus be said that between this lead and the guide, there will in most cases, always be free walking space. As the lead walks in front of the guide it will continuously be creating a space in the crowd behind it, and in front of the robot, where the guide can move.  
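The lead-following idea can be sketched by steering toward the centroid of the nearest point cloud ahead of the guide; the crude distance-threshold "clustering" and all values are assumptions for illustration, and a real system would use a proper clustering method on the LIDAR returns:

```python
import numpy as np

def lead_target(points, max_range=3.0):
    """Treat LIDAR returns ahead of the guide (x > 0) and within range as
    the lead point cloud, and return its centroid as the steering target."""
    pts = np.asarray(points, dtype=float)
    ahead = pts[(pts[:, 0] > 0) & (np.linalg.norm(pts, axis=1) < max_range)]
    if len(ahead) == 0:
        return None                      # no lead: fall back to other behaviour
    return ahead.mean(axis=0)            # centroid, whether one person or a group

# Hypothetical scan: a small group about 2 m ahead, plus a far wall return
scan = [(2.0, -0.3), (2.1, 0.0), (2.0, 0.3), (8.0, 0.0)]
print(lead_target(scan))  # centroid of the three nearby returns
```

Note that the centroid is the same whether the cloud is one person or a group, matching the robustness argument above.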
The paper discusses a method to determine a robot's position using landmarks as reference points. This is a more absolute system than purely inertia-based localization. The paper assumes that the robot can identify landmarks and measure their positions relative to each other. Like other papers, it highlights the importance of this due to error accumulation in relative methods.


It highlights the robot's capability to:
The robot cannot see the difference between one person or a group, this will make the robot more robust as small details in people's behaviour will not affect the guide's actions.
*Find landmarks
*Associate landmarks with points on a map
*Use this data to compute its position


It uses triangulation between three landmarks to find its position with low error. The paper also discusses how to re-identify landmarks that were misjudged, using new data. The robot takes two images (using a reflective ball to create a 360-degree image) and solves the correspondence problem (identifying an object from two angles) to find its location. In the paper, the technique is tested in an office environment.
===='''Scenario 1: Cut off'''====
While in the standard scenario, someone or something starts to insert itself in between the guide and the previously identified leading cloud. Multiple different sub-scenarios of this will be discussed. In this scenario we will consider a crowded space with approximately 0.8 persons/m<sup>2</sup> (which nears the shoulder-to-shoulder crowds found in <ref>Trautman, P., Ma, J., Murray, R. M., & Krause, A. (2015). Robot navigation in dense human crowds: Statistical models and experimental studies of human–robot cooperation. ''The International Journal of Robotics Research'', ''34''(3), 335-356.</ref>), where the people move alongside each other. Since the third person inserts from the side, it may not be assumed that only the feelers of the robot make contact. This means more severe consequences may follow.


The paper discusses how to perform triangulation using an external coordinate system for the localisation of the robot. The vectors to the landmarks are compared, and using their angles and magnitudes the position can be computed. Next, the paper discusses the same technique adjusted for noisy data. It uses least squares to derive an estimate, evaluating the robot's rotation relative to at least 2 landmarks.
====='''Decision making criteria'''=====
The paper then evaluates the expected distribution in angle-error and position on each axis, to correct for the noise, using the method described above.
The decision making of the guide should depend on the intentions of the third person, the effects of their actions on the guide(d), and the effects on themselves.


====A Staircase and Slope Accessing Reconfigurable Cleaning Robot and its Validation | IEEE Journals & Magazine====
By far the most difficult thing is to determine the intentions of the third person. Are they trying to insert themselves in front of the robot, or are they simply drifting in front? Since their mind cannot be read, it seems reasonable to base the decision purely on the latter 2 decisive factors: the effects on the guide(d) and the effects on the person inserting themselves.
This mechanism is made for cleaning robots but could also be used for a guide robot. It drives on 4 wheels, each with its own motor. The front and back sections can translate linearly, straight up and down. This makes ascending more stable for robots with large dimensions, because the body can stay mostly level, and it can descend staircases as shown in the figure below:


[[File:T5f1.gif|center|thumb|Shows the locomotion of this robot type. Picture from<ref>S-Sacrr: A Staircase and Slope Accessing Reconfigurable Cleaning Robot and its Validation | IEEE Journals & Magazine | IEEE Xplore</ref>]]
====='''Guide’s options'''=====
There are 3 options the robot can take in any given scenario:
{| class="wikitable"
|Effects ↓   Action →
|Bump
|Make way
|Move to the side
|-
|Effects on the guide(d)
|<nowiki>- Little to no travel  delay</nowiki>


The locomotion mechanism is holonomic, so it can move in any direction. The robot, however, descends backwards, so whether moving up or down it keeps the same orientation relative to the floor: the front is used for moving up and the back for moving down.
- Depending on the severity of the impact  it might result in the robot having a sudden change in speed, inconveniencing  the guided.
|<nowiki>- The robot might have to  slow down temporarily which might inconvenience the guided.</nowiki>


All hardware and details about the mechanical design are given.
- The robot might have to slow down permanently due to a change in the lead's walking speed, leading to a higher travel time.


During the experiments the robot was very stable. It still has room to improve, mainly with a feedback controller. The authors however did not give the velocity of climbing stairs or slopes.
- Other people might also try to slip in  front leading to multiple delays.
|<nowiki>- The guided might incur  a travel delay due to the perpendicular movement.</nowiki>


====The Fuzzy Control Approach for a Quadruped Robot Guide Dog====
- Too much  side-to-side movement might lead to sporadic guidance to the guided.
This paper essentially builds a robot guide dog. Think of Spot from Boston Dynamics with a leash, trained to guide blind people. A benefit is that Spot has proven able to walk stairs, so it should be fast. The problem is that guiding blind people with it is hard.
This is their design:


[[File:T6f1.webp|center|thumb|How the robot dog looks with mechanical components. image from<ref>The Fuzzy Control Approach for a Quadruped Robot Guide Dog | SpringerLink</ref>]]
- The guide will have to make accurate  decisions when sliding in front of someone else which might lead to  unexpected problems or delays.
|-
|Effects on the person  inserting themselves
|<nowiki>- They make physical  contact with the robot resulting in a risk of injury depending on the  severity.</nowiki>


The paper also gives a ‘fuzzy’ control process which ensures that variation in road surfaces does not affect the dog. The rest of the paper shows how this controller can be designed; it does not show how to guide a blind person.
- They might be surprised by the robot  resulting in unpredictable scenarios.


Their conclusion shows that the fuzzy algorithm improved how smoothly the dog walked.
- They might not be able to return to their original  spot in the crowd resulting in unpredictable consequences.
|<nowiki>- None</nowiki>
|<nowiki>- None</nowiki>
|}


====Design of a Portable Indoor Guide Robot for Blind People====
====='''Scenario variables'''=====
This design approaches the guide-dog replacement differently: not with a quadruped robot. It is mainly aimed at indoor use. The paper also did some research on what blind people need; a survey they conducted, for example, says that 90% of respondents worry about obstacles in the air while travelling. This is the design they came up with:
It can be seen that the effect of any action is very context-dependent, and as such a well-made decision will only be possible if the guide is well-informed. Assuming this is the case for now, we can set up 4 factors which will determine the way the robot should act:


[[File:T7f1.gif|center|thumb|All hardware parts of the guide robot dog. Image from<ref>Design of a Portable Indoor Guide Robot for Blind People | IEEE Conference Publication | IEEE Xplore</ref>]]
1. The relative normal speed of the third person


This robot is foldable and has an unfolded height of 700 mm. Furthermore, the mechanical design is well explained. This design has no real stair-climbing capabilities.
2. Their relative perpendicular speed


[[File:T7f2.gif|center|thumb|Sensor vision of the robot dog. Image from<ref>Design of a Portable Indoor Guide Robot for Blind People | IEEE Conference Publication | IEEE Xplore</ref>]]
3. The third person’s space to act


This image shows how the robot should be controlled. The authors also give a whole framework for path planning with cost functions, and test the traction.
4. The robot’s space to act


The conclusion stated that the robot did well and that it was a low-cost, convenient-to-carry blind-guide robot with strong perception.
From this, 4 behavioural tables can be set up:


====Guiding visually impaired people in the exhibition====
====='''Scenario 1: expected behaviour'''=====
This paper talks about a robotic guide used to help (partially) blind people navigate an exhibition (a noisy, crowded (4 square meters/person), unfamiliar environment). These people are often faced with the challenge of maintaining spatial orientation; ‘the ability to establish awareness of space position relative to landmarks in the surrounding environment’. The paper proposes that supporting functional independence of these people can thus be achieved by ‘providing references and sorts of landmarks to enhance awareness of the surroundings’.
The following scenarios might seem excessive, since the robot will most likely not be a rule-based reflex agent. This detailed model should however inform our decision-making in the design of the robot, as well as the evaluation of the simulation. The following behavioural tables list the guide's relative forward speed on the left, while the top gives the third person's speed in the direction perpendicular to the guide's walking direction.


The technology used in this paper to achieve this is a handheld device capable of radio-frequency localization. To prepare the environment, an RFID sensor was placed per 300 square meters (~17x17 m area) at points of interest, services, and major areas. The paper does not go into the details of how the localization is done, but an educated guess would be that the guiding devices carried by the guided persons are scanned by these fixed sensors, which then communicate to calculate the position of the guided. Keep in mind this exhibition took place in 2006; they achieved a resolution of 5 meters (the minimal distance between distinguishable tags).
'''The third person and the robot are capable of making way'''
{| class="wikitable"
|
|Low perpendicular speed
|Medium perpendicular speed
|High perpendicular speed
|-
|Smaller forward speed
|Robot should make way
|Robot should make way, as people think it shows manners and awareness.
|Robot should make way, as people think it shows manners and awareness.
|-
|Same forward speed
|Robot does not make way
|Robot should make way, but prevent heavy braking; soft pushing is still an option if the gap is too narrow
|Robot should make way, but prevent heavy braking; soft pushing is still an option if the gap is too narrow.
|-
|Larger forward speed
|Robot does not make way
|Robot does not make way
|Robot does not make way, but tries to soften the impact by moving along the perpendicular direction of the third person
|}
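As noted above, the final robot will most likely not be a rule-based reflex agent, but for simulation purposes the table can be sketched as a lookup. The function name, the coarse speed categories, and the action labels below are our own illustrative assumptions mirroring the table:

```python
def scenario1_action(rel_forward, perp_speed):
    """Action when both the third person and the robot can make way.
    rel_forward: 'smaller' | 'same' | 'larger' (guide vs third person)
    perp_speed:  'low' | 'medium' | 'high'
    Returns a short action label mirroring the behavioural table."""
    if rel_forward == "smaller":
        return "make way"                     # shows manners and awareness
    if rel_forward == "same":
        if perp_speed == "low":
            return "hold course"              # robot does not make way
        return "make way, avoid heavy braking"
    # larger forward speed: hold course; at high speed, soften the impact
    if perp_speed == "high":
        return "hold course, soften impact by moving along"
    return "hold course"
```

The same pattern extends to the other three tables (only the third person, only the robot, or neither capable of making way).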
[[File:Scenario 2 inserting file.png|alt=Depiction of a third person inserting themselves between the guide and the lead.|thumb|Depiction of a third person inserting themselves between the guide and the lead. The circles represent people and the guide. The arrows indicate the direction they are moving. Grey is a normal crowd member, red is the third person cutting off the guide, dark blue is the guide, and light blue is the guided.]]
'''Only the third person is capable of making way (see figure to the right)'''


The interface of the device makes use of hardware buttons, which they find a solution suited for visually impaired people. Apart from standard navigation and audio control buttons, the device was also equipped with a button which gives quick access to an emergency number.
In all these scenarios the robot should use an audio cue to alert the third person that the robot cannot evade them by itself.
{| class="wikitable"
|
|Low perpendicular speed
|Medium perpendicular speed
|High perpendicular speed
|-
|Smaller forward speed
|The guide should not make way and risk impact to indicate it has no free space.
|The guide should not make way and risk impact to indicate it has no free space.
|The guide should not make way. If the impact is impending, it should try to soften it by moving in the same perpendicular direction as the third person to soften the impact.
|-
|Same forward speed
|The guide should not make way and risk impact to indicate it has no free space.
|The guide should not make way and risk impact to indicate it has no free space.
|The guide should not make way. If the impact is impending, it should try to soften it by moving in the same perpendicular direction as the third person to soften the impact.
|-
|Larger forward speed
|The guide should not make way and risk impact to indicate it has no free space.
|If impact is impending, it should move the whiskers in the direction of the third person if possible. If this is not possible, the guide should slow down to soften the impact.
|If impact is impending, it should move the whiskers in the direction of the third person if possible. If this is not possible, the guide should slow down and move in the same perpendicular direction as the third person to soften the impact.
|}
'''Only the robot is capable of making way'''
{| class="wikitable"
|
|Low perpendicular speed
|Medium perpendicular speed
|High perpendicular speed
|-
|Smaller forward speed
|Robot should make way
|Robot should make way
|Robot should make way, trying to prevent heavy braking
|-
|Same forward speed
|Robot should make way
|Robot should make way
|Robot should make way, trying to prevent heavy braking
|-
|Larger forward speed
|Robot should make way
|Robot tries to make way, preventing heavy braking
|Robot tries to make way, preventing heavy braking
|}
'''Neither are capable of making way'''


In this particular use-case the device guided people using an event-system which would ask the user if they wanted to hear a description of their environment. This event would trigger when the handheld device would recognize signals from local sensors. This description would include:
In all these scenarios the robot should use an audio cue to alert the third person that the robot cannot evade them by itself.
{| class="wikitable"
|
|Low perpendicular speed
|Medium perpendicular speed
|High perpendicular speed
|-
|Smaller forward speed
|Robot should try to make as much way as possible before making continuous contact with the person until the third person finds a way to decouple.
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should try to maintain continuous contact with the person until the third person finds a way to decouple.
|Robot should try to position itself so that the harm from the impact can be minimized. For the same reason they should move slightly to the side to soften the impact. Furthermore it should maintain the continuous contact with the person until the third person finds a way to decouple.
|-
|Same forward speed
|Robot should try to make as much way as possible
If there is not much room the robot should not bother to bump
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain contact with the person until the third person finds a way to decouple.
|Robot should try to position itself so that the harm from the impact can be minimized. For the same reason they should move slightly to the side to soften the impact. Furthermore it should maintain the continuous contact with the person until the third person finds a way to decouple.
|-
|Larger forward speed
|Robot should try to make as much way as possible before making continuous contact with the person until they naturally separate or the third person finds a way to decouple.


*an extended title
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain contact with the person until they naturally separate or the third person finds a way to decouple.
|Robot should try to position itself so that the harm from the impact can be minimized.  For the same reason they should move slightly to the side to soften the impact. Furthermore it should maintain the continuous contact with the person until they naturally decouple or the third person finds a way to decouple.
|}


*the description of the point of interest
===='''Scenario 2: Stalled lead'''====
While in the standard scenario, the lead has stopped moving. The guide may avoid the lead or nudge them. The guide is moving unidirectionally with the lead, and it is therefore assumed all impact will occur at the front of the guide.


*one or more extended descriptions
====='''Guide options'''=====
The decision making of the robot should depend on the effects on the guide(d) and on the lead. The robot may in all situations attempt the following options:
{| class="wikitable"
|Effects ↓   Action →
|Try alternative route
|Robot nudges using feelers
|Stops
|-
|Effects on the guide(d)
|<nowiki>- The robot has to make  side-to-side movement which results in a more sporadic pathing. This might  inconvenience the guided.</nowiki>


*descriptions to invite and spatially guide the user near the featured flowers and plants.
- Moving aside in a crowded space may result in the guide, or worse the guided, being pushed by other people.


The device would also describe near points of interest such as crossroads, entrances, exits, restaurants, toilets etc. such that the user can create their own mental map of their surroundings allowing them to build and follow their own path; being unconstrained by the predefined path.
- This behaviour requires more complex observation and decision-making.
|<nowiki>- Does not  always resolve the problem which leads to more delay. </nowiki>
|<nowiki>- Guide stops, causing a significant time delay.</nowiki>


To overcome noise, the user was provided with headphones. Another problem was that some users were frustrated by the silence of the device when they were not at a point of interest; this was solved by playing a message stating as much.
- People  behind guided may walk into or push them.
|-
|Effects on the stalled  lead
|<nowiki>- None</nowiki>
|<nowiki>- Person may  have to step aside or start moving.</nowiki><br />- Person  might be uncomfortable with being nudged or pushed.
|<nowiki>- None</nowiki>
|}


The device was recognized by the visually impaired users to allow them a large degree of freedom which traditional (fixed) guides do not.
====='''Scenario variables'''=====
The main variables are the following:


The authors end by saying the experience would probably be significantly improved with better localization technology.
1. Space to act for guide


====CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People====
2. Space to act for lead
This paper goes over the design of an autonomous navigation robot for blind people in unfamiliar environments. The paper also includes the results of a user study done for this product.  The robot uses a floorplan with relevant Points-of-Interest, a LiDAR and a stereo camera with convolutional neural networks for localisation, path planning and obstacle avoidance.
=====Design=====
The robot moves as a differential-steered system, with motors controlled by a RoboClaw controller, and allows users to manually push/pull it. It uses a LiDAR and a stereo camera (ZED) and is implemented with ROS (Robot Operating System). It is shaped like a suitcase so that it can blend in with the environment; held on the left side and standing slightly in front of the user, it also simulates a guide dog. This allows the robot to protect the user from collisions. For mapping, the robot relies on a floorplan with the locations of points of interest; the environment is mapped beforehand via the LiDAR, which is placed on the frontal edge of the robot. Localisation: using wheel odometry and LiDAR scanning, the robot estimates its current location by comparing real-time scans to the previously generated map using the Adaptive Monte Carlo Localisation (AMCL) package of ROS. In addition, odometry information can be computed using the LiDAR and stereo camera. Path planning: a path on the LiDAR map is planned based on the user's starting point and destination. To avoid obstacles and to navigate a dynamic environment, local, low-level pathing is implemented using the navigation packages of ROS. The robot also considers the space occupied both by itself and by the user in its path-finding, via a custom algorithm. The robot also provides haptic feedback: the authors use vibro-tactile feedback (different vibration locations and patterns) on the handle to convey the robot's intent to the user, and buttons on the handle let the user change the robot's speed. After this explanation, the paper goes over the conducted user study and its results.
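As a side note on the wheel odometry mentioned above, a minimal dead-reckoning update for a differential-drive base could look as follows. This is a textbook sketch under our own naming, not CaBot's actual code:

```python
import math

def odom_update(pose, d_left, d_right, wheel_base):
    """Dead-reckoning update for a differential-drive robot.
    pose = (x, y, theta); d_left / d_right are wheel travel distances (m)
    since the last update; wheel_base is the distance between wheels (m)."""
    x, y, theta = pose
    d = (d_left + d_right) / 2              # distance of the robot centre
    dtheta = (d_right - d_left) / wheel_base
    # midpoint approximation for the heading during the step
    x += d * math.cos(theta + dtheta / 2)
    y += d * math.sin(theta + dtheta / 2)
    return (x, y, theta + dtheta)
```

Because such updates accumulate error, AMCL periodically corrects the estimate against the LiDAR map, exactly the relative-vs-absolute trade-off discussed in the localization papers above.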


====Tour-Guide Robot====
====='''Scenario 2: expected behaviour'''=====
This paper introduces a tour-guide robot using Kinect technology. The robot follows tourists wherever they go, avoiding obstacles and providing information. The paper begins by naming some previous implementations of such tour-guide robots: Rhino, Minerva, Asimo, Tawabo, the Toyota tour guide robot, and Skycall. Kinect is used to recognize gestures and spoken commands, as well as for facial recognition. Main parts: RGB camera, 3D depth-sensing system, multi-array microphone. The platform of the robot has ultrasonic sensors to detect obstacles. RFID is used to detect the RFID cards around the museum to correctly identify an item and play the corresponding audio file. Base robot platform: Eddie.
<u>Due to the low-risk nature of nudging with the feelers, this will in all cases be the first action</u>.


====Dynamics and stability analysis on stairs climbing of wheel–track mobile robot====
If the attempt fails, however, it must be decided whether the guide should try to path around the now-blocked path or stop. If it can be seen that the lead ahead is stopping of its own volition (there is free space in front of the lead), the robot should try to navigate around the lead in most cases. If the lead is expected to start moving within a reasonable timeframe, depending on how long rerouting would take, the guide should stop. Something which hasn't been taken into account yet is the actual freedom of the guide; dense surroundings or a fast-moving crowd could prevent the guide from safely stepping aside. In these cases, the specifics and safety of the cross-flow behaviour are of importance.
This robot is capable of switching between more compact wheel-based locomotion and track-based locomotion (intended for stair climbing). The robot is depicted in the figure below. As can be seen, it is quite a complicated mechanism.
[[File:Tracked-Wheel.png|alt=A wheel-track based vehicle capable of climbing stairs|center|frame|A wheel-track based vehicle capable of transforming between 2 modes of travel (a). An explosive view of the components (b).<ref name=":0" />]]
The wheeled mode allows it to save battery while maintaining the rough-surface/stair traversal capabilities of tracked vehicles. The figure to the right shows its stair-climbing capabilities. The paper also goes into detail about how the mechanisms interact and their purposes. I think the largest problem with this design is the expansion of the track itself. The robot in the paper solves this with a spring-based track, which can also be seen in a figure to the right.
[[File:Stairclimb.png|alt=Drawing of the wheel-track vehicle climbing stairs|thumb|Drawing of the wheel-track vehicle climbing stairs<ref name=":0" />]]
[[File:Springtracks.png|alt=spring based tracked wheel|thumb|Wheel made out of a spring which when expanded puts the folded track under tension<ref name=":0" />]]
This seems to be quite a finicky part which not only has to work smoothly but will also require special care in how it is driven. The robot in the paper makes use of the re component, which is driven by the internal teeth. It has a sloped groove over its surface which, I assume, jams the spring to increase friction, allowing the tracks to be driven.


The finishing note on the mechanical design is the tail rod, whose purpose is to give the robot some extra stability when folded down; it is seen in action in photos showing the prototype climbing a staircase.
Assuming the cross-flow-behaviour to be only safe in the limited case of a sparse, normal moving crowd, the following behavioural table can be made:
{| class="wikitable"
|
|Normal moving crowd
|Fast moving crowd
|-
|Sparse crowd
|Try alternative route
|Stop
|-
|Dense crowd
|Stop
|Stop
|}
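This table translates directly into a small decision rule. The sketch below only mirrors the table for use in the simulation; the function and category names are our own:

```python
def stalled_lead_action(crowd_density, crowd_speed):
    """After a failed nudge: reroute only in a sparse, normally moving
    crowd; otherwise stop, since cross-flow movement is assumed unsafe.
    crowd_density: 'sparse' | 'dense'; crowd_speed: 'normal' | 'fast'."""
    if crowd_density == "sparse" and crowd_speed == "normal":
        return "try alternative route"
    return "stop"
```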
If the guide has stopped for a while and sees an opportunity for the lead to move, it should play a message asking for the lead to move. This also notifies the guided of the situation.


The rest of the paper goes in-depth about dynamic analyses of the kinematic model and how the robot is controlled.
<br />


====Research on Dynamics and Stability in the Stairs-climbing of a Tracked Mobile Robot====
====Generalisation====
This paper goes over the dynamics and stability of a simple tracked vehicle. In my view, the paper's most applicable part (for our objective of creating a prototype) is the 3 conditions it sets for a robot to be able to climb stairs:
The following scenarios pertain to situations where the guide does not navigate alongside a unidirectional crowd flow. Although this is outside the scope of this research, it is useful to look at what a robot with this design can add to other scenarios using touch. First, we take a short look at the possibilities of physical touch in the other scenarios sketched by López.


'''The geometric condition'''
Opposing a unidirectional crowd is slightly harder for a robot: while moving through an opposing flow, the robot depends on people moving out of the way; otherwise, no space might open up where the robot can go. Here a robot that is programmed never to touch people might stall if the crowd density is high enough. This is where light bumping might be useful: if people do not fully move out of the way, they will get a light touch.


The robot must be able to touch the stair edge and drive over the edge.
Crossing a unidirectional crowd is the hardest scenario. It might be hard for openings to appear where the robot can go, due to people coming from the side and the social implications that come with that. Does the robot give space, or does it walk on? Research has found that people find robots more social, and better, if they let people pass first, but this risks the robot stalling. That is why in dense crowds it might be preferable for the robot to start nudging to make way for the guided person.


'''The traction condition'''
Integrating into a crowd is an important behaviour of the robot. Inside the TU/e, maximum crowd densities are assumed to occur only rarely over the span of a year. In less dense crowds the guide should be able to integrate into the flow without hitting other people. However, in the scenario that crowds are very dense, the guide should be able to act in a more assertive manner, thanks to the increased safety measures preventing harmful human-robot collisions.


The motor should have enough traction on the track to drive it.
==Simulation==


'''The friction condition'''
'''Goal''':


The robot should not slide on the stairs.
In order for the behaviour description to be relevant, we show that the proposed behaviour is safe to employ in a representative environment. To measure this safety, we first of all measure collisions, making the reasonable assumption that these are the primary source of harm our robot can inflict. The simulation will gather data about the frequency of collisions and statistics on the forces applied to the person and robot during a collision. Secondly, we consider the adherence of the robot's behaviour to the ISO guidelines for safety<ref name=":0" /><ref name=":4" />, focusing on the minimum safety gap and the maximum relative velocity guidelines.


The paper goes very in-depth on how to evaluate stability, but this mostly falls outside the scope of this assignment.
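Of the three conditions, the friction condition is the easiest to state quantitatively: in its simplest static form, the robot does not slide if the friction coefficient is at least the tangent of the stair slope. The helper below is our own simplification of that condition, not the paper's full dynamic analysis:

```python
import math

def satisfies_friction_condition(slope_deg, mu):
    """Static friction condition: the robot does not slide on a stair
    slope of slope_deg degrees if the track-step friction coefficient
    mu is at least tan(slope). (A sketch of the simplest static case.)"""
    return mu >= math.tan(math.radians(slope_deg))
```

For a typical 30° staircase this requires mu ≥ ~0.58, which rubber tracks on dry concrete comfortably exceed; the geometric and traction conditions remain design-specific checks.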
'''Overview of applicable ISO Safety standards'''


====Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques====
According to ISO 10218-2:2011<ref name=":4" />, for an (industrial) robot to operate safely:
This paper reviews existing autonomous campus and tour-guiding robots. SLAM is the most often used technique, building a map of the environment and guiding the robot to the goal position. Common techniques for robot navigation: human-machine interface, speech synthesis, obstacle avoidance, 3D mapping. ROS is a popular open-source framework to operate autonomous robots; it provides services designed for a heterogeneous computer cluster. SLAM is achieved via laser scanners (LiDAR) or RGBD cameras. The paper names some popular such robots:
TurtleBot2: low-cost, ROS-enabled autonomous robot, using a Microsoft Kinect camera (an RGBD camera). TurtleBot 3 is the upgraded version, which uses LiDAR instead.
Pepper robot: service robot used for assisting people in public places like malls, museums, and hotels. Uses wheels to move.
REEM-C: ROS-enabled autonomous humanoid robot, using an RGBD camera for 3D mapping.
The paper contains useful tables with information about these robots, as well as popular ROS computing platforms and mapping sensors.
The authors propose the use of LiDAR measurements of a road's surface to detect road boundaries; based on a multiple-model method, the existence of curbs is determined.
The authors propose the usage of a Kinect v2 sensor, rather than range finders such as 2-D LiDAR, as it can create dense and robust maps of the environment. It is based on the time-of-flight measurement principle and can be used outdoors. The paper also introduces noise models for the Kinect v2 sensor for calibration in both axial and lateral directions. The models take the measurement distance, angle, and sunlight incidence into account.
As an example of a tour-guide robot, the paper presents Nao, which provides tours of a laboratory. This robot is more focused on human interaction and can thus perform and detect gestures.
NTU-1: autonomous tour-guide robot that gives tours on the campus of National Taiwan University. It is a big robot, weighing around 80 kg, with a two-wheel differential drive actuated by a DC brushless motor. It uses multiple sensing technologies such as DGPS, dead reckoning, and a digital compass, which are all fused by way of Extended Kalman Filtering.
For obstacle avoidance and shortest-path planning, 12 ultrasonic sensors are used, allowing the robot to detect objects within a range of 3 meters.
Another robot explored in the paper is an intelligent robot for guiding the visually impaired in urban environments. It uses two laser range finders, GPS, a camera, and a compass. Other touring robots explored in the paper are ASKA, Urbano, Indigo, LeBlanc, Konard, and Suse.
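The Kalman-filter fusion used by NTU-1 can be illustrated in one dimension: dead reckoning predicts the position, and an absolute fix such as DGPS corrects it. The sketch below is a deliberately simplified 1-D linear filter of our own, not the paper's full EKF:

```python
def kf_step(x, p, u, q, z, r):
    """One predict/update cycle of a 1-D Kalman filter. x, p are the
    state estimate and its variance; u is the dead-reckoned displacement
    (process noise variance q); z is an absolute position fix such as
    DGPS (measurement noise variance r)."""
    # predict with dead reckoning
    x_pred = x + u
    p_pred = p + q
    # update with the absolute fix
    k = p_pred / (p_pred + r)       # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

The key property mirrors the localization discussion earlier on this page: prediction alone lets the variance grow without bound, while each absolute fix shrinks it back down.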


*Protective barriers should be included (Which is discussed in the body design)
*Warning labels should indicate potential hazards. As the robot does not operate any manipulators or tools, and is designed not to be able to crush someone, or run someone over, the only danger here is tripping, which is also minimized in the design.
*Light curtains, pressure mats, and other safety devices: the robot includes whiskers at the front, which also help avoid a direct body collision.
*Others, which do not apply to this robot, as it is not present in an industrial setting.


==Users==
In addition, ISO 15066:2016<ref name=":0" /> indicates requirements for robots in proximity to human operators:
===Problem statement===
Crawl spaces are home to a number of possible dangers and problems for home inspectors. These include animals, toxins, debris, mold, live wiring, or even simply their height. <ref>Gromicko, N. (n.d.). ''Crawlspace Hazards and Inspection''. InterNACHI®. <nowiki>https://www.nachi.org/crawlspace-hazards-inspection.htm</nowiki></ref> Using robots to help inspect crawl spaces is already being done. However, there are still some reasons why they are not fully adopted. Robots might get stuck due to wires, pipes, or ledges, or it might be difficult to control the robot remotely. Lastly, there is the argument that a human is still better equipped to do the inspection, as feel and intuition can tell more than a camera view. <ref>Cink, A. (2022, 5 April). ''Crawl Bots for Home Inspectors: Are they worth the investment?'' InspectorPro Insurance. <nowiki>https://www.inspectorproinsurance.com/technology/crawl-bots/</nowiki></ref>


To eliminate the problem of control, the robot should be autonomous, capable of traversing the crawl space by itself without getting stuck or trapped. The overall goal of the robot is to reduce possible harm to a human, which it will do by creating a 3D map of the environment. Because a human inspector eventually needs to enter a crawl space themselves, they will know what to expect from the crawl space and can prepare beforehand for any dangers or problems.
*A risk assessment should be made to identify hazards to surrounding personnel.
===Requirements===
*Monitoring systems to keep track of speed and separation, which are included in the form of the LIDAR sensor.
There are a few functions that the robot must be able to perform in order to work as a general crawlspace robot. Additionally, since we are improving on current models that rely on cameras and human control, there are some further requirements too.
*Force and power limits of the robot: The robot is not incredibly high powered. The amount of force it can apply in normal operation depends on the physical implementation of this proposed concept, but is unlikely to be problematic, as the drive train does not need high-powered actuators to function.
*Emergency stop (already covered in the other ISO standard)
*A safety distance gap should be kept between the robot and the people around it. This is however waived, as we are developing a solution that aims to be safe without needing to keep clear of humans.
*Force and Pressure limits are imposed on the robot, to prevent serious harm, both during normal operation and collision. These include:
**A limit on the contact force during collision of 150N, '''this will be the focus of the simulation'''.
**A pressure limit during collision of 1.5 kN/m^2. We make the reasonable assumption that this pressure limit cannot be reached without violating the previous condition, as our robot is designed to be as smooth as possible, making it very difficult for the robot to apply a lot of force in a small, local area.
**Force limiting or compliance techniques are to be implemented to reduce the force applied during collision. This comes in the form of whiskers for the compliance aspects, allowing them to deform to reduce the impact, and the behaviour is designed to limit the force by slowing, satisfying the limitation requirement.
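The force and pressure limits above can be captured in a small check. A minimal sketch, assuming hypothetical function and constant names, and using the thresholds adopted in this report:

```python
# Hypothetical helper: checks a simulated collision against the force and
# pressure limits adopted in this report (150 N contact force, 1.5 kN/m^2
# contact pressure). Names follow this report, not the ISO text verbatim.

MAX_CONTACT_FORCE_N = 150.0    # collision force limit from the requirements
MAX_CONTACT_PRESSURE = 1.5e3   # collision pressure limit in N/m^2

def collision_within_limits(force_n: float, contact_area_m2: float) -> bool:
    """Return True if a collision stays within both safety limits."""
    if force_n > MAX_CONTACT_FORCE_N:
        return False
    pressure = force_n / contact_area_m2
    return pressure <= MAX_CONTACT_PRESSURE
```

A collision can thus fail either on raw force or, for a very small contact area, on pressure alone.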


Firstly, it should be able to enter crawlspaces based on its size. In the US, crawl spaces typically range from 18in to 6ft (around 45cm to 180cm)<ref>Crawl Pros. (2021, 12 March). ''The Low Down: Crawl Spaces vs. Basements 2022''. <nowiki>https://crawlpros.com/the-low-down-crawl-spaces-vs-basements/</nowiki></ref>, while in the BENELUX (Belgium, Netherlands, Luxembourg) the average lies between 40cm and 80cm<ref>G. (2022, 19 December). ''Kruipruimte uitdiepen (10 - 70 cm)''. De Kruipruimte Specialist. <nowiki>https://de-kruipruimte-specialist.nl/kruipruimte-uitdiepen/</nowiki></ref>, sometimes even being smaller than 35cm. Entrances are of course even smaller.
As a performance measure for the simulation, we consider the maximum force applied in a collision during the duration of the simulation. This is the element of the ISO standards that is not negated by the behaviour design or the design of the body itself, and thus the part that remains to show the robot is safe to operate.


The robot must also be protected from the dangers of the environment: a protective casing, protection against live wires, reasonable waterproofing (the robot is not designed to work under water, but humidity or leaks should not shut it down) and a way to be safe from animals.
This data can also inform the design of the robot and its behaviour, as the simulation can test various form factors and navigation algorithms to optimize. In the end, the simulation results assist design iteration and ultimately inform us about the viability of the robot in crowds.


Next, it must also be able to traverse the space, regardless of pipes, or small sets of debris.
'''Why a simulation:'''


Following from that, it must also be able to travel autonomously, while keeping track of its position, to make sure it has been through the entire crawlspace.
Testing which techniques have an impact requires a setting with a lot of people forming a crowd, which can be controlled precisely enough to eliminate outside or 'luck' factors.
The performance needs to be a function of measurable starting conditions, and the behaviour of the robot.
When using a real robot, we would need an iterative approach, altering the appearance and workings of the robot after each experiment to test different scenarios. This would require re-building the robot
each time, which is something we simply don't have time for. Additionally, obtaining a large enough crowd (think of more than 100 students) would be tricky at such short notice. Using a real-world crowd (by going to the buildings in-between lectures) would present the most accurate situation, but is neither controllable nor reproducible. There is also the ethical dilemma of testing a potentially hazardous robot in a real crowd, and logistically, organizing a controlled experiment with a crowd of students is not an option.


Lastly, the most important added feature, it must be able to complete a 3D mapping of the environment.
===Simulation: situation analysis===
In the real world, the robot would guide a blind person through the Atlas building to a goal. This situation can broadly be dissected as:


To be able to perform these tasks, there are some technical requirements.
*Performance Measure: The maximum force applied during collision with a person, which cannot exceed 150 Newton.
*Environment: Dynamic, partially unknown interior room, designed for human navigation.
*Actuators: wheels.
*Sensors: LIDAR & Camera, abstracted to General purpose vision and environment mapping sensors, but are assumed to be limited range and accuracy, systems capable of deducing depth, position and dynamic- or static obstacles.


The robot must have enough data storage or a way to transmit it quickly, to handle the 3D modelling.
The environment is assumed to be:


Next, it must have enough processing power to navigate through the environment at a reasonable speed.
*Partially Observable
*Stochastic
*Competitive and Collaborative (humans aid each other in navigation, but are also their own obstacles)
*Multi-agent
*Dynamic
*Sequential
*Unknown


It must have a power supply strong enough to make sure it can complete the mapping of a full crawlspace.
===Considered simulation design variants===
Simulating the robot may take various shapes, each with their own advantages. When considering the type of simulation we will make, we considered the following aspects:
Environment Model:


Lastly, it must have
*Mathematical: Building a model of the environment, purely based on mathematical expressions of the real world.
===Current crawlers===
*Geometrical: Building a 3d version of the environment, using a 3d virtual representation of the environment.
As mentioned before, there are already some robots in use to help inspectors with their job.
*2D: The environment does not consider depth
*3D: The environment does consider depth


First, we have [https://www.inspectioncrawlers.com/ inspectioncrawlers], where different crawler robots can be bought. All of their robots share a set of basic specifications: hours of runtime, high-quality cameras, protective covers, wireless (distant) control, waterproof electronics and good lighting. The main advantage of these robots is that they provide an almost 360° camera view of their surroundings, which allows an inspector to see most of the environment. However, control by a human operator is still necessary. [[File:Inspectorcrawlers.jpg|center|thumb|Three different crawler robots by Inspectioncrawlers. <ref>https://www.inspectioncrawlers.com/</ref>]]Next there is the [https://www.superdroidrobots.com/store/industries/pest-control/product=2729 GPK-32 Tracked Inspection] Robot from SuperDroid Robots. With dimensions of only 32cm by 24cm and a height of 19cm (12.5" X 9.5" X 7.25"), it can easily fit in most crawlspaces. Included are several protective items, such as a Wheelie Bar to protect from flipping, a Roll Cage to protect the camera and a Debris Deflector. Its biggest disadvantage is that it requires line of sight or proximity in order to be controlled wirelessly. [[File:Tracked inspection.jpg|center|thumb|The GPK-32 Tracked Inspection Robot by SuperDroid Robots. <ref>https://www.superdroidrobots.com/store/industries/pest-control/product=2729</ref>]]Lastly, there is a [https://www.superdroidrobots.com/store/industries/pest-control/product=2452 tethered inspection robot] from SuperDroid Robots. The entire system is waterproof, has a longer runtime, and the camera allows a 360° pan with a -10°/+90° tilt, which allows for clear vision. There are two main disadvantages, one being its size and the other the fact that it requires tethering to be controlled.
With dimensions of 48cm by 80cm and a height of 40cm (18.9" X 31.2" X 15.7"), it is a bit big to be used in some crawl spaces, which might cause it to get stuck more easily. Lastly, the fact that it is tethered means that the cable can also easily get stuck and that the robot requires more precise control.[[File:Tethered bot.jpg|center|thumb|The LT2-F-W Watertight Tethered Inspection Robot by SuperDroid Robots.<ref>https://www.superdroidrobots.com/store/industries/pest-control/product=2452</ref>]]
Robot Agent:


===Guide dog research===
*Global awareness: The robot model has access to all information across the entire environment.
This text is based on multiple papers in which the communication between robot and handler is central. The results most sought after were:
*Sensory awareness: Observing the Simulated environment with virtual (imperfect) sensors. The robot only has access to the observed information.
Selecting a method of steering guidance (how to indicate the sharpness of turns, what kind of obstacles are present, and the confidence that the path is free).
*Mechanics simulation: The detail at which the robot's body is modelled. Factors include whether the precise shape is considered, the accuracy of actuators and other systems, and delay between command and response.


For this research I mainly looked at the function of guide dogs, since other options such as human guides can communicate more clearly with the person with impaired vision. The tasks of a guide dog are<ref>What Guide Dogs Do - How Guide Dogs Work | HowStuffWorks</ref><ref>What A Guide Dog Is Trained To Do | Guide Dogs</ref><ref>How Do Guide Dogs for the Blind Work? Everything You need to Know (mypetneedsthat.com)</ref>:
Crowd Behaviour Model:


*Walk centrally along the pavement whilst avoiding (dynamic) obstacles on the route
*Boid: Boids are a common method of simulating herd behaviour in animals (particularly fish)
*Maintain a steady pace
*Social Forces: The desire to approach a goal, avoid obstacles and follow the crowd is captured in vectors, which determine the velocity of each agent in the crowd.
*Not turn corners unless told to do so
*Stop at kerbs and steps
*Find doors, crossings and places which are visited regularly
*Bring the handler to elevator buttons
*Judge height and width so you do not bump your head or shoulder
*Help keep you straight when crossing a road - but it is up to you to decide where and when to cross safely
*Move on command, but obediently ignore when dangerous


Dogs obey commands using hand and vocal signals. On our questions about the importance of communicating the sharpness of turns, the kind of obstacles, and the confidence of a free path, the following answers are provided.
===Simulation: Crowd implementation===
To test the robot's capabilities in crowds through a simulation, the simulation must include a realistic model of how crowds behave. In the 1970s, Henderson already related a macro view of crowds to fluid dynamics with great success<ref>Henderson LF. The statistics of crowd fluids. Nature. 1971 Feb 5;229(5284):381-3. doi: 10.1038/229381a0. PMID: 16059256.</ref>. For the local interactions the robot would experience in real life, this macro view is not realistic enough. Therefore, we have to use a more micro-level description of crowds. We came across the social force model created by D. Helbing and P. Molnár<ref>Helbing, D., & Molnar, P. (1995). Social force model for pedestrian dynamics. ''Physical review'', ''51''(5), 4282–4286. <nowiki>https://doi.org/10.1103/physreve.51.4282</nowiki></ref> in 1995. This model is well acclaimed, and even though it has its drawbacks, such as a full stop of pedestrians not working well in the model, we have decided to use the original 1995 formulation.


====Indicating sharp corners====
The social force model is a physical description of pedestrian behaviour: it models pedestrians as point masses with physical forces acting upon them. Each pedestrian experiences a few different forces, which will be shortly explained. First, there is a driving force. This force models the internal desire of a pedestrian to go somewhere, and is represented as a direction and the pedestrian's desired walking speed. The desired walking speed used is the one the paper suggests, namely a normally distributed random variable with a mean of 1.34 m/s and a standard deviation of 0.26 m/s. The direction is calculated using Unity's NavMesh, which generates paths through the environment given a start and an end. Second, every pedestrian experiences a repulsive force generated by other pedestrians. These repulsive forces build on the fact that humans want to keep enough distance from each other and instinctively take into account the step size of others. This is modelled by creating an ellipse as big as the step the other pedestrian is taking. Depending on this ellipse, it is turned into a force which grows exponentially the closer you are to the other pedestrian; this is called the territorial effect, and it points away from the other pedestrian. This is done for every pedestrian in the vicinity. Third, there is another repulsive force from walls and obstacles. This one is far simpler, as it can be described by a force that grows exponentially the closer you get to an obstacle, pointing away from the obstacle. Finally, there is an attractive force. This force can be used for multiple things: friends you would want to walk closer to, or interesting objects or people in the vicinity. It decreases over time as people lose interest; however, this force is not applied in our model. Both the repulsive and attractive forces are weighted depending on whether the object applying the force is inside the pedestrian's field of vision.
The net force applied to a pedestrian is the summation of all these forces and is applied as an acceleration, where the maximum attainable speed of a pedestrian is capped by its desired speed. For performance reasons, most of this calculation is done in parallel on the GPU; because of this, a trade-off was made. For the repulsive force generated by walls, only the closest object is taken into account, since passing all objects to the GPU creates too much overhead for the CPU loading the data. If everything had been handled by the CPU, however, the possible number of simulated people would have been too small to form a crowd.
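The per-pedestrian update described above can be sketched as follows. This is a simplified illustration, not our GPU implementation: the elliptical territorial effect is reduced to a radial exponential repulsion, the field-of-vision weighting and wall forces are omitted, and the constants `A`, `B` and `TAU` are illustrative rather than the paper's calibrated values.

```python
import numpy as np

# Simplified social force step for one pedestrian (2D numpy arrays).
# Assumed illustrative constants, not the calibrated values from the paper.
TAU = 0.5        # relaxation time toward the desired velocity (s)
A, B = 2.0, 0.3  # repulsion strength (m/s^2) and range (m)

def social_force_step(pos, vel, goal, desired_speed, others, dt=0.1):
    """One Euler step; `others` is a list of other pedestrians' positions."""
    # Driving force: relax toward the desired velocity along the goal direction
    direction = (goal - pos) / np.linalg.norm(goal - pos)
    f = (desired_speed * direction - vel) / TAU
    # Territorial effect, simplified: exponential push away from each neighbour
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff)
        if dist > 1e-6:
            f += A * np.exp(-dist / B) * diff / dist
    vel = vel + f * dt
    # The attainable speed is capped by the desired speed, as in the model
    speed = np.linalg.norm(vel)
    if speed > desired_speed:
        vel = vel * desired_speed / speed
    return pos + vel * dt, vel
```

Starting at rest with a goal straight ahead and no neighbours, the pedestrian accelerates toward the goal and settles at its desired speed.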
Nothing really useful has been found on turning sharp corners. Basically, dogs guide their handlers by going through first and staying close to the handler's side; that is how the handler knows to go a little bit left or right. An important note is that guide dogs should not turn unless the handler says so. This matters for a guide robot, because the handler does not know when to turn, so this rule should be dropped; the communication can instead go the other way round, such as the robot speaking or using vibrations to show it is about to turn left or right. However, it is to be researched whether this communication is necessary, or whether the robot simply turning ahead of the blind person is enough.


====Indicating (dynamic) obstacles====
===Simulation: Robot agent===
Guide dogs are already taught to stop at different static obstacles such as kerbs, stairs, and lifts. There are therefore protocols for these, and they can be copied. Dynamic obstacles, dogs can probably avoid just like humans do, so it is hard to say exactly how dogs handle or communicate them apart from simply steering.
The robot agent was implemented using Unity. The body of the robot was created by importing the CAD model into Blender and then importing it into Unity. To this model a mesh collider is added to try and make collisions more precise. Attaching a rigid body to the robot agent allowed it to interact with its environment as well as follow the laws of physics (or at least the physics of the Unity engine).
The behaviour of the robot was implemented in the following way:


====Indicate confidence of free path====
====Map of the environment====
Guide dogs do not really indicate their confidence that the path is free; they simply make sure, as well as possible, that dynamic objects do not hit their handler. So it is not clear whether the robot guide should indicate this; it may have an adverse effect.
One of our base assumptions was that the robot has a map of the environment it is in, with landmarks placed. Thus, it knows how the base environment is structured according to, for example, the floor plan: it knows where there are walls as well as points of interest, which are the goals to which it will guide people. This was implemented in the simulation via Unity's NavMesh, which allows us to create a mesh of the environment, dividing the space into places where the robot can and cannot move. Then, using the default path-finding algorithm of NavMesh, the robot agent calculates a path using this mesh, moving through the environment while also keeping in mind the overlay of the map. The only issue with this approach is that the algorithm used for pathfinding is A*, which will calculate the shortest path to the goal, but the shortest path is sometimes not the best path overall.
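For reference, the A* search that NavMesh performs can be sketched on a simple occupancy grid. Unity searches over the polygons of the navigation mesh rather than grid cells, so this is an illustration of the algorithm only; all names here are our own.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a grid: 0 = free, 1 = wall. Returns a list of (row, col) or None."""
    def h(a, b):  # Manhattan distance heuristic (admissible on a 4-connected grid)
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    counter = itertools.count()  # tie-breaker so the heap never compares nodes
    open_set = [(h(start, goal), next(counter), 0, start, None)]
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:      # already expanded with a shorter path
            continue
        came_from[node] = parent
        if node == goal:           # walk back through parents to build the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(
                        open_set,
                        (ng + h((nr, nc), goal), next(counter), ng, (nr, nc), node),
                    )
    return None  # goal unreachable
```

The guarantee of A* is only shortest-path optimality, which illustrates the issue noted above: the shortest route can still lead straight through a congested area.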


Basically, when indicating corners and static obstacles, there is little free wiggle room, because certain conventions are already in place with guide dogs. They can, however, be replaced with other methods; keep this in mind.
====Sensing the environment====
An important point for corners is that guide dogs are trained not to take corners unless told to. In the CaBot, the robot uses vibration intervals to indicate the sharpness of the turn.
Our robot agent is supposed to use a combination of a LiDAR, a camera and a thermal camera to recognize obstacles in its path that are not in the built-in map, in other words the more dynamic obstacles. In our report we have described how one could detect such obstacles by using point clouds to build a map of the close environment around the robot and combining that map with the thermal camera vision to detect humans. In the simulation, due to constraints, we instead make use of Unity's raycast functionality, which allows us to cast light beams from our agent. Using multiple raycast beams we emulate a 2D LiDAR. With this LiDAR as the main sensor, we created two versions.
With dynamic obstacles and indicating the confidence of a free path (which is much the same as dynamic obstacles), there is a lot to play with when indicating this to the handler. The guide dog would simply walk around these obstacles.
The first version has better obstacle avoidance and overall smoothness of movement. Using the raycasts, when a beam hits an object tagged as an "undiscovered human" or "undiscovered obstacle", it converts the tag to discovered, which then carves a space around the object on the mesh, making the agent move around the object if the path it must take comes near the obstacle. This version has some limitations, however. Due to the implementation of NavMesh and the movement AI in Unity, it does not follow the regular laws of physics, so the robot could not interact with its environment correctly. Thus, we created a second version.


However, information on how guide dogs exactly guide is sparse on the internet; it mostly comes down to lists of what guide dogs are trained to do. It is therefore not yet clear what visually impaired people find important when being guided, so we would reach out to Visio. There is also some literature I (Wouter) found on guide robots, but did not have time to read.
The second version makes use of NavMesh to calculate a path much like the first version, but the way the agent moves is different. Rather than depending on the navigation AI, it uses a movement function of the rigid-body component to traverse the environment and follow that path. This allows the agent to have physics in its interactions with its environment. The obstacle detection and avoidance are also done differently. Rather than carving out the mesh, we use three different sets of beams: left, right and front. Based on where an obstacle is detected, the agent reacts by slightly deviating from its path. The issue with this version, however, was that the movements of the robot were not smooth; while it could interact better with its environment, its movements when turning, for example, were not realistic. That is why we used the first version for the macro simulation.
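The left/front/right beam logic of this second version can be sketched as follows. Here `cast_ray` is a stand-in for Unity's `Physics.Raycast`, assumed to return the hit distance or `None`; the ray count, field of view and zone split are illustrative choices, not the values used in the simulation.

```python
import math

# Sketch of the three-zone raycast sensing: a fan of 2D rays grouped into
# left / front / right zones, deviating toward the side with more clearance.
# `cast_ray` is an assumed callable: angle (rad) -> hit distance or None.

def steer_from_rays(cast_ray, heading, n_rays=15, fov=math.pi / 2,
                    max_range=2.0):
    """Return 'forward', 'left' or 'right' based on which zone is clearest."""
    zones = {"left": [], "front": [], "right": []}
    for i in range(n_rays):
        # Spread the rays evenly across the field of view around the heading
        angle = heading - fov / 2 + i * fov / (n_rays - 1)
        dist = cast_ray(angle)
        hit = dist if dist is not None else max_range  # no hit = fully clear
        third = i * 3 // n_rays                        # 0, 1, 2 = zone index
        zones[("left", "front", "right")[third]].append(hit)
    # If the front zone is clear, keep following the NavMesh path
    if min(zones["front"]) >= max_range:
        return "forward"
    # Otherwise deviate toward the side with the larger minimum clearance
    return "left" if min(zones["left"]) > min(zones["right"]) else "right"
```

In the simulation this decision would be re-evaluated every physics step, which is also why the resulting motion tends to jitter rather than curve smoothly, as noted above.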


==User scenarios==
Finally, it must be noted that the follow-and-bump behaviour in the implementation of the robot has some issues, mainly that the robot would sometimes follow when it should not, as well as not follow in moments where such behaviour would be most efficient. The reason for these issues is that it is difficult to determine which person would be an ideal candidate to follow. Our implementation depends on the direction the agent is looking in, as well as the rotation of the humans around the robot: if both the robot and a human have the same rotation, that human is seen as a potential candidate. While on paper the idea seems good, in certain situations, for example when the robot is turning around corners or making small adjustments to its path, it will not be looking at the final goal. This means it may start following a person going in the wrong direction, provided there is no other option but to initiate the follow behaviour (when detected obstacles prevent the robot from moving left, right or forwards). This leads the robot to sometimes take inefficient paths.
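The candidate-selection rule described here, and its pitfall, can be sketched as a heading-alignment test. The tolerance value is an assumption for illustration; the simulation compares full rotations rather than a single yaw angle.

```python
import math

# Sketch of the follow-candidate test: a pedestrian qualifies when their
# heading is within a tolerance of the robot's own heading. The tolerance
# is an illustrative assumption.

def angle_diff(a, b):
    """Smallest absolute difference between two headings, in radians."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def follow_candidates(robot_heading, pedestrians, tolerance=math.radians(20)):
    """pedestrians: list of (id, heading). Returns ids aligned with the robot.

    Note the pitfall from the text: while the robot is turning a corner, its
    momentary heading differs from the direction toward the goal, so an
    'aligned' pedestrian may in fact be walking the wrong way."""
    return [pid for pid, h in pedestrians
            if angle_diff(robot_heading, h) <= tolerance]
```

Because the test uses the robot's momentary heading rather than the goal direction, any mid-turn call picks candidates aligned with the turn, which is exactly the failure mode observed.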


=== Physical contact through crowded spaces ===
===Simulation: Environment===
Jack is partially sighted and can see only a small part of what is in front of him. He has recently been helping fellow students with their field tests of a robot guide. Last month he worked with a robot called Visior, which helps steer him through his surroundings. Visior is inspired by and shares its physical features with CaBot.
The environment is a 3D geometry-based replica of the first floor of the Atlas building in terms of large collision parameters.
It has been constructed by tracing the edges of a floorplan of Atlas, provided by the RE department, with collision objects.


When Jack used Visior to get to the library to pick up a print request, he had to pass through a moderately crowded Atlas building, since there was an event going on. This went mostly as expected: not too fast, and having to stop semi-periodically because of people walking or stopping in front of Visior. The robot was strictly disallowed from purposely making physical contact with other humans. Jack knows this, so he learned to step up in these situations and kindly ask the people in front to make way. This used to happen less when he used his white cane, since people would easily identify him and his needs. After Jack arrived at the printing room in MetaForum, he picked up his print request. He handily put his batch of paper on top of his guiding robot so he didn't have to carry it himself.
After the model was constructed, it was re-scaled in the Unity engine to match the metric dimensions of the Atlas building.
It should be noted that not all elements of the floorplan are accurate, as the layout of Atlas changes frequently to accommodate events.


On his way back, he almost fell over his guiding robot when it suddenly stopped as a hurried student ran by. Luckily, he did not get hurt. When Jack came home after this errand, he crashed on his couch, exhausted from anticipating the robot's quirky behavior.
The model has various abstractions to accommodate the constraints of the simulation.
Entryways have been blocked off to prevent the crowd from walking outside of the defined perimeter, and doors are considered to be closed.
The stairs have also been omitted, or remodelled to be impassable, as we do not consider other floors of the Atlas building in this simulation.
Only the lower portion of this floor is considered, as there will be no walking crowd that collides with anything higher than 2 meters.


The next day the researchers and developers of Visior came to ask about his experiences. Jack told them about his experience with Visior and their trip to the library. The developers thanked him for his feedback and started working on improving Visior.
===Simulation: Results===
'''Parameters:'''


This week they came back with the new and improved Visior robot. This version has been fitted with a softer exterior and now rides in front of Jack instead of by his side. The developers have made it capable of safely bumping into people without causing harm. They have also made it capable of telling Jack when it thinks it might have to stop suddenly, to put him a bit more at ease when traveling together.
To obtain the results, the simulation was run with the robot starting at the north side of Atlas, moving towards the goal on the opposing south side. The crowd was set up to contain 1500 agents, which is the maximum number of people the ground floor of Atlas is designed for, according to the Real Estate department.
[[File:CROWD.png|alt=CROWD|thumb|1056x1056px|Screenshot of the crowd simulation in ATLAS. The robot is about to approach a chokepoint.]]
'''Expected results:'''


The next day Jack used it again to make a trip to the printing space in MetaForum to compare the experience. When passing through the crowded Atlas again (there somehow always seems to be an event there), he was pleasantly surprised. He found it easier to trust Visior now that it was able to communicate the points in the trip where Visior thought they might have to stop or bump into other pedestrians. For example, when they came across a slightly more crowded space, Visior guided Jack to walk alongside a flow of other pedestrians and made him aware of the slightly unknown nature of their surroundings. Then, when a student suddenly tried to cross their path without looking, Visior unfortunately bumped into their side and gradually slowed their pace down to a halt. Jack obviously felt the bump, but was easily able to stay stable thanks to the prior warning and the less drastic decrease in speed. The student, now aware of something moving in their blind spot, immediately stepped out of the way and looked at Jack and Visior, seeing the sticker stating that Jack was visually impaired. Jack asked them if they were alright, to which they responded that they were fine, after which they both went on their way. After picking up his print, he went back home. On his way back he had to pass through the small bridge between MetaForum and Atlas, in which a group of people were now talking, blocking a large part of the walking space. Visior guided Jack to a small traversable path open beside the group, taking the risk that the person there would slightly move into their path. Visior and Jack could luckily squeeze by without any trouble, and their way back home was further uneventful.
The expected result is for the social force model to generate a crowd that is typical of a very busy day in Atlas. With this comes:


When the developers of Visior came back the next day to check up on him, Jack told them the experience was leagues better than before. He told them he found walking with Visior less exhausting than it had been, and found its behavior more human-like, making it easier to work with.
*The generation of dense 'streams' of agents moving along similar paths from goal to goal.
*The existence of sparse and dense pockets of space, where some areas are more heavily congested.


=== Familiar guidance advantage ===
We do not expect the social force model to generate agents that are stationary near goals (such as real students buying a drink, creating congestion around a coffee machine), as the model is focused on the movement of pedestrians.
Meet Mark from Croatia.
He is a minor student following Mathematics courses, and lives on (or near) campus.
Mark is severely near-sighted; born with the condition, he has never seen very well. Mark is optimistic but chaotic.
Mark likes his study, and likes playing piano.


Notable details:
In order to behave safely in accordance with the ISO 10218-2:2011 and 15066:2016 requirements<ref name=":0">ISO 15066:2016(EN) Robots and robotic devices — Collaborative robots, International Organization for Standardization.  https://www.iso.org/standard/62996.html, 2016</ref><ref name=":4">ISO 10218-2:2011 Robots and robotic devices — Safety requirements for industrial robots — Part 2: Robot systems and integration, International Organization for Standardization, https://www.iso.org/standard/41571.html, 2011-07</ref>, we expect the robot to:
Mark makes use of a white cane and audio-visual aids to assist with his near-sightedness.
He just transferred to TU/e for a minor, and doesn't know many people yet. Mark will only be here a short time for his minor. He has a service dog at home, but does not have the resources, time or connections to provide for it here, and so he left it at home.


Indoors, Mark finds it hard to use his cane because of crowded hallways, and he dislikes apologizing when hitting someone with his stick or being an inconvenience to his fellow students. Mark can read and write English fine, but still feels the language barrier.
*Avoid collisions in the sparsely populated areas and follow its own path.
*Follow crowd-agents to prevent collisions in adequately dense areas, where there is still enough space to avoid agents but not enough to find its own path.
*Follow its own path when the currently followed agent deviates too much from the optimal path.
*Bump into crowd-agents when there is insufficient space to avoid them.
*When bumping, the force should be minimal: The robot should ensure a relative velocity low enough to not cause pain or major discomfort.
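To give a feeling for what "a relative velocity low enough" means, the average contact force of a bump can be approximated with the impulse relation F ≈ m·Δv/Δt. The robot mass and contact duration below are assumptions for illustration, not measured values.

```python
# Back-of-the-envelope sketch: average bump force via the impulse relation
# F = m * dv / dt. Mass and contact time are illustrative assumptions; the
# 150 N limit is the requirement stated above.

ROBOT_MASS_KG = 15.0     # assumed robot mass
CONTACT_TIME_S = 0.2     # assumed duration of a cushioned (whisker) contact
FORCE_LIMIT_N = 150.0    # collision force limit from the requirements

def bump_force(relative_speed_mps: float) -> float:
    """Approximate average contact force for a given relative speed."""
    return ROBOT_MASS_KG * relative_speed_mps / CONTACT_TIME_S

def max_safe_bump_speed() -> float:
    """Largest relative speed keeping the approximate force under the limit."""
    return FORCE_LIMIT_N * CONTACT_TIME_S / ROBOT_MASS_KG
```

Under these assumptions, a relative speed of 1 m/s gives roughly 75 N, and the speed ceiling works out to about 2 m/s; softer, longer contacts (larger Δt) raise that ceiling, which is the point of the compliant whiskers.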


In a world without our robot, Mark might have to navigate like this:
'''Result:'''
Mark has just arrived for his second day of lectures and will be going to the wide lecture hall at Atlas -0.820. Mark again managed to walk to Atlas (as we will not be tackling exterior navigation), and uses his cane and experience to navigate the stairs and rotary door of Atlas, using it to determine the speed and size of the revolving element to get in, and using the cane to determine the position of the doors and opening (https://youtu.be/mh5L3l_7FqE).


Once inside, he is greeted by a fellow student who has noticed him navigating the door. Mark had already started concealing the use of his cane, as he doesn't like the attention, and so the university staff didn't notice him. Luckily, his fellow student is more than willing to help him get to his lecture hall. Unfortunately, the student is not well versed in guiding the visually impaired, and it has gotten busy with students changing rooms.
*We observed that central spaces, such as the centre of the main hall, are indeed very calm. The crowd that formed was very sparse here,


Mark is dragged along to the lecture hall by his fellow student, bumping into other students who don't notice he cannot see them, as his guide hastily pulls him past. Mark almost loses his balance when his guide slips past some other students, narrowly avoiding a trashcan while dragging Mark by the arm. Mark didn't see the trashcan, which is not at eye level, and collides with its metal frame while trying to copy the movement of his guide to dodge the other students. He is luckily unharmed, and manages to follow his guide again, until he is finally able to sit in the lecture hall, ready to listen for another day.
and as such the robot could use standard avoidance and pathfinding algorithms, A* in this case, to avoid the agents of the crowd and reach the goal without making a single collision.


With our robot however:
*We also observed that there tend to be congested areas around hallway entries and more narrow spaces. Here the crowd would become very dense, with agents themselves bumping into each other or narrowly avoiding each other.
Mark arrives inside Atlas, and is greeted by a fellow student, who noticed him struggling with the door. The student knows there are guidance robots at this building, and helps Mark to a guidance robot clearly waiting by the entrance. He helps mark enter his destination in the interface, and leaves him to go to his own lecture.
*We observed that the robot generally crosses a stream of densely packed agents, rather than that the stream moves in the same direction as the robot, so that in can follow it. While doing so, it does attempt to avoid agents, or reduce impact.
*We observed that the robot does indeed bump into agents that are in the way, but it is hard to definitively state the robots uses bumping as a last resort.
*We observed that the robot bumps through congested areas, instead of avoiding them, if its path requires it to get through this area.
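The "standard avoidance and pathfinding" (A*) behaviour mentioned above can be illustrated with a minimal sketch. This is not the project's Unity implementation; it is a toy planner on a 4-connected grid in which crowd agents are frozen as obstacles for a single planning tick:

```python
import heapq, itertools

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an occupied cell
    (here: a crowd agent frozen in place for one planning tick).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    tie = itertools.count()                                  # heap tie-breaker
    frontier = [(h(start), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while frontier:
        _, g, _, cur, parent = heapq.heappop(frontier)
        if cur in came_from:          # already expanded via a cheaper route
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, next(tie), (nr, nc), cur))
    return None

# A small hall with two "agents" blocking the direct route.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 3))
```

With the admissible Manhattan heuristic the returned path is optimal for that tick; replanning every tick approximates avoidance of slowly moving agents.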


A video of a single iteration: https://www.youtube.com/watch?v=YAjKelmA9mM


'''Conclusions'''


We observed that the crowd generated by the Social Force model was indeed indicative of a typical crowd in Atlas. This raised the problem that the crowd, although representative, does not comply with the assumptions the robot makes in order to navigate a dense crowd. As the scope of the described behaviour ends at these assumptions, the implemented behaviour of the robot simply does not generate adequate results in terms of safety and performance. The behaviour described earlier assumes a laminar flow of people to navigate, while the streams that occur in the Atlas setting are often only partially laminar. Especially when streams cross, and around congested chokepoints, this assumption simply does not hold. Additional implementation would be required to deal with non-laminar or generally omni-directional crowd flows.


We conclude that this is the reason why the robot does not always follow flows and avoid bumps: the scenarios we chose to focus the behaviour on are mixed with other scenarios, such as crossing a crowd, that are not explicitly considered. In those cases the robot resorts to its basic non-crowd routine of following the most efficient path, which effectively bypasses the behaviour whose safety we wish to test. From this simulation we can therefore only conclude that simplistic pathfinding behaviour with obstacle avoidance is sufficient to generate safe behaviour for navigating sparsely populated areas of Atlas.


To show the safety of the behaviour itself, we thus decided to create more focused environments that force compliance with the robot's assumptions about the crowd.


===Simulation: Micro-simulations===
[[File:SUDDEN STOP.png|thumb|1036x1036px|Screenshot showing the micro simulation, where the robot is following a person that is suddenly stopping]]
To test the safety of the robot's behaviour implementation, we created specific scenarios that are better suited to showcase the intended behaviour of the robot and that together cover a large subset of the problems the robot can solve.
These scenarios were specifically created after running the Social Force simulation and are controlled instances of situations the robot agent encountered during its navigation in that simulation.


The advantage of these scenarios is that they are altered to force compliance with the robot's assumptions about the crowd, as described in the scenario and behaviour sections of this wiki.
As a consequence, the robot shows the behaviour as described, and the safety of that behaviour in situations encountered in the Atlas-representative model can be properly tested.


Each scenario was run a total of 10 times: 5 times to observe the robot's behaviour, and another 5 times to obtain force measurements during any collisions that occur. The parameters are identical in each iteration.


'''Micro scenario - sudden stop'''


This micro scenario focuses on our second scenario, "Stalled lead", in which the robot is following a person who suddenly stops. A row of persons is placed on each side so the robot cannot move around its lead, forcing it to slow its pace and eventually bump into the person until they move.
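The slowing behaviour in this scenario can be sketched as a clamped proportional speed law (all distances and speeds below are illustrative assumptions, not values from the simulation):

```python
def follow_speed(distance_to_lead, v_pref=1.4, d_stop=0.4, d_follow=1.2):
    """Commanded forward speed [m/s] while following a lead.

    Beyond d_follow the robot moves at its preferred walking pace v_pref;
    between d_stop and d_follow it slows linearly; inside d_stop it creeps
    forward at a fixed gentle speed, so that any bump carries minimal force.
    All distances and speeds are illustrative assumptions."""
    v_creep = 0.1  # gentle "nudging" speed used when bumping is unavoidable
    if distance_to_lead >= d_follow:
        return v_pref
    if distance_to_lead <= d_stop:
        return v_creep
    frac = (distance_to_lead - d_stop) / (d_follow - d_stop)
    return v_creep + frac * (v_pref - v_creep)
```

When the lead stalls, the distance shrinks through the linear band and the commanded speed decays toward the creep speed, which is what produces the low-force bump described above.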


A video of the scenario in development is shown here: https://www.youtube.com/watch?v=rcPF2ZiYqlw

{| class="wikitable"
|+Simulation collision measurements
!duration [frames]
!impulse magnitude [N·s]
!nr of collisions
!average force [N]
|-
|44
|67.0125
|8
|91.3807
|-
|38
|56.0443
|6
|88.4910
|-
|41
|62.1002
|8
|90.8783
|-
|41
|63.2706
|8
|92.5911
|-
|40
|59.3228
|6
|88.9842
|}
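The columns of the measurements table are mutually consistent: at 60 frames per second, the average force equals the impulse divided by the collision duration in seconds. A quick sketch checking the first two rows:

```python
FPS = 60  # simulation frame rate underlying the duration column

def average_force(impulse_ns, duration_frames, fps=FPS):
    """Average collision force [N] = impulse [N*s] / duration [s]."""
    return impulse_ns / (duration_frames / fps)

# First row of the table: 44 frames, 67.0125 N*s -> ~91.38 N, matching the table.
f = average_force(67.0125, 44)
assert f < 150  # the ISO force threshold referenced in the results
```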
'''Micro scenario - intersecting agents'''


During this scenario, the robot is following a person while another person crosses the space between the robot and its lead. The robot shows it is capable of detecting the crossing person in time, and reacts by slowing to a near halt to let the crossing person pass in front of it. When the person has passed, the robot accelerates, and we observe that it returns to the same following distance as before.
[[File:CROSSING.png|thumb|1042x1042px|Screenshot showing the second micro simulation, where the robot is cut off by a crossing person while following a lead.]]
We observed that the robot is now indeed capable of avoiding collision, instead of resorting immediately to bumping. As a result, none of the 10 iterations run on this scenario resulted in any collisions.


A video of the scenario in development is shown here: https://youtu.be/J_IOsJ16ifs


'''Micro scenario - Results'''


During the above scenarios, a script attached to the robot measured the number of collision events, the longest collision duration in frames (where 60 frames are 1 second), and the largest impulse measured during the collisions.


The evaluating script shows that for 5 separate iterations of the first scenario, the average force applied is well below the 150 newton threshold. It should be noted that the script yields the largest impulse and the longest collision duration of each run. The number of collision events is computed using the convex rigid body of the robot mesh, which means the reported number of collisions is likely an overestimate, as the convex hull encapsulates space below the whiskers that is not actually occupied by the robot.


'''Conclusion'''


The micro simulations show that, if the assumptions on the robot behaviour are met, the total average force applied during the simulation is below the 150 N threshold laid out in the ISO standard. In addition, the simulation shows that the behaviour of the robot successfully avoids contact in crowds unless contact is required, satisfying the force limitation requirements in the ISO standard. We thus conclude that the proposed behaviour in this concept is safe, provided the crowd behaviour is captured by the scenarios previously discussed in this document.


<br />


==Conclusion==


===Project findings===
To the research question 'How should robots socially navigate through unidirectional pedestrian crowds while guiding visually impaired users?' we have given an answer in the form of the various behavioural descriptions provided under the scenarios. The micro-simulations show that it is safe to act in accordance with at least some of these behavioural rules. The simulations should, however, not be seen as definitive proof, because they use Unity's physics engine, which lacks any kind of material simulation. To verify the safety claims made in this project it would be best to run actual material simulations to find exact pressures. Furthermore, most of the behaviour has not been tested.


Overall, this behaviour has its uses: a navigation method like this, which does not rely on perfect information, allows the robot to neglect some observations, simplifying the sensors that are necessary. It also makes the robot more robust to small changes; for example, a non-living obstacle will not change how the robot behaves.


===Future research===
The behaviour as described in the scenarios should be implemented in a more advanced simulation. This can be done in a discrete manner (rule-based agent) or a more inspired manner (utility-based or learning agent, for which the descriptions would act more like a guideline).


The acceptance of the design by crowds and users should be verified; this is a point which was lacking in this research. César López has mentioned that this can be designed for using established research as a guideline, but is finally verified with a physical prototype and a survey designed for such research.


The design could also be made more detailed by adding any of the assumed working pieces mentioned in the problem scoping, including behaviour for different kinds of dense crowds:


*Localization of the guide
*Identification of obstacles or other persons
*Navigation in sparse crowds
*Navigation in dense crowds
*Overarching strategic planning (e.g., navigating between multiple floors or buildings)
*Interaction with infrastructure (e.g., Doors, elevators, stairs, etc.)
*Effective communication with the user (e.g., user being able to set a goal for the guide)


Any of the behavioural changes or additions would require some kind of transitional system to switch between them. López mentioned that this can be done by selecting the behavioural model for which all conditions are met, while implementing a general navigation method is a good way to make sure the guide always has something to fall back on.
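The selection mechanism López describes can be sketched as an ordered list of (conditions, behaviour) pairs with general navigation as the unconditional fallback. All condition names and thresholds below are hypothetical illustrations, not part of the project's implementation:

```python
def select_behaviour(world, behaviours, fallback):
    """Return the first behaviour whose conditions all hold for the current
    world state; otherwise fall back to general navigation."""
    for conditions, behaviour in behaviours:
        if all(cond(world) for cond in conditions):
            return behaviour
    return fallback

# Hypothetical conditions on the observed crowd.
dense_crowd = lambda w: w["crowd_density"] > 1.0      # persons per m^2
unidirectional = lambda w: w["flow_alignment"] > 0.8  # how parallel the flow is

behaviours = [
    ([dense_crowd, unidirectional], "follow_flow"),
    ([dense_crowd], "careful_crossing"),
]

world = {"crowd_density": 1.5, "flow_alignment": 0.9}
mode = select_behaviour(world, behaviours, "general_navigation")  # "follow_flow"
```

Ordering the list from most to least specific keeps the switching rule deterministic, and the fallback guarantees the guide always has a behaviour to execute.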


Finally, the risks and hazards of this design (such as mechanical failure) should be worked out in even more detail.


===Project evaluation===
First, it is important to note that what is presented in this report is not a full 8 weeks of work for 6 students. This is due to the change of subject after 2 weeks, and the further 2 weeks it took to narrow the problem statement down enough to work on. This left 4 weeks in which a lot of work was done. During these remaining weeks, after the second meeting with López, it became clear that the scenarios that had been worked out were too extensive and fell outside the scope of the project: walking along a unidirectional crowd.


After the final presentation there was a final meeting with César López in which the end result was evaluated; some of the main points are discussed here.


For this type of research, safety is usually taken care of in the design process before development, by using predetermined safety standards for such products. Due to time constraints, only a small safety study was done alongside the making of the simulation. At the moment there is no in-depth safety analysis in which possible hazards are identified and their risks and consequences determined. The main focus of the design is based on research into what might work when navigating a robot through a crowd.


Furthermore, the simulation that was designed should have been more constrained from the beginning, to fit the chosen problem. This again shows that the scoping of the research question should have been done earlier in the project, which would have allowed the assumptions for the behaviour to be met. A simulation with clear, satisfied assumptions allows the behaviour of the design to be formed more intelligently, using a more iterative process, instead of the current methods.


==Appendix==


===Code:===
The code for the simulation can be found in the following github page: https://github.com/JJellie/VirtualCrowdSim


Here, some papers used in the research into the guide robot are summarized. These papers mostly cover the state of the art of the hard- and software of guide robots and of crowd navigation. The summaries can be read to get a deeper understanding of the state of the art.


===Literature Research===
{| class="wikitable"
|+Overview
!Paper Title
!Reference
!Reader
|-
|Modelling an accelerometer for robot position estimation
|<ref>Z. Kowalczuk and T. Merta, "Modelling an accelerometer for robot position estimation," 2014 19th International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 2014, pp. 909-914, doi: 10.1109/MMAR.2014.6957478.</ref>
|Jelmer S
|-
|An introduction to inertial navigation
|<ref>Woodman, O. J. (2007). ''An introduction to inertial navigation'' (No. UCAM-CL-TR-696). University of Cambridge, Computer Laboratory.</ref>
|Jelmer S
|-
|Position estimation for mobile robot using in-plane 3-axis IMU and active beacon
|<ref>T. Lee, J. Shin and D. Cho, "Position estimation for mobile robot using in-plane 3-axis IMU and active beacon," 2009 IEEE International Symposium on Industrial Electronics, Seoul, Korea (South), 2009, pp. 1956-1961, doi: 10.1109/ISIE.2009.5214363.</ref>
|Jelmer S
|-
|Stepper motors: fundamentals, applications and design
|<ref>Athani, V. V. (1997). ''Stepper motors: fundamentals, applications and design''. New Age International.<br /></ref>
|Joaquim
|-
|Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities
|<ref>https://arxiv.org/pdf/1903.01067v2.pdf</ref>
|Jelmer L
|-
|Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization
|<ref><nowiki>http://www.roboticsproceedings.org/rss09/p37.pdf</nowiki></ref>
|Jelmer L
|-
|Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry
|<ref>https://www.robots.ox.ac.uk/~mobile/drs/Papers/2022RAL_zhang.pdf</ref>
|Jelmer L
|-
|Optical 3D laser measurement system for navigation of autonomous mobile robot
|<ref>Luis C. Básaca-Preciado, Oleg Yu. Sergiyenko, Julio C. Rodríguez-Quinonez, Xochitl García, Vera V. Tyrsa, Moises Rivas-Lopez, Daniel Hernandez-Balbuena, Paolo Mercorelli, Mikhail Podrygalo, Alexander Gurko, Irina Tabakova, Oleg Starostenko (2013), Optical 3D laser measurement system for navigation of autonomous mobile robot, https://www.sciencedirect.com/science/article/pii/S0143816613002480</ref>
|Boril
|-
|A mobile robot based system for fully automated thermal 3D mapping
|<ref> Dorit Borrmann, Andreas Nüchter, Marija Ðakulović, Ivan Maurović, Ivan Petrović, Dinko Osmanković, Jasmin Velagić,  A mobile robot based system for fully automated thermal 3D mapping (2014), https://www.sciencedirect.com/science/article/pii/S1474034614000408 </ref>
|Boril
|-
|A review of 3D reconstruction techniques in civil engineering and their applications
|<ref>Zhiliang Ma, Shilong Liu, A review of 3D reconstruction techniques in civil engineering and their applications (2018), https://www.sciencedirect.com/science/article/pii/S1474034617304275?casa_token=Bv6W7b-GeUAAAAAA:nGuyojclQld2SMnIeHougCByarFJX7eu049kMp_IWrnU5e8ljX9RMao-U4vs6cB3nREk8JP3qIA</ref>
|Boril
|-
|2D LiDAR and Camera Fusion in 3D Modeling of Indoor Environment
|<ref> Juan Li, Xiang He, Jia L,  2D LiDAR and camera fusion in 3D modeling of indoor environment (2015), https://ieeexplore.ieee.org/document/7443100 </ref>
|Boril
|-
|A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR
|<ref>https://www.mdpi.com/2072-4292/14/12/2835</ref>
|Jelmer L
|-
|An information-based exploration strategy for environment mapping with mobile robots
|<ref>Francesco Amigoni, Vincenzo Caglioti, An information-based exploration strategy for environment mapping with mobile robots, Robotics and Autonomous Systems, Volume 58, Issue 5, 2010, Pages 684-699, ISSN 0921-8890, <nowiki>https://doi.org/10.1016/j.robot.2009.11.005</nowiki> (<nowiki>https://www.sciencedirect.com/science/article/pii/S0921889009002024</nowiki>)</ref>
|Jelmer S
|-
|Mobile robot localization using landmarks
|<ref>M. Betke and L. Gurvits, "Mobile robot localization using landmarks," in IEEE Transactions on Robotics and Automation, vol. 13, no. 2, pp. 251-263, April 1997, doi: 10.1109/70.563647.</ref>
|Jelmer S
|-
|The Fuzzy Control Approach for a Quadruped Robot Guide Dog
|<ref name="The Fuzzy Control Approach for a Quadruped Robot Guide Dog">https://link.springer.com/article/10.1007/s40815-020-01046-x?utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot</ref>
|Wouter
|-
|Design of a Portable Indoor Guide Robot for Blind People
|<ref name="Design of a Portable Indoor Guide Robot for Blind People">https://ieeexplore.ieee.org/document/9536077</ref>
|Wouter
|-
|Guiding visually impaired people in the exhibition
|<ref name="Guiding visually impaired people in the exhibition">Bellotti, F., Berta, R., De Gloria, A., & Margarone, M. (2006). Guiding visually impaired people in the exhibition. ''Mobile Guide'', ''6'', 1-6.</ref>
|Joaquim
|-
|CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People
|<ref name="CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People"> João Guerreiro, Daisuke Sato, Saki Asakawa, Huixu Dong, Kris M. Kitani, Chieko Asakawa, Designing and Evaluating an Autonomous Navigation Robot for Blind People (2019), https://dl.acm.org/doi/pdf/10.1145/3308561.3353771 </ref>
|Boril
|-
|Tour-Guide Robot
|<ref name="Tour-Guide Robot"> Asraa Al-Wazzan , Farah Al-Ali, Rawan Al-Farhan , Mohammed El-Abd, Tour-Guide Robot  (2016), https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7462397 </ref>
|Boril
|-
|Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques
|<ref name="Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques"> Debajyoti Bosea, Karthi Mohanb, Meera CSc, Monika Yadavc and Devender K. Saini, Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques  (2021), https://www.tandfonline.com/doi/epdf/10.1080/14484846.2021.2023266?needAccess=true&role=button </ref>
|Boril
|}


====Modelling an accelerometer for robot position estimation====


The paper discusses the need for high-precision models of location and rotation sensors in specific robot and imaging use-cases, specifically highlighting SLAM systems (Simultaneous Localization and Mapping systems).


It highlights sensors that we may also need:
" In this system the orientation data rely on inertial sensors. Magnetometer, accelerometer and gyroscope placed on a single board are used to determine the actual rotation of an object. "


It mentions that, in order to derive position data from acceleration, the signal needs to be doubly integrated, which tends to yield great inaccuracy.


A drawback is that the robot needs to stop after a short time (to re-calibrate) when using double integration, in order to minimize error accumulation:
“Double integration of an acceleration error of 0.1g would mean a position error of more than 350 m at the end of the test”.
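The quoted error growth is easy to reproduce numerically: a constant acceleration error, integrated twice, yields a position error that grows quadratically with time. A sketch (the 30 s duration and 100 Hz rate are illustrative choices; the paper's own test duration is not restated here):

```python
# Double-integrating a constant accelerometer bias of 0.1 g.
G = 9.81
bias = 0.1 * G           # constant acceleration error [m/s^2]
dt, steps = 0.01, 3000   # 30 s at 100 Hz (illustrative values)

v = x = 0.0
for _ in range(steps):
    v += bias * dt       # first integration: velocity error grows linearly
    x += v * dt          # second integration: position error grows quadratically

# Closed form: x ~ 0.5 * bias * t^2 = 0.5 * 0.981 * 30^2 ~ 441 m after only 30 s.
```

This quadratic growth is exactly why the paper recommends periodic stops to re-calibrate.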


An issue in modelling the sensors is that rotation is measured via gravity, which is not influenced by, for example, yaw, and becomes more complicated under linear acceleration.
The paper models acceleration and rotation using various lengthy equations and matrices, and applies noise and other real-world modifiers to the generated data.


It notably uses cartesian and homogeneous coordinates in order to separate and combine different components of their final model, such as rotation and translation. These components are shown in matrix form and are derived from specification of real-world sensors, known and common effects, and mathematical derivations of the latter two.
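The separation and combination of rotation and translation via homogeneous coordinates works as in this minimal 2D sketch (the paper's full model is far more detailed, so this only illustrates the matrix mechanics):

```python
import math

def pose_matrix(theta, tx, ty):
    """Homogeneous 2D transform combining a rotation by theta [rad]
    with a translation (tx, ty) in a single 3x3 matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def apply(T, p):
    """Apply the homogeneous transform T to the 2D point p."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# Rotate a point 90 degrees about the origin, then shift it by (1, 0):
q = apply(pose_matrix(math.pi / 2, 1.0, 0.0), (1.0, 0.0))  # ~ (1.0, 1.0)
```

Chaining such matrices by multiplication is what lets the paper keep rotation and translation as separate, composable components of one model.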


The proposed model can be used to test code for our robot's position computations.


====An introduction to inertial navigation====
This paper (published as a report) is meant as a guide to determining position and other navigation data from inertial sensors such as gyroscopes, accelerometers, and IMUs in general.


It starts by explaining the inner workings of a general IMU and gives an overview of an algorithm used to determine position from the sensors' readings using integration, showing what the intermediate values represent using pictograms.


It then proceeds to discuss various types of gyroscopes, their ways of measuring rotation (such as light interference), and the resulting effects on measurements, which are neatly summarized in equations and tables. It takes a similar approach for linear acceleration measurement devices.


In the latter half of the paper, concepts and methods relevant to processing the introduced signals are explained, and most importantly it is discussed how to partially account for some of the errors of such sensors. It starts by explaining how to account for noise using Allan variance and shows how this affects the values from a gyroscope.


Next, the paper introduces the theory behind tracking orientation, velocity and position. It talks about how errors in previous steps propagate through the process, resulting in the infamously dangerous accumulation of inaccuracy that plagues such systems.


Lastly, it shows how to simulate data from the earlier discussed sensors. Notably, the previously summarized paper already discusses a more accurate and more recent algorithm building on this one.


====Position estimation for mobile robot using in-plane 3-axis IMU and active beacon====
The paper highlights 2 types of position determination: absolute (does not depend on the previous location) and relative (does depend on the previous location). It goes on to highlight advantages and disadvantages of several location determination systems, and then proposes a navigation system that mitigates these flaws as much as possible.


The paper continues by describing the sensors used to construct the in-plane 3-axis IMU:
*an x/y accelerometer
*a z-axis gyroscope


Then, the ABS (active beacon system) is described. It consists of 4 beacons mounted to the ceiling and 2 ultrasonic sensors attached to the robot; the technique essentially uses radio-frequency triangulation to determine the absolute position of the robot. The last sensor described is an odometer, which needs no further explanation.
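The beacon-based absolute positioning can be illustrated with a simplified 2D trilateration sketch. This stand-in uses three beacons and exact distances, whereas the paper uses four ceiling beacons with ultrasonic ranging and filtering; the beacon layout below is an assumption:

```python
import math

def trilaterate(b0, b1, b2, d0, d1, d2):
    """2D position from distances to three beacons. Subtracting the circle
    equations pairwise linearizes the problem into a 2x2 linear system."""
    (x0, y0), (x1, y1), (x2, y2) = b0, b1, b2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    c1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    c2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21  # beacons must not be collinear
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

# Beacons at assumed ceiling positions; the robot is actually at (2, 1).
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
dists = [math.dist((2.0, 1.0), b) for b in beacons]
pos = trilaterate(*beacons, *dists)  # ~ (2.0, 1.0)
```

With noisy real ranges the same linear system is solved in a least-squares sense, which is where the paper's filtering comes in.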


Then, the paper discusses the model used to represent the system in code. Most notably, the system is somewhat easier to understand because the in-plane measurements restrict much of the complexity of the robot's position to 2 dimensions. The paper also discusses the filtering and processing techniques used, such as a Kalman filter to combat noise and drift. The final processing pipeline discussed is immensely complex due to the inclusion of bounce, collision, and beacon-failure handling.
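The role of the Kalman filter (suppressing noise and drift) can be shown with a minimal 1D version; the paper's filter is of course multidimensional and tuned to its sensors, so the variances below are illustrative:

```python
def kalman_1d(measurements, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    """Minimal 1D Kalman filter for a (nearly) constant signal.
    q: process noise variance, r: measurement noise variance,
    x0/p0: initial state estimate and its variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                # predict: uncertainty grows between measurements
        k = p / (p + r)       # Kalman gain: trust in the new measurement
        x += k * (z - x)      # update with the measurement residual
        p *= (1 - k)          # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings around a true value of 1.0 are pulled toward it:
est = kalman_1d([1.2, 0.8, 1.1, 0.9, 1.05, 0.95])
```

The same predict/update structure, with matrices instead of scalars, is what fuses the IMU, beacon, and odometer readings in the paper.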


Lastly, the paper discusses the results of their tests on the accuracy of the system, which showed a very accurate system, even when a beacon is lost.




====Stepper motors: fundamentals, applications and design====
This book goes over what stepper motors are, variations of stepper motors as well as their make-up. Furthermore, it goes in-depth about how they are controlled.


====Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities====
According to the authors, advances in visual-inertial odometry (VIO), the process of determining the pose and velocity (state) of an agent using camera input, have opened up a range of applications such as AR and drone navigation. Most VIO systems use point clouds, and to provide real-time estimates of the agent's state they create sparse maps of the surroundings using power-hungry GPU operations. In the paper, the authors propose a method to incrementally create a 3D mesh of the VIO optimization while bounding memory and computational power.


The authors' approach is to create a 2D Delaunay triangulation from tracked key points and then project it into 3D. This projection can have issues where points are close in 2D but not in 3D, which is solved by geometric filters. Some algorithms update a mesh for every frame, but the authors maintain a mesh over multiple frames to reduce computational complexity, capture more of the scene, and capture structural regularities. Using the triangular faces of the mesh, they are able to extract geometry non-iteratively.
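A geometric filter of the kind described can be sketched in a few lines: triangles from the 2D triangulation are rejected when, after projection to 3D, any edge becomes long (the threshold below is an illustrative assumption, not the authors' parameter):

```python
import math

def filter_triangles(points3d, triangles, max_edge=0.5):
    """Keep a triangle only if all of its edges remain short once its
    vertices are taken in 3D. Points that are close in the image but far
    apart in depth produce long 3D edges and are rejected."""
    def dist(a, b):
        return math.dist(points3d[a], points3d[b])
    return [t for t in triangles
            if all(dist(t[i], t[(i + 1) % 3]) <= max_edge for i in range(3))]

# Three landmarks on a wall, plus one point far in the background:
pts = [(0.0, 0.0, 2.0), (0.3, 0.0, 2.0), (0.0, 0.3, 2.0), (0.1, 0.1, 9.0)]
kept = filter_triangles(pts, [(0, 1, 2), (1, 2, 3)])  # -> [(0, 1, 2)]
```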


Bert Verheijen.
In the next part of the paper, the authors discuss solving the optimization problem derived from the previously mentioned specifications.


Finally, the authors share some benchmarking results on the EuRoC dataset, which are promising: in environments with regularities such as walls and floors the method performs best. The pipeline proposed in this paper provides increased accuracy at the cost of some computation time.
====Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization====
In the robotics community, visual and inertial cues have long been fused using filtering; however, this requires linearization, while nonlinear optimization for visual SLAM increases quality and performance and reduces computational complexity.


'''Van:''' Verheijen, Bert
The contributions the authors claim are: constructing a pose graph without expressing global pose uncertainty, providing a fully probabilistic derivation of IMU error terms, and developing both hardware and software for accurate real-time SLAM.


'''Verzonden:''' Monday, 20 February 2023 09:48
The paper describes in high detail how the optimization objectives were reached and how the nonlinear SLAM can be integrated with the IMU using a chi-square test instead of a RANSAC computation.
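The chi-square acceptance test can be sketched as a Mahalanobis-distance gate on measurement residuals. This is a generic formulation, not the paper's exact derivation; the degrees of freedom and confidence level below are illustrative.

```python
import numpy as np

# 95th-percentile chi-square thresholds by degrees of freedom (standard values).
CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815}

def gate_measurement(residual, covariance, dof=2):
    """Accept a measurement if its squared Mahalanobis distance passes a
    chi-square test, as an alternative to RANSAC-style outlier rejection.
    `residual` is the innovation vector, `covariance` its covariance."""
    r = np.asarray(residual, dtype=float)
    S = np.asarray(covariance, dtype=float)
    d2 = float(r @ np.linalg.solve(S, r))  # squared Mahalanobis distance
    return d2 <= CHI2_95[dof]
```

Residuals consistent with the predicted uncertainty pass the gate; gross outliers are rejected without any sampling-based consensus step.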


'''Aan:''' Schuttert, Jelmer <j.schuttert@student.tue.nl>
Finally, they show results of a test with their developed prototype, which show that tightly integrating the IMU with a visual SLAM system markedly improves performance and decreases the deviation from the ground truth to close to zero percent after 90 m of distance travelled.
====Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry====
The authors of this paper propose an algorithm that fuses feature tracks from any number of cameras together with IMU measurements into a single optimization process, handles feature tracking on cameras with overlapping fields of view, and includes a subroutine that selects the best landmarks for optimization to reduce computation time. They also present results from extensive testing.


'''CC:''' RE Secretariat <REsecretariat@tue.nl>
First the authors give the optimization objective, after which they give the factor-graph formulation with the residuals and covariances of the IMU and visual factors. Then they explain their approach to cross-camera feature tracking: a feature's location is projected from one camera into another using either stereo-camera depth or the IMU estimate, and is then refined by matching it to the closest image feature (by Euclidean distance) in the target camera. Finally, they explain feature selection, which is done by computing a Jacobian matrix and finding a submatrix that best preserves its spectral distribution.
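The refinement-by-closest-feature step can be sketched as a nearest-neighbour search in pixel space. The gating radius below is an assumed parameter, not a value from the paper.

```python
import numpy as np

def match_to_closest_feature(predicted_px, candidate_px, max_dist=5.0):
    """Snap a feature location projected from another camera to the closest
    detected feature in the target camera (Euclidean distance in pixels).

    Returns the index of the matched candidate, or None if no candidate
    lies within `max_dist` pixels (an assumed gating radius)."""
    cands = np.asarray(candidate_px, dtype=float)
    d = np.linalg.norm(cands - np.asarray(predicted_px, dtype=float), axis=1)
    i = int(np.argmin(d))
    return i if d[i] <= max_dist else None
```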


'''Onderwerp:''' FW: campus navigation Project Robot's Everywhere
Finally, experimental results show that their system stays closer to the ground truth than other similar systems.


====Optical 3D laser measurement system for navigation of autonomous mobile robot====
This paper presents an autonomous mobile robot that can detect and avoid obstacles on its path to a goal using a 3D laser navigation system. The paper starts by describing the navigation system, TVS, in high detail. The system uses a rotatable laser and a scanning aperture to form laser-light triangles from the light reflected off an obstacle. Using this method, the authors obtain the information necessary to calculate the 3D coordinates. For the robot base, the authors used the Pioneer 3-AT, a four-wheel, four-motor skid-steer robotics platform.


Hello Jelmer,  
After this, the authors go in depth on how the robot avoids obstacles. Using optical encoders on the wheels and a 3-axis accelerometer, the robot keeps track of its travelled distance and orientation. With IR sensors the robot can detect obstacles a certain distance ahead of it, after which it performs a TVS scan to avoid the obstacle. The trajectory the robot follows to avoid the obstacle is calculated from 50 points in the space in front of it, which form a curve that the robot then follows. Thus, after start-up the robot calculates an initial trajectory to the goal location and recalculates it whenever it encounters an obstacle.
Finally, the authors go over their results from simulating this robot in Matlab and analyse its performance.
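The triangle geometry that yields the coordinates can be illustrated in 2D with the law of sines. This is a simplified sketch of laser triangulation, not the TVS implementation: the emitter sits at the origin, the detector at `(baseline, 0)`, and the two interior angles are measured from the baseline.

```python
import math

def triangulate_point(baseline, alpha, beta):
    """Recover the 2D position of a reflecting point from the laser angle
    `alpha` (at the emitter) and the detector angle `beta`.

    Law of sines on the laser-light triangle: the range from the emitter is
    r = baseline * sin(beta) / sin(alpha + beta)."""
    r = baseline * math.sin(beta) / math.sin(alpha + beta)  # range from emitter
    return r * math.cos(alpha), r * math.sin(alpha)
```

With both angles known, the reflecting point's coordinates follow directly; the rotatable laser and scanning aperture sweep these angles across the scene.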


====A mobile robot based system for fully automated thermal 3D mapping====
This paper showcases a fully autonomous robot that can create 3D thermal models of rooms. The authors begin by describing the robot's components, and how the 3D sensor (a Riegl VZ-400 terrestrial laser scanner) and the thermal camera (an Optris PI160) are mutually calibrated. Both are mounted on top of the robot, together with a Logitech QuickCam Pro 9000 webcam. After acquisition, the 3D data is merged with the thermal and digital images via geometric camera calibration. After that the authors explain the sensor placement. The paper's approach to the memory-intensive problem of 3D planning is to combine 2D and 3D planning: the robot starts off using only 2D measurements, but once it detects an enclosed space it switches to 3D NBV (next best view) planning.
The 2D NBV algorithm starts with a blank map and explores based on the initial scan, where all inputs are range values parallel to the floor, distributed over a 360-degree field of view. A grid map stores the static and dynamic obstacle information, and a polygonal representation of the environment stores its edges (walls, obstacles). The NBV process is composed of three consecutive steps: vectorization (obtaining line segments from the input range data), creation of the exploration polygon, and selection of the next sensor position, i.e. choosing the next goal. Room detection is grounded in the detection of closed spaces in the 2D map of the environment. Finally, the authors showcase the results of their experiments with the robot: 2D and 3D thermal maps of building floors, whose 3D reconstruction is done using the Marching Cubes algorithm.


The development of a robot that supports interior navigation is very welcome for people with disabilities, both students and employees/visitors. Especially people with a visual impairment can then report to a secretariat, where the robot will guide them to the desired location.
====A review of 3D reconstruction techniques in civil engineering and their applications====
This paper presents and reviews techniques for creating 3D reconstructions of objects from the outputs of data-collection equipment. First the authors survey the currently most-used equipment for acquiring 3D data: laser scanners (LiDAR), monocular and binocular cameras, and video cameras, which is also the equipment the paper focuses on. From this they distinguish two categories of camera-based 3D reconstruction: point-based and line-based. Furthermore, the paper divides 3D reconstruction into two steps: generating point clouds and processing those point clouds. For monocular images, generating a point cloud consists of feature extraction, feature matching, camera motion estimation, sparse 3D reconstruction, model parameter correction, absolute scale recovery and dense 3D reconstruction:

*Feature extraction obtains feature points that reflect the initial structure of the scene, using feature point detectors and feature point descriptors.
*Feature matching matches the feature points of each image pair, and camera motion estimation recovers the camera parameters of each image.
*Sparse 3D reconstruction computes the 3D locations of points from the feature points and camera parameters via the triangulation algorithm, generating a point cloud.
*Model parameter correction refines the camera parameters of each image, leading to precise 3D point locations in the point cloud.
*Absolute scale recovery determines the absolute scale of the sparse point cloud using dimensions/points of known absolute scale within it. Finally, all of the above is used to generate a dense point cloud.

For stereo images, the camera motion estimation and absolute scale recovery steps are skipped; instead the camera must be calibrated before feature extraction. After this the authors explain how to generate point clouds from video images. For processing the data, the authors showcase a couple of algorithms: ICP for point cloud processing, PSR for mesh reconstruction, and for point cloud segmentation two categories of algorithms, feature-based segmentation (region growth and clustering, K-means clustering) and model-based segmentation (Hough transform and RANSAC). The authors then go in depth on applications of 3D reconstruction in civil engineering, such as reconstructing construction sites and reconstructing the pipelines of MEP systems. Finally, the authors go over the issues and challenges of 3D reconstruction.
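The sparse-reconstruction step above hinges on triangulation. As an illustrative sketch, linear (DLT) triangulation of one point from two calibrated views can be written as follows; the camera matrices and test point are synthetic values chosen for the example, not taken from the paper, and real pipelines refine the result with bundle adjustment.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null-space of the stacked constraints
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Synthetic check: two known cameras (1 m baseline) observing a known point.
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

With exact synthetic correspondences the point is recovered to numerical precision; with noisy matches the same system is solved in a least-squares sense.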




In the past we have made attempts at interior navigation, but the TU/e had to install relays for this, which made the system too expensive. Currently there are other systems, where the navigation can also be designed internally based on "google street view" with 360-degree photos.
====2D LiDAR and Camera Fusion in 3D Modelling of Indoor Environment====
This paper goes over how to effectively fuse data from multiple sensors in order to create a 3D model. An entry-level camera is used for colour and texture information, while a 2D LiDAR is used as the range sensor. To calibrate the correspondences between the camera and the LiDAR, a planar checkerboard pattern is used to extract corners from the camera image and from the intensity image of the 2D LiDAR; the authors thus rely on 2D-2D correspondences. A pinhole camera model is applied to project 3D point clouds onto 2D planes, and RANSAC is used to estimate the point-to-point correspondence. Using transformation matrices, the authors match the colour images of the digital camera with the intensity images. By aligning 3D colour point clouds from different locations, the authors generate the 3D model of the environment. Via a WidowX turret servo, the 2D LiDAR is moved in the vertical direction for a 180-degree horizontal field of view. The digital camera rotates in both vertical and horizontal directions to generate panoramas by stitching series of images. In the third paragraph the authors go over how they calibrated the two image sources: to determine the rigid transformation between camera images and the 3D point cloud a fiducial target is used, RANSAC is used to reject outliers during the calibration process, and a checkerboard with 7x9 squares is employed to find correspondences between LiDAR and camera. Finally, the authors go over their results.
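The pinhole projection that links LiDAR points to camera pixels can be sketched as below. The extrinsic transform (R, t) is the quantity the checkerboard calibration estimates; the intrinsics used in the usage check are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

def project_points(points_lidar, R, t, K):
    """Project an (N, 3) array of 3D LiDAR points into the camera image
    with a pinhole model. R (3x3) and t (3,) transform LiDAR coordinates
    into the camera frame; K (3x3) holds the camera intrinsics."""
    pts_cam = (np.asarray(points_lidar, dtype=float) @ R.T) + t  # to camera frame
    uvw = pts_cam @ K.T                                          # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]                              # perspective divide
```

A point on the optical axis lands at the principal point; lateral offsets scale with focal length over depth, which is what makes the 2D-2D checkerboard correspondences sufficient for calibration.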


<br />
====A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR====
This paper is a review of multiple SLAM systems whose main vision component is a 3D LiDAR integrated with other sensors. LiDAR, camera and IMU are the three most-used components, each with their own advantages and disadvantages. The paper discusses LiDAR-IMU coupled systems and Visual-LiDAR-IMU coupled systems, both tightly and loosely coupled.


A robot as guidance has not yet been considered, that is new. The indoor navigation system as I mentioned above does.  
Most loosely coupled systems are based on the original LOAM algorithm by J. Zhang et al.; although that paper dates from 2014, there have been many advancements in this technology since. The LiDAR-IMU systems often use the IMU to increase the accuracy of the LiDAR measurements, and new developments speed up the ICP algorithm for combining point clouds with clever tricks and/or GPU acceleration. The LiDAR-Visual-IMU systems use the complementary properties of LiDAR and cameras: vision sensors lack the ability to perceive depth, so the cameras are used for feature tracking and, together with the LiDAR data, allow for more accurate pose estimation.
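The ICP step mentioned above, reduced to its core, solves for the rigid transform between two point sets. A minimal sketch of the closed-form alignment step (Kabsch via SVD) is shown below; full ICP alternates this with nearest-neighbour matching, and the speed-ups in recent systems target exactly that matching loop. Here correspondences are assumed known.

```python
import numpy as np

def align_svd(src, dst):
    """Point-to-point alignment step of ICP with known correspondences:
    the rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||^2,
    solved in closed form via SVD (Kabsch)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```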


We (TU/e) strive to make all areas accessible on the Campus, so that the accessibility of areas for the robot should not pose any problems. As indicated, I see the support of a robot in particular as an aid for people with a visual impairment.  
In contrast to the speed and low computational complexity of loosely coupled systems, tightly coupled systems sacrifice some of this for greater accuracy. One of the main points of these systems is a derivation of the error term and pre-integration formula for the IMU, which can be used to increase the accuracy of IMU measurements by estimating the IMU bias and noise. In LiDAR-IMU systems this derivation is used for removing distortion in LiDAR scans, optimizing both measurements, and many different approaches to coupling the two sensors for greater accuracy and computation speed. The LiDAR-Visual-IMU systems use the strong correlation between images and point clouds to produce more accurate pose estimation.


The authors then do performance comparisons on SLAM datasets, where most recent SLAM systems appear to estimate pose very close to the ground truth, even over distances of several hundred meters.


At the Faculty of BE, Mrs. Masi Mohammadi holds the chair of empathic living environment and she may also be interested in this development.  
====An information-based exploration strategy for environment mapping with mobile robots====
This paper proposes a mathematically oriented way of mapping environments. Based on relative entropy, the authors develop a method to produce a planar map of an environment, using a laser range finder to generate local point-based maps that are compiled into a global map. Notably, the paper also discusses how to localize the robot in the produced global map.


The generated map is a continuous curve that represents the boundary between navigable spaces and obstacles. The curve is defined by a large set of control points which are obtained from the range finder. The proposed method involves the robot generating and moving to a set of observation points, at which it takes a 360-degree snapshot of the environment using the range finder, finding a set of points several specified degrees apart, with some distance from the sensor. The measured points form a local map, which is also characterised by the given uncertainty of the measurements. Each local map is then integrated into the global map (a combination of all local maps), which is then used to determine the next observation point and position of the robot in global space.


Kind regards,
The researchers go on to describe how the quality of the proposal is measured, namely by the distance travelled and the uncertainty of the map. The uncertainty is a function of the uncertainty in the robot's current position and the accuracy of the range finder. The robot has a pre-computed expected position of each point and a post-measurement position of each point, which are compared through relative entropy to compute the increment of the point information. This and similar equations for the robot's position data are used to select the optimal points for observing the environment.
Lastly, the points of each observation point are combined into one map, using the robot's position data.
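The relative-entropy criterion can be illustrated in one dimension: for Gaussian beliefs, the information increment between a predicted and an updated point estimate is their KL divergence. The scalar version below is a sketch of the quantity involved; the paper's formulation is multivariate.

```python
import math

def kl_gaussian(mu1, s1, mu2, s2):
    """Relative entropy (KL divergence) D(N(mu1, s1^2) || N(mu2, s2^2))
    between two 1D Gaussians, in nats. Zero iff the beliefs coincide;
    larger values mean the measurement carried more information."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
```

Observation points that are expected to shift or sharpen the point estimates the most (largest divergence) are preferred, traded off against the distance the robot must travel.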


Bert Verheijen
====Mobile Robot Localization Using Landmarks====
The paper discusses a method to determine a robot's position using landmarks as reference points, a more absolute system than purely inertia-based localization. The paper assumes that the robot can identify landmarks and measure their positions relative to each other. Like other papers, it motivates its importance by the error accumulation of relative methods.


It highlights the robot's capability to:

*find landmarks,
*associate landmarks with points on a map,
*use this data to compute its position.


'''From:''' Schuttert, Jelmer <[mailto:j.schuttert@student.tue.nl j.schuttert@student.tue.nl]>
It uses triangulation between 3 landmarks to find its position, with low error. The paper also discusses how to re-identify landmarks that were misjudged, using new data. The robot takes 2 images (using a reflective ball to create a 360-degree image) and solves the correspondence problem (identifying an object from 2 angles) to find its location. In the paper, the technique is tested in an office environment.


'''Sent:''' Friday, 17 February 2023 13:03
The paper discusses how to perform triangulation using an external coordinate system and the localisation of the robot. The vectors to the landmarks are compared, and using their angles and magnitudes the position can be computed. Next, the paper discusses the same technique adjusted for noisy data: it uses least squares to derive an estimate, evaluating the robot's rotation relative to at least 2 landmarks.
The paper then evaluates the expected distribution of the angle error and of the position on each axis, to correct for the noise using the method described above.
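The least-squares localisation from landmark measurements can be sketched with a bearings-only variant. This is a simplification of the paper's formulation: absolute bearings to known landmark positions are assumed, and each bearing constrains the robot to a line.

```python
import math
import numpy as np

def locate_from_bearings(landmarks, bearings):
    """Least-squares position estimate from absolute bearings to known
    landmarks. A bearing theta to landmark (lx, ly) constrains the robot
    position (x, y) to the line

        sin(theta) * x - cos(theta) * y = sin(theta) * lx - cos(theta) * ly.

    Stacking one such row per landmark gives an overdetermined linear
    system, solved here with numpy's least-squares routine."""
    A, b = [], []
    for (lx, ly), th in zip(landmarks, bearings):
        A.append([math.sin(th), -math.cos(th)])
        b.append(math.sin(th) * lx - math.cos(th) * ly)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y)
```

With three or more non-degenerate landmarks the system averages out measurement noise, which mirrors the paper's least-squares treatment.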


'''To:''' RE Secretariat <[mailto:REsecretariat@tue.nl REsecretariat@tue.nl]>


'''Subject:''' campus navigation Project Robot's Everywhere
====The Fuzzy Control Approach for a Quadruped Robot Guide Dog <ref>The Fuzzy Control Approach for a Quadruped Robot Guide Dog | SpringerLink</ref>====
This paper essentially makes a robot guide dog: think of Spot from Boston Dynamics with a leash, trained to guide blind people. An advantage is that Spot has proven able to walk stairs, so it should be fast. A problem is that its low viewpoint makes it hard to guide blind people.


The paper also gives a 'fuzzy' control process which ensures that variation in road surfaces does not affect the dog. The rest of the paper shows how this controller can be designed; it does not show how to guide a blind person.


Dear Real Estate department of TU/e,
Their conclusion shows that the fuzzy algorithm improved how smoothly the dog walks.
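As a flavour of what a fuzzy controller's front end looks like, a triangular membership function maps a crisp input (e.g. body tilt) to a degree of membership in a linguistic set such as "level" or "steep". This is a generic building block; the paper's actual rule base and membership shapes are not reproduced here.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership function: rises linearly from `a` to a
    peak at `b`, then falls linearly to `c`. Returns a degree in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

A fuzzy controller evaluates several such memberships, fires rules weighted by those degrees, and defuzzifies the result into an actuator command, which is what smooths the gait over varying road surfaces.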


====Design of a Portable Indoor Guide Robot for Blind People====
This design approaches the guide-dog replacement differently, by not using a quadruped robot. It is mainly aimed at indoor use. The paper also did some research on what blind people need; a survey found, for example, that 90% of respondents worry about obstacles in the air while travelling. The design is essentially a motorized walker with sensors on it.


As part of the Robot's Everywhere course, we are looking into developing a robot-oriented solution to interior navigation.
This robot is foldable, with an unfolded height of 700 mm. The mechanical design is well explained. The design has no real stair-climbing capabilities.


The conclusion stated that the robot did well as a low-cost, convenient-to-carry blind-guide robot with strong perception.


We are focussing our efforts on creating a prototype robot for navigation of the TU/e campus.
====Guiding visually impaired people in the exhibition====
This paper talks about a robotic guide used to help (partially) blind people navigate an exhibition (a noisy, crowded (4 square meters/person), unfamiliar environment). These people are often faced with the challenge of maintaining spatial orientation; ‘the ability to establish awareness of space position relative to landmarks in the surrounding environment’. The paper proposes that supporting functional independence of these people can thus be achieved by ‘providing references and sorts of landmarks to enhance awareness of the surroundings’.


Our current idea can be summarized as "a robot that will guide students and visitors to their destination". Users would follow the robot while it navigates to the correct room.
The technology this paper uses to achieve this is a handheld device capable of radio-frequency localization. To prepare the environment, an RFID sensor was placed for every 300 square meters (~17x17 m area) at points of interest, services and major areas. The paper does not go into the details of how the localization is done, but an educated guess would be that the guiding devices carried by the users are scanned by these fixed sensors, which then communicate to calculate the position of the guided person. Keep in mind this exhibition took place in 2006; they found a resolution of 5 meters (the minimal distance between distinguishable tags).


The interface of the device makes use of hardware buttons, which they consider a solution well suited to visually impaired people. Apart from standard navigation and audio control buttons, the device was also equipped with a button giving quick access to an emergency number.


Via our lecturer, we were pointed to this department for further information and decisions about such campus-related matters as navigation.
In this particular use case, the device guided people using an event system which would ask the user if they wanted to hear a description of their environment. This event would trigger when the handheld device recognized signals from local sensors. The description would include:


*an extended title


We were wondering if such a robot or system was ever considered before as a system for the campus, and if there are any resources and insights that you could provide us with regard to such a system.
*the description of the point of interest


*one or more extended descriptions


Furthermore, we are wondering if you could think of any obstacles that the robot would need to overcome, or requirements that such a robot would need to meet, in order for it to be viable.
*descriptions to invite and spatially guide the user near the featured flowers and plants.


The device would also describe nearby points of interest such as crossroads, entrances, exits, restaurants and toilets, so that the user can create their own mental map of the surroundings, allowing them to build and follow their own path, unconstrained by the predefined route.


Lastly, we wonder if there are any previous projects that were implemented on campus in order to improve navigation. We'd love to know which aspects have previously been the focus of attention.
To overcome noise, the user was provided with headphones. Another problem was that some users were frustrated by the silence of the device when they were not at a point of interest; this was solved by providing a message stating this.


The device was recognized by the visually impaired users to allow them a large degree of freedom which traditional (fixed) guides do not.


We'd love to hear from you,
The authors end by saying the experience would probably be significantly improved with better localization technology.


====CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People====
This paper goes over the design of an autonomous navigation robot for blind people in unfamiliar environments, and includes the results of a user study done for the product. The robot uses a floorplan with relevant points of interest, a LiDAR, and a stereo camera with convolutional neural networks for localisation, path planning and obstacle avoidance.
=====Design=====
The robot moves as a differential-steered system, with motors controlled by a RoboClaw controller, and allows users to manually push or pull it. It uses a LiDAR and a stereo camera (ZED) and is implemented with ROS (Robot Operating System). It is shaped like a suitcase so that it can blend in with the environment; held on the left side and standing slightly in front of the user, it also simulates a guide dog, which allows the robot to protect the user from collisions.

For mapping, the robot relies on a floorplan map with the locations of points of interest; the environment is mapped beforehand via the LiDAR, which is placed on the frontal edge of the robot. For localisation, the robot estimates its current location using wheel odometry and LiDAR scanning, comparing the real-time scan against the previously generated map using the Adaptive Monte Carlo Localisation (AMCL) package of ROS. In addition, odometry information can be computed using the LiDAR and stereo camera.

For path planning, a path on the LiDAR map is planned based on the user's starting point and destination. To avoid obstacles and navigate a dynamic environment, local low-level pathing is implemented using the navigation packages of ROS. Via a custom algorithm, the robot also considers the space occupied by both itself and the user in its pathfinding. The robot additionally provides haptic feedback: the authors use vibro-tactile feedback (different vibration locations and patterns) on the handle to convey the robot's intent to the user, and buttons on the handle let the user change the robot's speed. After this explanation, the paper goes over the conducted user study and its results.
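The wheel-odometry part of the localisation can be sketched with a standard differential-drive dead-reckoning update. This is a generic midpoint model of the idea, not CaBot's actual ROS odometry node.

```python
import math

def odom_step(x, y, th, d_left, d_right, wheel_base):
    """Dead-reckoning update for a differential-drive base from incremental
    wheel travel (encoder ticks already converted to meters).

    x, y, th   -- current pose (position and heading, radians)
    d_left/d_right -- distance each wheel rolled since the last update
    wheel_base -- distance between the two wheels"""
    d = (d_left + d_right) / 2.0            # distance moved by the base centre
    dth = (d_right - d_left) / wheel_base   # change in heading
    x += d * math.cos(th + dth / 2.0)       # integrate along the midpoint heading
    y += d * math.sin(th + dth / 2.0)
    return x, y, th + dth
```

Equal wheel travel drives the pose straight ahead; opposite travel rotates in place. Accumulated drift from this integration is what the AMCL scan-matching corrects.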


Sincerely,
====Tour-Guide Robot====
This paper introduces a tour-guide robot using Kinect technology. The robot follows tourists wherever they go, avoiding obstacles and providing information. The paper begins by naming some previous implementations of such tour-guide robots: Rhino, Minerva, Asimo, Tawabo, the Toyota tour guide robot, and Skycall. The Kinect is used to determine gestures and spoken commands, as well as for facial recognition; its main parts are an RGB camera, a 3D depth-sensing system, and a multi-array microphone. The platform of the robot has ultrasonic sensors to detect obstacles. RFID is used to detect the RFID cards around the museum to correctly identify each item and play the corresponding audio file. The base robot platform is Eddie.




Jelmer Schuttert
====Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques====
 
This paper reviews existing autonomous campus and tour-guiding robots. SLAM is the most-often used technique, building a map of the environment and guiding the robot to the goal position. Common techniques for robot navigation include human-machine interfaces, speech synthesis, obstacle avoidance and 3D mapping. ROS is an open-source, popular framework for operating autonomous robots, providing services designed for a heterogeneous computer cluster. SLAM is achieved via laser scanners (LiDAR) or RGB-D cameras. The paper names some popular such robots:
on behalf of Group 5 Project Robot’s Everywhere
TurtleBot2: a low-cost, ROS-enabled autonomous robot using a Microsoft Kinect (RGB-D) camera. TurtleBot3 is the upgraded version, which uses LiDAR instead.
 
Pepper: a service robot used for assisting people in public places like malls, museums and hotels. It uses wheels to move.
[mailto:j.schuttert@student.tue.nl j.schuttert@student.tue.nl]
REEM-C: a ROS-enabled autonomous humanoid robot, using an RGB-D camera for 3D mapping.
 
The paper contains useful tables containing information about these robots, as well as popular ROS computing platforms and mapping sensors.
The review also covers the use of LiDAR measurements of the road surface to detect road boundaries, where the existence of curbs is determined with a multiple-model method. It further discusses the use of a Kinect v2 sensor rather than range finders such as 2D LiDAR, as it can produce dense and robust maps of the environment; the sensor is based on the time-of-flight measurement principle and can be used outdoors. Noise models for the Kinect v2 are also introduced for calibration in both axial and lateral directions, taking measurement distance, angle and sunlight incidence into account.
As an example of a tour-guide robot, the paper presents Nao, which provides tours of a laboratory. This robot is more focused on human interaction and can therefore perform and detect gestures.
NTU-1: an autonomous tour-guide robot that gives tours on the campus of National Taiwan University. It is a big robot, weighing around 80 kg, with a two-wheel differential drive actuated by brushless DC motors. It uses multiple sensing technologies, such as DGPS, dead reckoning and a digital compass, which are all fused by way of Extended Kalman Filtering.
For obstacle avoidance and shortest-path planning, 12 ultrasonic sensors are used, allowing the robot to detect objects within a range of 3 meters.
Another robot explored in the paper is an intelligent robot for guiding the visually impaired in urban environments; it uses two laser range finders, GPS, a camera and a compass. Other touring robots explored in the paper are ASKA, Urbano, Indigo, LeBlanc, Konard and Suse.
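The Extended Kalman Filtering used by NTU-1 to fuse DGPS, dead reckoning and compass readings is, at its core, gain-weighted blending of a prediction with a measurement. A scalar sketch of the measurement update (the real filter applies the same idea to full state vectors with Jacobians):

```python
def kalman_update(mean, var, z, z_var):
    """Scalar Kalman measurement update: fuse a predicted state N(mean, var)
    with a measurement z of variance z_var. Returns the posterior mean and
    variance; the posterior variance is always below both inputs."""
    k = var / (var + z_var)                    # Kalman gain
    return mean + k * (z - mean), (1 - k) * var
```

With equal confidence in prediction and measurement the estimate lands halfway between them, and the uncertainty halves; this is why fusing several weak sensors (GPS, odometry, compass) outperforms any one of them.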


Dear {company name}/{department name},

I'm writing to you on behalf of a student robotics project group of the TU Eindhoven. This quartile our group is tasked with developing a robotics project for navigation in public spaces, and we are currently researching the requirements and challenges of such systems. To this end, I was hoping to get your opinion on and experience with public navigation. In your market, making your buildings navigable for clients and visitors is obviously a concern, and we were wondering how public flow is managed. We are looking into developing a solution for guiding small groups and individuals through complex-to-navigate environments, and are therefore wondering what makes a space easier to navigate. Having been inside your establishments, we are curious what considerations had to be made to make your available space so easy to navigate.

First off, we are curious about your experience with the intuitiveness of your environment. Do you believe that it is easy to navigate to a desired location in your building without needing many indicators such as signs, directions or arrows? We are also interested in how effective you perceive signs to be. Do you believe that adding signs to a space makes it easier to navigate? Though we do believe this to be the case, we are specifically interested in how effective the signs are in guiding users to a space. So despite there being visual instructions on how to get to a location, do you feel that users still require directions to easily reach that location?

Then, as we are developing a robotics-oriented solution to spatial navigation, we are wondering how users experience being guided to a location. Would users of your environment rather spend more time figuring out the route themselves, or would they rather be guided to a location?

To close, we are also curious whether your company has ever considered using robots to guide users to a specific location. We are specifically interested in which problems an autonomous guidance robot would need to overcome, and which advantages it would need to offer, to be a viable addition to the navigability of your establishments. We would appreciate any and all feedback and insights you could give us.

Sincerely,

Jelmer Schuttert,
Student, Eindhoven University of Technology
j.schuttert@student.tue.nl


<references />

Latest revision as of 18:00, 10 April 2023

Group members

{| class="wikitable"
!Name
!Student id
!Role
|-
|Vincent van Haaren
|1626736
|Human Interaction Specialist
|-
|Jelmer Lap
|1569570
|LIDAR & Environment mapping Specialist
|-
|Wouter Litjens
|1751808
|Chassis & Drivetrain Specialist
|-
|Boril Minkov
|1564889
|Data Processing Specialist
|-
|Jelmer Schuttert
|1480731
|Robotic Motion Tracking Specialist
|-
|Joaquim Zweers
|1734504
|Actuation and Locomotive Systems Specialist
|}




Type of ETA | Implementation | Advantages | Negatives

• Robotic Navigation Aids | Smart cane
  Advantages: Offers portability and can be used as a normal white cane should the electronics cease to function.
  Negatives: Needs to be compact and lightweight; lacks obstacle information because of its restricted sensing ability; offers little support for wayfinding and navigation, as that would require bigger and bulkier hardware.

• Robotic Navigation Aids | Robotic guide dog / mobile robot
  Advantages: The system gives room for larger hardware, as it does not require the user to carry it.
  Negatives: Complicated mechanics when manoeuvring over stairs and rough terrain.

• Robotic Navigation Aids | Robotic wheelchair
  Advantages: Suitable for the elderly and people with a physical limitation; provides navigation and mobility assistance for elderly visually impaired people who cannot walk on their own, the multi-handicapped, or people with more than one disabling condition.
  Negatives: Safety remains an issue, as user mobility fully depends on the robotic wheelchair; road crossing and stair climbing are difficult circumstances in which the reliability of the wheelchair is of extreme necessity.

• Smartphone solutions | Android apps (maps, image processing)
  Advantages: Mobility/portability; no load or invasive factor, as the only device is the smartphone.
  Negatives: The system depends on the sensors available on the smartphone; it may communicate with an external sensor such as a beacon or an external server, but that limits usage to indoors; requires a certain orientation for image processing, or an internet signal for online maps.

• Wearable Attachments | Eyeglasses, glove, belt, headgear, backpack
  Advantages: Gives the visually impaired user a natural appearance when navigating outdoors.
  Negatives: Too much attention is required, placing a cognitive load on the user; these devices are intrusive, as they cover the ears and involve the use of the hands; users are burdened with the system's weight; requires an extended period of training.

Sourced from: [1]

Furthermore, another state-of-the-art guiding solution was found: a device that uses electronic waypoints installed in the building to localise the user and relay directions and information about the surroundings[2].

A previous attempt was made at the TU/e (our case study) to use this method, but because it required infrastructure to be installed in every building in which it would work, it was never implemented. Therefore, we decided to discard all solutions that require such infrastructure.

Wearable attachments have been discarded because they are inherently invasive: the user has to equip the device themselves. Furthermore, larger attachments with many sensors are ruled out by weight limits, and wearing such a device during extended meetings is impractical. Any such device also requires prior training to operate. For all these reasons, we chose not to pursue wearable attachments.

We decided against smartphone solutions because it would be difficult to make a one-size-fits-all solution due to differing phones and sensors. A slightly more biased reason is that half of our group members are not adept at creating such applications and have no interest in the field. We also worried that we would struggle to create a practical app due to the limitations of phone hardware.

The robotic wheelchair was decided against due to its invasive nature and concerns about the user's autonomy. This solution would also be very bulky, making it unsuited for crowded spaces. Furthermore, the user base will most likely consist of otherwise able-bodied students who do not need such mobility support and might feel uncomfortable using such a device.

A smart cane is not well suited to guiding the user, as its small form factor and weight requirements would make inside-out localisation difficult.

The mobile platform guide robot has a few problems besides its price, the most important being that it has trouble navigating stairs and rough terrain. Luckily, the robot will (for now) only operate indoors in TU/e buildings. The TU/e campus has walk bridges connecting buildings and elevators in (almost) all buildings, which mitigates most of this solution's downsides. These factors make the campus a well-suited place to implement such a guidance robot.


In summary, we chose a robotic guide due to its user accessibility and potential for future improvements. It is a good way for people (visually impaired or not) to be navigated through buildings.

State of the art

The most common tools used by visually impaired people are the white cane and the guide dog. The white cane is used to navigate and to identify obstacles: with its help, its user gets tactile information about the environment, allowing them to explore their surroundings and detect obstacles. However, its use can be cumbersome, as it can get stuck in cracks or tiny spaces, and its efficiency is limited in bad weather or in a crowd.[3] The guide dog, on the other hand, can guide the user along familiar paths while avoiding obstacles. It can assist with locating steps, curbs and even elevator buttons, and can keep its user centred when crossing sidewalks, for example.[4][5] There are a couple of issues with guide dogs, however. They can only work for 6 to 8 years and are very costly to train.[6] That training also requires constant maintenance, and the dog can get sick. Another potential issue is bystanders who pet or take an interest in the dog while it is working, which is a detriment to the handler.[7]

None of these tools can efficiently assist the person in navigating to a specific landmark in an unknown environment.[8] That is why a human assistant is currently preferred, or needed, for such a task, for example when walking in a museum.[9] As for technological means, there is currently no robot capable of efficiently performing such a task, especially if the environment is a crowded building. However, multiple robots have implemented parts of this function. In the following paragraphs we have divided them into sections for ease of reading.

Tour-guide robots

We begin with the tour-guide robots. These robots are used in places such as museums, university campuses and workplaces. Their objective is to guide a user to a destination, where they will most often relay information about the object, exhibition or room at that destination. In terms of implementation, these robots use a predefined map of the environment in which digital beacons mark the landmarks and points of interest. They also often detect and avoid obstacles using laser scanners (such as LiDAR), RGB cameras, depth cameras or sonars. This research paper[10] goes in depth on the advances in this field over the past 20 years, the most notable robots being "Cate" and "Konrad and Suse". As our goal is to guide visually impaired people through the TU/e campus, this field of robotics is of utmost interest for the navigation system of a guidance robot.

Aid technology for the visually impaired

This section is split into two. First, we cover guidance robots for the visually impaired, after which we cover other technological aids that have been created for this user group.

Guidance robots

Guidance robots for the visually impaired are very similar to tour-guide robots. They often use much the same technology to navigate through the environment (a predefined map with landmarks, plus obstacle detection and avoidance). What differentiates them from tour-guide robots is that their shape and functionality are adapted to the needs of the visually impaired. The robots have handles or leashes which the visually impaired can hold, much like a guide dog or a white cane. As the user cannot see, the designs incorporate ways of communicating the robot's intent to the user, as well as ways of guiding the user around obstacles together with the robot. One example is the CaBot[11], a suitcase-shaped robot that stands in front of the user; it uses a LiDAR to analyse its environment and haptic feedback to inform the user of its intended movement pattern. Another design is the quadruped robot guide dog[12], which, based on Spot, could be used as a robotic guide dog given some adjustments. Finally, there is a design for a portable, low-cost indoor guide robot[13] that also alerts the user to obstacles in the air.

Crowd-navigation robots

As our design has the objective of guiding the user through a university campus, it is reasonable to expect crowds of students at certain times of the day. For our design to be helpful, it needs to handle such situations efficiently. Thus, we took inspiration from the smaller robotics field of crowd navigation. The goal of these robots is exactly that: to keep moving through a crowd rather than freezing up every time there is an obstacle in front of them. Relevant research includes the paper "Unfreezing the Robot: Navigation in Dense, Interacting Crowds"[14] and a robot that navigates crowds using deep reinforcement learning[15].

User scenarios

To get a better feeling for the problem and its possible solutions, two user scenarios were made that show the impact of a guide robot on visually impaired people who want to move through unknown, crowded spaces. The designs described in these stories are not what we ended up building, but the intended goal is the same: both the stories and our final solution aim to expand the navigational tools a guidance robot has in crowded spaces. Note that some aspects of the robot described here fall outside the scope of the exact problem we solved.

Physical contact through crowded spaces

Jack is partially sighted and can see only a small part of what is in front of him. He has recently been helping fellow students with field tests of a robot guide. Last month he worked with a robot called Visior, which helps steer him through his surroundings. Visior is inspired by, and shares its physical features with, the CaBot.

When Jack used Visior to get to the library to pick up a print request, he had to pass through a moderately crowded Atlas building, since there was an event going on. This went mostly as expected: not too fast, and having to stop semi-periodically because of people walking or stopping in front of Visior. The robot was strictly disallowed from purposely making physical contact with other humans. Jack knows this, so he learned to step up in these situations and kindly ask the people in front to make way. This used to happen less when he used his white cane, since people would easily identify him and his needs. After Jack arrived at the printing room in MetaForum, he picked up his print request and handily put the batch of paper on top of his guiding robot, so he didn't have to carry it himself.

On his way back he almost fell over his guiding robot when it suddenly stopped as a hurried student ran by. Luckily, he did not get hurt. When Jack came home after this errand, he crashed on his couch, exhausted from a trip spent anticipating the robot's quirky behaviour.

The next day the researchers and developers of Visior came to ask about his experiences. Jack told them about his experience with Visior and their trip to the library. The developers thanked him for his feedback and started working on improving Visior.

This week they came back with the new and improved Visior. This version has a softer exterior and now rides in front of Jack instead of by his side. The developers have made it capable of safely bumping into people without causing harm. They have also made it tell Jack when it thinks it might have to stop suddenly, to put him more at ease when traveling together.

The next day Jack again made a trip to the printing space in MetaForum to compare the experience. When passing through the crowded Atlas (there somehow always seems to be an event there), he was pleasantly surprised. He found it easier to trust Visior now that it could communicate the points in the trip where it thought they might have to stop or bump into other pedestrians. For example, in a slightly more crowded space, Visior guided Jack to walk alongside a flow of other pedestrians and made him aware of the somewhat unpredictable nature of their surroundings. Then, when a student suddenly tried to cross their path without looking, Visior unfortunately bumped into their side and gradually slowed to a halt. Jack obviously felt the bump, but easily stayed stable thanks to the prior warning and the gradual decrease in speed. The student, now naturally aware of something moving in their blind spot, immediately stepped out of the way and looked at Jack and Visior, seeing the sticker stating that Jack was visually impaired. Jack asked if they were alright; they said they were fine, and both went on their way. After picking up his print, Jack headed back home. On the way he had to pass through the small bridge between MetaForum and Atlas, where a group of people were talking, blocking a large part of the walking space. Visior guided Jack to a small traversable path open beside the group, taking the risk that someone would move slightly into their path. Visior and Jack could luckily squeeze by without any trouble, and the way back home was further uneventful.

When the developers of Visior came back the next day to check up on him, Jack told them the experience was leagues better than before. He found walking with Visior less exhausting, and its behaviour more human-like, making it easier to work with.

Familiar guidance advantage

Meet Mark from Croatia. He is a minor student following mathematics courses and lives on (or near) campus. Mark is severely near-sighted; having been born with the condition, he has never seen very well. Mark is optimistic but chaotic. He likes his studies and likes playing the piano.

Notable details: Mark uses a white cane and audio-visual aids to assist with his near-sightedness. He just transferred to the TU/e for a minor and doesn't know many people yet; he will only be here for a short time. He has a service dog at home, but does not have the resources, time or connections to provide for it here, so he left it at home.

Indoors, Mark finds it hard to use his cane because of crowded hallways, and he dislikes having to apologize when hitting someone with his cane or being an inconvenience to his fellow students. Mark can read and write English fine, but still feels the language barrier.

In a world without our robot, Mark might have to navigate like this: Mark has just arrived for his second day of lectures and will be going to the wide lecture hall Atlas -0.820. Mark again managed to walk to Atlas (as we will not be tackling exterior navigation) and uses his cane and experience to navigate the stairs and revolving door of Atlas, using the cane to determine the speed and size of the revolving element to get in, and to determine the position of the doors and the opening[16].

Once inside, he is greeted by a fellow student who noticed him navigating the door. Mark had already started concealing the use of his cane, as he doesn't like the attention, so the university staff didn't notice him. Luckily, his fellow student is more than willing to help him get to his lecture hall. Unfortunately, the student is not well versed in guiding the visually impaired, and it has gotten busy with students changing rooms.

Mark is dragged along to the lecture hall by his fellow student, bumping into other students who don't notice he cannot see them, as his guide hastily pulls him past. Mark almost loses his balance when his guide slips past some other students, narrowly avoiding a trashcan while dragging Mark by his arm. Mark didn't see the trashcan, which is not at eye level, and collides with its metal frame while trying to copy the movement of his guide to dodge the other students. He is luckily unharmed, and manages to follow his guide again until he is finally able to sit in the lecture hall, ready to listen for another day.

The next day a student sees Mark struggling with the door and shows him a guide robot. The robot has the task of getting Mark to the lecture hall he needs to be in. It starts moving and communicates its intended speed and direction through feedback in the handle. As a result, Mark can anticipate the route the robot will take, similar to how a human guide would apply force to Mark's hand to change direction.

The robot reaches the crowd of students moving through the busy part of Atlas. Its primary objective is to get Mark through it, and even though many students notice the robot passing through, it still uses clear audio indications to warn students that it will be moving through, and notifies Mark through the handle that it is entering an alternate mode. Mark notices and becomes alert, as he also feels that the robot reduces the number of turns it makes, navigating through the crowd by the most straightforward route it can take. Mark likes this: it makes it easy for him to follow, and for others to avoid them.

Still, a sleepy student bumps into the robot as it is crossing. Luckily, the robot is designed for contact with other students: its rounded shape, enclosed wheels (and other moving parts) and softened bumpers prevent harm. The robot does, however, slightly reduce its pace and makes an audible noise to let the sleepy student know they bumped into it too hard. Mark also notices the collision, partly because the bump makes the robot shake a little and lose a bit of pace, but mainly because his handle clearly and alarmingly notifies him. Mark also knows the robot will continue, as the feedback from the handle indicates that it is not stopping.

After the robot gets through the crowd, it makes it to the lecture hall. It parks just in front of the door and tells Mark to extend his free hand slightly above hip level, telling him they have arrived at a closed door that opens towards them, swinging to his right, similarly to how a human guide would, so that Mark can grab the door handle and, with the support of the robot, open the door. The robot precedes Mark slowly into the space; it goes a bit too fast, though, and Mark applies force to the handle, pulling it slightly in his direction. The robot notices this and waits for Mark.

After they enter the lecture hall, the robot asks the lecturer to guide Mark to an empty seat (and may provide instructions on how to do so). When Mark is seated, the robot returns to its spot near the entrance, waiting for the next person.

Problem statement

The previous problem statement was quickly found to be too broad. During the state-of-the-art research, it became clear that the problem consists of a plethora of sub-problems whose solutions all have to work in tandem to create a functional whole. For this reason, it is important to scope the problem as much as possible to create a manageable project. Throughout our research on guidance robots, the following sub-problems were identified:

  • Localization of the guide
  • Identification of obstacles or other persons
  • Navigation in sparse crowds
  • Navigation in dense crowds
  • Overarching strategic planning (e.g. navigating between multiple floors or buildings)
  • Interaction with infrastructure (e.g. Doors, elevators, stairs, etc.)
  • Effective communication with the user (e.g. user being able to set a goal for the guide)

We decided to focus on 'navigation of guidance robots in dense crowds on the TU/e campus'. This was chosen because such a 'skill' (an ability the guide can perform) is necessary for navigation on campus. Typical scenarios in which such a skill would be useful for a typical student are on-campus events, navigating in and out of crowded lecture rooms, or simply a crowded bridge or hallway. Besides being necessary, it is also an active field of study without a clear final solution yet[17]. Mavrogiannis et al.[17] define the task of social navigation as 'to efficiently reach a goal while abiding by social rules/norms'.
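Purely as an illustration of this definition, and not as anything taken from the cited paper, social navigation can be sketched as a planner that scores candidate paths on efficiency plus a penalty for violating a social norm, here intruding on pedestrians' personal space. The function name, cost terms, weights and radii below are all our own assumptions:

```python
import math

def path_cost(path, pedestrians, speed=1.0, w_social=2.0, comfort_radius=0.6):
    """Score a candidate path: travel time plus a social penalty for every
    waypoint that intrudes on a pedestrian's comfort radius.
    path: list of (x, y) waypoints; pedestrians: list of (x, y) positions."""
    # Efficiency term: total path length divided by walking speed.
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    time_cost = length / speed
    # Social term: grows as waypoints come closer than comfort_radius.
    social = 0.0
    for p in path:
        for q in pedestrians:
            d = math.dist(p, q)
            if d < comfort_radius:
                social += (comfort_radius - d) / comfort_radius
    return time_cost + w_social * social
```

With a weight of zero the planner degenerates to pure shortest-path behaviour; a higher weight makes it trade extra travel time for keeping distance, which is the trade-off the definition describes.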

A reformulation of our problem statement thus results in the following research question: 'How should robots socially navigate through crowded pedestrian spaces while guiding visually impaired users?'

To work on this problem, the remaining functions in the list above are assumed to be working.

Scoping the problem

At this time the first meeting with assistant professor César López was held. Mr. López is part of the Control Systems Technology group of the TU/e and focuses on designing navigation and control algorithms for robots operating in a semi-open world. His most important recommendation was that navigation should be split up even further, and that a more precisely defined crowd should be used to define the guide's behaviour. He laid out that different crowds have different qualities. Crowds can roughly be split into chaotic crowds, where there is no exact order and behaviour is less predictable (e.g., an airport where everyone needs to go in different directions), and structured crowds, where behaviour is predictable, such as crowds walking in a hallway. The simplest structured crowd is one in which all people walk in a single direction. This kind of behaviour is also described in a paper by Helbing et al.[18], which among other things covers crowd dynamics. The same paper describes how a crowd with only two opposing walking directions self-organizes into two side-by-side opposing 'streams' of people.
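Helbing's crowd dynamics are built on the social force model, in which each pedestrian is driven toward a desired velocity while being repelled by nearby pedestrians; lane formation emerges from these forces. A minimal sketch follows, where the function name and the parameter values are our own illustrative assumptions rather than the paper's calibrated ones:

```python
import numpy as np

def social_force_step(pos, vel, goals, dt=0.05, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit integration step of a simplified social force model.
    pos, vel: (N, 2) float arrays; goals: (N, 2) unit desired directions.
    v0: desired speed, tau: relaxation time, A/B: repulsion strength/range."""
    n = len(pos)
    # Driving force: relax toward the desired velocity v0 * goal direction.
    force = (v0 * goals - vel) / tau
    # Pairwise exponential repulsion between pedestrians.
    for i in range(n):
        diff = pos[i] - pos                      # vectors pointing toward agent i
        dist = np.linalg.norm(diff, axis=1)
        dist[i] = np.inf                         # ignore self-interaction
        force[i] += np.sum(A * np.exp(-dist / B)[:, None] * diff / dist[:, None],
                           axis=0)
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel
```

Stepping two opposing groups through this update is the standard way such simulations reproduce the self-organized lanes described above.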

López then expanded on this finding: in such a crowd, the robot can be in roughly three distinct scenarios. It can walk along with a unidirectional crowd, walk in the opposite direction of a unidirectional crowd, or walk perpendicular to the unidirectional crowd. All of these have an application when navigating the university. López recommended focusing our research on only one of these scenarios, since they all need different behavioural models unless a general navigation method is found.

To summarize: for the guide to efficiently navigate tight spaces, like hallways or, to a lesser extent, doorways, it must be able to navigate dense crowds that behave in a unidirectional manner. In navigating such a crowd, different approaches can be taken depending on the walking directions of the crowd and the guide.
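The three relative-direction cases can be made concrete with a small helper that compares the robot's heading against the crowd's flow direction. The function name and the 45° tolerance are our own illustrative choices, not part of the cited recommendations:

```python
import math

def crowd_scenario(robot_heading, crowd_flow, cross_tol_deg=45.0):
    """Classify the robot's situation relative to a unidirectional crowd.
    robot_heading, crowd_flow: 2D direction vectors (need not be unit length).
    Returns 'along', 'against', or 'perpendicular'."""
    dot = robot_heading[0] * crowd_flow[0] + robot_heading[1] * crowd_flow[1]
    norm = math.hypot(*robot_heading) * math.hypot(*crowd_flow)
    # Angle between the two directions, clamped for numerical safety.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle <= cross_tol_deg:
        return "along"
    if angle >= 180.0 - cross_tol_deg:
        return "against"
    return "perpendicular"
```

Such a classifier would let a guide select the behavioural model matching the scenario it finds itself in, which is exactly the split López proposed.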

On López's recommendation, it was decided to narrow the behavioural research down to walking along with a unidirectional crowd, since this is the most common case.

To conclude this section, the final research question is defined as: 'How should robots socially navigate through unidirectional pedestrian crowds while guiding visually impaired users?'

USE analysis of the crowd navigation technology

This section will discuss the relevance and the impact of a safe crowd-navigation guidance robot, on users, society at large, and enterprises.

Users

The robot has a number of possible users, but for this design two types are distinguished:

  • The visually impaired handler of the robot
  • The other persons participating in the crowd

In the Netherlands around 2.7% of the population has severe vision loss, including blindness[19]. That is over 400 thousand people who, given only a room number, cannot find their route in a new environment. There are aids such as the guide dog or the cane, but those prevent collisions with the environment rather than guiding the user to an unknown location in new surroundings. A device is therefore needed that guides visually impaired people to a location on campus they have never visited, such as meeting room MetaForum 5.199. To guide them to this meeting room, navigation through crowds is needed.

As mentioned, modern robots have a freezing problem when walking through crowds, which is not optimal in the sometimes-dense crowds on the TU/e campus. That is why nudging, and sometimes bumping, is needed. The challenge here is to guide the handler as smoothly as possible while occasionally nudging and bumping into third persons.

As the plan was to design a physical robot inspired by the CaBot, much inspiration was taken from its user research for visually impaired people. On top of that, research has been done into guide dogs and their ways of guiding.

For third persons near the robot and its handler, research has focused mainly on the touching and nudging aspect of the robot: what reactions a touching robot may elicit, the safety of this concept, and the ethics of robotic touch.


Secondary users include institutions that provide the robot so that visually impaired people can navigate their buildings: universities, government buildings, shopping malls, offices and museums.

Society

As mentioned above, 2.7% of the population suffers from severe vision loss; however, there are many more beneficiaries of a robot that can safely and quickly navigate through a crowd. Any robot with a mobile function in society will at some point encounter a crowd, whether dense or sparse, or simply people blocking an entry or hallway. Consider robots that work in social services such as restaurants, delivery robots, or guide robots for people other than the visually impaired, for example at museums or shopping malls.

For these robots, it is important that they can safely traverse crowds as quickly as possible. The solution investigated and presented here is a good step in that direction. Of course, each of these robots would need a different design to properly execute its function, but the strength lies in the social navigation algorithm, which moves the robot through a crowd differently than current robots do.

Specifically for the visually impaired, such navigation improves their accessibility and inclusion in society. Implementing a robot like this will allow them to be a more integral part of society without having to rely on other people.

Enterprise

For enterprises that might employ these robots there are two advantages. First, the robot enables visually impaired customers to have better access to any services the company provides. Second, the company gains a competitive advantage over competitors that do not provide such a robot or service. For example, a shopping mall would improve its accessibility, which would translate into more customers, whereas government buildings would improve general satisfaction.

Specifically for universities such as the TU/e, next to attracting more students, it improves their public image by showing the effort to make higher education possible and easier for all people. An advantage over other solutions, such as a human guide, is that no new employees need to be trained. No major infrastructure changes, such as extra cameras or sensors throughout the building, are needed, unlike for other types of robots or navigators. And lastly, there is no dependence on a possibly failing connection with, for example, a smartphone.

Project Requirements, Preferences, and Constraints

Creating RPC criteria

Setting requirements

The most important thing in building a robot that operates in public spaces is that it completes its tasks safely, harming neither bystanders nor the user. Most hazards in robot-human interactions in pedestrian spaces derive from physical contact[20]. This problem is even more present in crowded spaces, where physical contact is impractical or impossible to avoid. Therefore, the robot has to be made physically safe: typical touches, swipes and collisions must be non-hazardous. The term 'physically safe' will be abbreviated to 'touch safe' to make its meaning more apparent.

If the robot somehow exhibits unsafe behaviour the user should be able to easily stop the robot with an emergency stop. Because the robot is able to make physical contact and apply substantial force, it becomes even more paramount that rogue behaviour is easily stopped if it occurs.

When interacting with the user, the robot should make them feel safe and thus earn their trust. If the user does not feel safe, they cannot trust the robot and might become unnecessarily anxious or stressed; as a result, they may avoid its services. Besides this, the user might display unpredictable or counter-productive behaviour, e.g., walking excessively slowly or not following the robot. To this end, the robot should be able to communicate its intent to the user, so that they won't have to be on edge all the time.

For the robot to be viable in practice there are some restrictions, such as keeping it relatively cheap: the budget is not unlimited, and competing solutions like human guides exist at a set price, so too high a price would make robot guides obsolete. Our use case also restricts infrastructural modifications to the campus buildings of the TU/e, as a previous solution was rejected for this reason; installing waypoints all over the buildings was too large an investment.

Setting preferences

The robot should not slow down its user when avoidable, so an average speed of 1 m/s (the average walking speed of visually impaired users[21]) is a good goal.

For the robot to reach its goal efficiently, it should avoid stopping for people. Further reasons to avoid stopping are to let the user walk at a constant speed, which requires less mental strain, and to avoid the hazards of stopping in pedestrian spaces, such as surprising, and being hit by, the person walking behind the user[20].

Setting constraints

For the robot to operate in our specified use case, it should be able to navigate the campus. This involves navigating narrow walk bridges as well as wide-open spaces with different walking routes. Interaction with elevators or stairs will not be a focus of this research.

RPC-list

Requirements

  • Safety
    • Touch proof
    • Does not harm bystanders or the user
    • Installed emergency stop
  • User feedback/interaction
    • Should give feedback about intentions to user
    • Robot must be able to receive feedback and information from user
    • Handler should feel safe based on interaction with robot
  • Implementable
    • Relatively cheap
    • No infrastructural changes in buildings

Preferences

  • 1 m/s (3.6 km/h) walking speed should be reached[21]
  • Does not stop for people unnecessarily

Constraints

  • Environment (TU/e campus)
    • Narrow walk bridges/hallways
    • Big open spaces

The solutions

In this section the worked-out solution to the problem statement is given. The solution consists of a physical and a behavioural description of the robot. These two factors influence each other: the design affects how the robot should behave while socially navigating through a crowd, while the way it navigates through a crowd determines the specific requirements of the design. Together, they answer the research question of how a robot with this specific design should socially navigate through a unidirectional crowd while guiding visually impaired users.

This chapter consists of a detailed explanation of the physical design of the robot, which is designed to adhere as closely as possible to the RPC list. After the design is defined, the corresponding behaviour is defined using scenarios. These scenarios explain the behaviour we would want to see and expect. In a broader sense, this should demonstrate how the method of navigation can be utilised to effectively and safely navigate through dense crowds.

Design

In this chapter the design of the robot model is documented. The main focuses of the design are safety and the communication of nudging, both to the visually impaired handler and to third parties.

The main inspiration for the design of the robot is the CaBot[11]. This is essentially a suitcase design: a rectangular box on four motorized wheels, with all the hardware inside the box. Interestingly, it also has a sensor on its handle for vision from a higher perspective. This design is rather simple, and the flat terrain on the TU/e campus poses no problem for the wheels. The CaBot excels at guiding people to a new location but does not work through crowds. With safety in mind, the body design has been altered for nudging and bumping into people, and the handle design has been revamped for better communication with the user.


Handle design

Front view of the arm design of the guide robot to which the guided can grab on. The speed switch can be seen on the left. The settings are denoted using written numbers instead of braille because of limitations of the CAD software.
Back view of the arm design of the guide robot to which the guided can grab on. Interface utilities have not been added yet.
Back view of the arm design of the guide robot to which the guided can grab on. The upper arm, connecting to the hand-hold, has a suspension mechanism and a hinge.

As the robot's behaviour is focused on traversing crowds of people, an important question arises: how does it communicate its direction to the user? Any audible instruction will quickly be drowned out by the sounds of the surroundings, which can result in the user missing the entire message or becoming confused. Although a headset might allow for clearer communication, it is still not ideal. Therefore, the easiest way to provide feedback to the user is through the handle. There are several functions the robot needs to communicate to the user, or that the user needs to control:

  • Speed
    • Setting a faster or slower speed
    • Communicating slowing down or accelerating
    • Emergency stop
  • Direction
    • Turning left
    • Turning right

All of these functions can be placed inside the handle, while designing for minimal strain on the user's active control. The average breadth of an adult male hand is 8.9 cm[22], which means the handle needs to be large enough for people to hold on to while also incorporating the different sensors and actuators. For white canes, the WHO[23] has presented a draft product specification in which the handle has a diameter of 2.5 cm; this diameter will be used for the handle of the robot as well. Since the robot functions similarly to a guide dog, the handle will have a design similar to the harnesses used for guide dogs, meaning a perpendicular, though not curved, handle that stays in place if released.[24] To comfortably accommodate the controls and sensors described below, the total length of the handle will be 20 cm.
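As a quick sanity check of the sizing figures above, the 20 cm handle must leave room for an 8.9 cm hand plus the controls described below. The component widths other than the hand breadth are our own illustrative guesses, not part of the actual specification:

```python
# Hypothetical sizing check for the handle layout. Only HAND_BREADTH_CM comes
# from the cited source [22]; the control widths are assumptions.
HANDLE_LENGTH_CM = 20.0
HAND_BREADTH_CM = 8.9       # average adult male hand breadth [22]
SWITCH_WIDTH_CM = 3.0       # assumed width of the 5-position speed switch
ACTUATOR_WIDTH_CM = 2.0     # assumed width of the piezo haptic actuator

def handle_clearance_cm():
    """Free space left on the handle after the hand and controls."""
    return HANDLE_LENGTH_CM - (HAND_BREADTH_CM + SWITCH_WIDTH_CM
                               + ACTUATOR_WIDTH_CM)
```

Under these assumptions roughly 6 cm of margin remains, which suggests the 20 cm figure is plausible but not generous.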

The handle, which is rigidly connected to the robot, will provide directional cues automatically, without additional sensors or actuators. This simplifies the robot and makes it act more like a guide dog. For speed, three systems are implemented: the emergency stop, feedback about the acceleration and deceleration of the robot, and speed control by the user. The emergency stop can be a simple sensor in the handle that detects whether the handle is currently being held; if not, the robot automatically stops moving and stays in place. The speed can be regulated via a switch-like control, visible in the CAD render on the right. When walking with a guide dog, visually impaired people select a walking speed of about 1 m/s[21], so with five settings (0 m/s, 0.5 m/s, 0.75 m/s, 1.0 m/s and 1.25 m/s) the user can set their own speed preference. To give feedback about the current setting, the numbers are detailed in braille. Furthermore, changing settings meets some resistance and a tactile 'click' instead of being a smooth transition. The user can at any time use their thumb, or any other finger, to quickly check the position of the switch and determine the speed setting. The 'click' provides extra security that the speed will not be adjusted accidentally without the user being aware of it. To this end, a new setting only affects the actual walking speed after a short delay, giving the user time to revert any changes.
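The detent-plus-delay logic described above can be sketched as follows. This is our own illustration, not the implemented controller; the class name and the 2-second confirmation delay are assumptions:

```python
# Sketch of the speed-switch logic: five detent positions map to fixed speeds,
# and a change only becomes active after a confirmation delay, so an
# accidental bump can be reverted before the robot reacts.
DETENT_SPEEDS = [0.0, 0.5, 0.75, 1.0, 1.25]  # m/s, as listed above
CONFIRM_DELAY = 2.0  # s before a new setting takes effect (assumed value)

class SpeedSwitch:
    def __init__(self):
        self.active_detent = 3          # default: 1.0 m/s guide-dog pace
        self.pending_detent = None
        self.pending_since = None

    def set_detent(self, detent, now):
        """User moved the switch; the change is pending, not yet active."""
        if detent != self.active_detent:
            self.pending_detent = detent
            self.pending_since = now

    def active_speed(self, now):
        """Commit a pending change once the confirmation delay has passed."""
        if (self.pending_detent is not None
                and now - self.pending_since >= CONFIRM_DELAY):
            self.active_detent = self.pending_detent
            self.pending_detent = None
        return DETENT_SPEEDS[self.active_detent]
```

For example, moving the switch to the fastest detent at t = 0 s leaves the speed at 1.0 m/s when queried at t = 1 s, and only returns 1.25 m/s once the delay has elapsed.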

Lastly, the robot may, for whatever reason, have to slow down while walking through the crowd: for obstacles, for other people, or to properly follow the flow of the crowd. Since this falls outside the speed setting, the user must be made aware of the robot's actions. A simple piezo haptic actuator can do the trick. Placed in the middle of the handle, it will be easily noticed. A code for slowing down, for example a pulsating rhythm, and a code for speeding up, a continuous vibration, will convey the actions of the robot. Of course, this is in addition to the physical feedback the user gets from the pull of the handle via the arm. But because trust is so important in human-robot interaction, this additional feedback from the robot increases the confidence of the user when using it.


3 sketches of different designs of an arm for a guidance robot

Arm design

Multiple designs were considered. The arm connects the handle to the body; it is important that the handle height can be adjusted. One thing added in the name of safety is suspension, so that the movements of the robot do not jerk the arm of the guided if it suddenly changes speed, for example when bumping or nudging. Most design iterations concerned how to integrate this suspension.

The first design was a straight pole from the robot body to the arm of the guided (the top sketch in the figure to the right). A foreseeable problem is that if the robot were to stop suddenly, it would push the arm slightly up instead of compressing the suspension. To solve this, a joint was introduced in the middle of the arm (the middle sketch). An alternative solution was to have the suspension act only horizontally and internalize it (the bottom sketch). This would allow the pole to have the same shape as the first sketch without compromising the suspension behaviour. Another plus is that the pole would be marginally lighter due to the suspension being moved inwards.

We chose the second design, as it has the intended suspension behaviour while remaining as simple as possible. This allows the mechanism to be constructed mostly from off-the-shelf parts, reducing the cost.

Body Design

For the body, three main designs were considered: a square, a cylinder, and a cylinder whose diameter changes over its height. The square was immediately ruled out because its sharp corners make it decidedly not touch safe. The more cylindrical shapes can more easily slide through a crowd and are less likely to hit people hard at the front (they allow for a sliding motion instead of a head-on collision). This left the choice between a normal cylinder, a cylinder wide at the bottom, and a cylinder wide at the top.

A bottom-heavy design would help with balance: if the robot bumps into something, it hits at its lowest point, meaning more stability. However, it may surprise people when it hits, as they might not notice the wide bottom. This is where the wide top performs better, as it touches people around the waist/lower back, where a collision is more easily noticed. Furthermore, this is a more effective place to nudge people to get them out of the way (a lower hit might make people lift their leg instead of stepping aside). A drawback is that the robot is touched higher up and tips over more easily. That is why the design combines the best of both worlds: the body has its largest diameter low down, with a big bumper so it does not tip over, and has 'whiskers' of a soft, compressible foam material at the top front to softly touch, or nudge, people in the way. Research has shown that touch by a robot elicits the same response in humans as touch by humans[25]. The rest of the body is made of plastic so that it is not too hard.

This CAD design shows the oval body shape of the design. It has its largest diameter at 30 cm height, and whiskers at 120 cm from the ground.

The pole on top of the body has two functions.

  • Visibility
  • Sensors

The pole is 100 cm long, making the whole guide robot 220 cm tall. This helps the sensors, which get a better overview of the crowd from a higher point of view. The height also helps with noticeability in dense crowds: at eye level the robot remains visible even when the lower body is (partially) obscured.

Behavioural description

The behavioural description will concern behaviour in a crowd with a singular, uniform walking direction. As mentioned before, the expected behaviour will be described using scenarios. These will first describe the standard scenario, after which two special cases are discussed. Furthermore, it will be briefly discussed how this behaviour might also benefit other crowd types or behaviour. The purpose of the behaviour is to make the robot guide someone efficiently to reach a goal while abiding by social rules/norms.

It is important to note that joining and leaving these crowds requires different behaviour (like sparse-crowd navigation). These are thus considered to fall outside the scope of the research question.

First, the standard navigation method will be discussed and how it functions in most scenarios.

The standard scenario

López suggested that to navigate, the guide should check where it can walk, not where it cannot. He also suggested following a lead of some kind could make navigation in unidirectional crowds easier. These traits have been used to define the standard scenario.

In this scenario the robot uses its LIDAR to follow a moving point cloud (i.e., the lead) in front of it. This point cloud could be one person or a whole group. Regardless, the point cloud always marks the end of the guide's free walking space (space where nothing stands in its way). It can thus be said that between this lead and the guide there will, in most cases, be free walking space. As the lead walks in front of the guide, it continuously creates a space in the crowd behind it, and in front of the robot, where the guide can move.
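The lead-following idea can be illustrated with a minimal sketch: take the LIDAR returns in a cone ahead of the guide, treat the nearest cluster as the lead's point cloud, and steer towards its centroid at a fixed following gap. This is our own illustration under assumed parameters (cone angle, gap, cluster radius), not the implemented algorithm:

```python
import math

# Minimal lead-following sketch. Points are (x, y) in the robot frame,
# with +x pointing forward. All thresholds are illustrative assumptions.
FOLLOW_GAP = 1.5                       # m, assumed desired gap to the lead
CONE_HALF_ANGLE = math.radians(30)     # forward cone in which leads are sought
CLUSTER_RADIUS = 0.5                   # m, crude clustering radius

def lead_target(points):
    """Return (heading_error_rad, gap_error_m) towards the nearest point
    cluster in the forward cone, or None if nothing is ahead."""
    ahead = [p for p in points
             if p[0] > 0 and abs(math.atan2(p[1], p[0])) < CONE_HALF_ANGLE]
    if not ahead:
        return None
    nearest = min(ahead, key=lambda p: math.hypot(*p))
    # crude clustering: everything close to the nearest return is "the lead"
    cluster = [p for p in ahead
               if math.hypot(p[0] - nearest[0], p[1] - nearest[1]) < CLUSTER_RADIUS]
    cx = sum(p[0] for p in cluster) / len(cluster)
    cy = sum(p[1] for p in cluster) / len(cluster)
    heading_error = math.atan2(cy, cx)          # steer towards the centroid
    gap_error = math.hypot(cx, cy) - FOLLOW_GAP  # positive: lead is pulling away
    return heading_error, gap_error
```

Note that, as stated above, the cluster is deliberately not classified as one person or a group; the controller only needs the centroid of the free-space boundary ahead.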

The robot does not distinguish between one person and a group. This makes the robot more robust, as small details in people's behaviour do not affect the guide's actions.

Scenario 1: Cut off

Starting from the standard scenario, someone or something begins to insert itself between the guide and the point cloud previously identified as the lead. This has multiple sub-scenarios, which will be discussed. We consider a crowded space with approximately 0.8 persons/m2 (which nears shoulder-to-shoulder crowds as found in [26]), where people move alongside each other. Since the third person inserts from the side, it may not be assumed that only the feelers of the robot make contact, meaning more severe consequences may follow.

Decision making criteria

The decision making of the guide should depend on the intentions of the third person, the effects of their actions on the guide(d), and the effects on themselves.

By far the most difficult thing is determining the intentions of the third person. Are they trying to insert themselves in front of the robot, or are they simply drifting in front of it? Since their mind cannot be read, it seems reasonable to base the decision purely on the latter two factors: the effects on the guide(d) and the effects on the person inserting themselves.

Guide’s options

There are 3 options the robot can take in any given scenario:

{| class="wikitable"
!Effects \ Action
!Bump
!Make way
!Move to the side
|-
!Effects on the guide(d)
|
- Little to no travel delay.

- Depending on the severity of the impact it might result in the robot having a sudden change in speed, inconveniencing the guided.
|
- The robot might have to slow down temporarily, which might inconvenience the guided.

- The robot might have to slow down permanently due to a change in the lead's walking speed, leading to a higher travel time.

- Other people might also try to slip in front, leading to multiple delays.
|
- The guided might incur a travel delay due to the perpendicular movement.

- Too much side-to-side movement might lead to sporadic guidance for the guided.

- The guide has to make accurate decisions when sliding in front of someone else, which might lead to unexpected problems or delays.
|-
!Effects on the person inserting themselves
|
- They make physical contact with the robot, resulting in a risk of injury depending on the severity.

- They might be surprised by the robot, resulting in unpredictable scenarios.

- They might not be able to return to their original spot in the crowd, resulting in unpredictable consequences.
|
- None
|
- None
|}
Scenario variables

It can be seen that the effect of any action is very context dependent, and as such a well-made decision is only possible if the guide is well informed. Assuming for now that this is the case, we can set up 4 factors that determine the way the robot should act:

1. The relative forward speed of the third person

2. Their relative perpendicular speed

3. The third person's space to act

4. The robot's space to act

From this, 4 behavioural tables can be set up:

Scenario 1: expected behaviour

The following scenarios might seem excessive, since the robot will most likely not be a rule-based reflex agent. This detailed model is nonetheless important for informing the decision-making process in the design of the robot, as well as the evaluation of the simulation. In the behavioural tables below, the rows give the forward speed of the third person relative to the guide, while the columns give the third person's speed perpendicular to the guide's walking direction.

The third person and the robot are capable of making way

{| class="wikitable"
!
!Low perpendicular speed
!Medium perpendicular speed
!High perpendicular speed
|-
!Smaller forward speed
|Robot should make way.
|Robot should make way, as people think it shows manners and awareness.
|Robot should make way, as people think it shows manners and awareness.
|-
!Same forward speed
|Robot does not make way.
|Robot should make way but prevent heavy braking; soft pushing is still an option if the gap for integration is too narrow.
|Robot should make way but prevent heavy braking; soft pushing is still an option if the gap for integration is too narrow.
|-
!Larger forward speed
|Robot does not make way.
|Robot does not make way.
|Robot does not make way, but tries to soften the impact by moving along the perpendicular direction of the third person.
|}
Depiction of a third person inserting themselves between the guide and the lead. The circles represent people and the guide. The arrows indicate the direction they are moving. Grey is a normal crowd member, red is the third person cutting of the guide, dark blue is the guide, and light blue is the guided.

Only the third person is capable of making way (see figure to the right)

In all these scenarios the robot should use an audio cue to alert the third person that the robot cannot evade them itself.

{| class="wikitable"
!
!Low perpendicular speed
!Medium perpendicular speed
!High perpendicular speed
|-
!Smaller forward speed
|The guide should not make way, risking impact to indicate it has no free space.
|The guide should not make way, risking impact to indicate it has no free space.
|The guide should not make way. If impact is impending, it should move in the same perpendicular direction as the third person to soften the impact.
|-
!Same forward speed
|The guide should not make way, risking impact to indicate it has no free space.
|The guide should not make way, risking impact to indicate it has no free space.
|The guide should not make way. If impact is impending, it should move in the same perpendicular direction as the third person to soften the impact.
|-
!Larger forward speed
|The guide should not make way, risking impact to indicate it has no free space.
|If impact is impending, it should move the whiskers in the direction of the third person if possible. If this is not possible, the guide should slow down to soften the impact.
|If impact is impending, it should move the whiskers in the direction of the third person if possible. If this is not possible, the guide should slow down and move in the same perpendicular direction as the third person to soften the impact.
|}

Only the robot is capable of making way

{| class="wikitable"
!
!Low perpendicular speed
!Medium perpendicular speed
!High perpendicular speed
|-
!Smaller forward speed
|Robot should make way.
|Robot should make way.
|Robot should make way, trying to prevent heavy braking.
|-
!Same forward speed
|Robot should make way.
|Robot should make way.
|Robot should make way, trying to prevent heavy braking.
|-
!Larger forward speed
|Robot should make way.
|Robot tries to make way, preventing heavy braking.
|Robot tries to make way, preventing heavy braking.
|}

Neither are capable of making way

In all these scenarios the robot should use an audio cue to alert the third person that the robot cannot evade them itself.

{| class="wikitable"
!
!Low perpendicular speed
!Medium perpendicular speed
!High perpendicular speed
|-
!Smaller forward speed
|Robot should try to make as much way as possible before making continuous contact with the person, maintained until the third person finds a way to decouple.
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain continuous contact with the person until the third person finds a way to decouple.
|Robot should try to position itself so that the harm from the impact is minimized, moving slightly to the side to soften it. Furthermore, it should maintain continuous contact with the person until the third person finds a way to decouple.
|-
!Same forward speed
|Robot should try to make as much way as possible. If there is not much room, the robot should not bother to bump.
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain contact with the person until the third person finds a way to decouple.
|Robot should try to position itself so that the harm from the impact is minimized, moving slightly to the side to soften it. Furthermore, it should maintain continuous contact with the person until the third person finds a way to decouple.
|-
!Larger forward speed
|Robot should try to make as much way as possible before making continuous contact with the person, maintained until they naturally separate or the third person finds a way to decouple.
|Robot should try to minimize the risk of harm by moving slightly to the side to soften the impact. Afterwards it should maintain contact with the person until they naturally separate or the third person finds a way to decouple.
|Robot should try to position itself so that the harm from the impact is minimized, moving slightly to the side to soften it. Furthermore, it should maintain continuous contact with the person until they naturally separate or the third person finds a way to decouple.
|}
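The four behavioural tables can be summarised as a lookup over the discretised scenario variables. The function below is our own shorthand encoding for illustration; as noted above, the real robot would not be a pure rule-based reflex agent, and the action strings are assumptions:

```python
# Illustrative encoding of the cut-off behavioural tables as a lookup.
# Inputs are the discretised scenario variables defined above.
def cutoff_action(robot_can_move, person_can_move, fwd, perp):
    """fwd: 'smaller' | 'same' | 'larger' (third person's relative forward speed)
    perp: 'low' | 'medium' | 'high' (their perpendicular speed)."""
    if robot_can_move and person_can_move:
        if fwd == "smaller":
            return "make way"
        if fwd == "same":
            # make way unless the third person drifts in only slowly
            return "make way" if perp != "low" else "hold course"
        return "hold course"            # larger forward speed: do not yield
    if robot_can_move:                   # only the robot can make way
        return "make way"
    if person_can_move:                  # only the third person can make way
        return "hold course + audio cue"
    return "soften impact + maintain contact"   # neither can make way
```

This captures the broad pattern of the tables; the per-cell refinements (whisker movement, softening direction, braking limits) would hang off each returned action.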

Scenario 2: Stalled lead

Starting from the standard scenario, the lead has stopped moving. The guide may avoid the lead or nudge them. The guide moves unidirectionally with the lead, and it is therefore assumed that all impact will occur at the front of the guide.

Guide options

The decision making of the robot should depend on the effects on the guide(d) and on the leads. The robot may in all situations attempt the following options:

{| class="wikitable"
!Effects \ Action
!Try alternative route
!Robot nudges using feelers
!Stop
|-
!Effects on the guide(d)
|
- The robot has to make side-to-side movements, resulting in more sporadic pathing. This might inconvenience the guided.

- Moving aside in a crowded space may result in the guide, or worse the guided, being pushed by other people.

- This behaviour requires more complex observational methods.
|
- Does not always resolve the problem, which leads to more delay.
|
- The guide stops: significant time delay.

- People behind the guided may walk into or push them.
|-
!Effects on the stalled lead
|
- None
|
- The person may have to step aside or start moving.

- The person might be uncomfortable with being nudged or pushed.
|
- None
|}
Scenario variables

The main variables are the following:

1. Space to act for guide

2. Space to act for lead

Scenario 2: expected behaviour

Due to the low-risk nature of nudging with the feelers, this will in all cases be the first action.

If the attempt fails, however, it must be decided whether the guide should try to navigate around the now-blocked path or stop. If it can be seen that the lead ahead is stopping of its own volition (there is free space in front of the lead), the robot should in most cases try to navigate around the lead. If the lead is expected to start moving within a reasonable timeframe, depending on how long rerouting would take, the guide should stop. Something not yet taken into account is the actual freedom of the guide: dense surroundings or a fast-moving crowd could prevent the guide from safely stepping aside. In these cases, the specifics and safety of the cross-flow behaviour are important.

Assuming the cross-flow behaviour is only safe in the limited case of a sparse, normally moving crowd, the following behavioural table can be made:

{| class="wikitable"
!
!Normal moving crowd
!Fast moving crowd
|-
!Sparse crowd
|Try alternative route
|Stop
|-
!Dense crowd
|Stop
|Stop
|}

If the guide has stopped for a while and sees an opportunity for the lead to move, it should play a message asking for the lead to move. This also notifies the guided of the situation.
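The stalled-lead fallback table can be written out as a tiny decision function. The density and speed thresholds below are illustrative assumptions (the document only distinguishes sparse/dense and normal/fast), and, as stated above, nudging with the feelers is always attempted first:

```python
# Fallback decision if the nudge fails, following the table above.
# Thresholds are placeholder assumptions, not measured values.
SPARSE_DENSITY_LIMIT = 0.3   # persons/m^2 below which the crowd counts as sparse
NORMAL_SPEED_LIMIT = 1.5     # m/s below which the crowd counts as normally moving

def stalled_lead_fallback(density_per_m2, crowd_speed_mps):
    """Return the fallback action once nudging has failed."""
    sparse = density_per_m2 < SPARSE_DENSITY_LIMIT
    normal_speed = crowd_speed_mps < NORMAL_SPEED_LIMIT
    if sparse and normal_speed:
        return "try alternative route"
    return "stop"   # any dense or fast-moving crowd: stopping is safest
```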


Generalisation

The following scenarios pertain to situations where the guide does not navigate along with a unidirectional crowd flow. Although this is outside the scope of this research, it is useful to look at what a robot with this design can add to other scenarios using touch. First, we take a short look at the possibilities of physical touch in the other scenarios sketched by López.

Opposing a unidirectional crowd is slightly harder for a robot: while moving through an opposing flow, the robot depends on people moving out of the way; otherwise, no space might open up where the robot can go. Here a robot that is programmed never to touch people might stall if the crowd density is high enough. This is where light bumping might be useful: if people do not move fully out of the way, they receive a light touch.

Crossing a unidirectional crowd is the hardest scenario. It might be difficult for positions to open up where the robot can go, due to people coming from the side and the social implications that come with that. Does the robot give space, or does it walk on? Research has found that people perceive robots as more social and better behaved if they let people pass first, but this risks the robot stalling. That is why, in dense crowds, it might be preferable for the robot to start nudging to make way for the guided person.

Integrating into a crowd is an important behaviour of the robot. Inside the TU/e, maximum crowd densities are assumed to occur only rarely over the span of a year. In less dense crowds the guide should be able to integrate into the flow without hitting other people. However, if crowds are very dense, the guide should be able to act more assertively, thanks to the safety measures preventing harmful human-robot collisions.

Simulation

Goal:

For the behaviour description to be relevant, we show that the proposed behaviour is safe to employ in a representative environment. To measure this safety, we first of all measure collisions, making the reasonable assumption that these are the primary source of harm our robot can inflict. The simulation gathers data about the frequency of collisions and statistics on the forces applied to the person and robot during a collision. Secondly, we consider the adherence of the robot's behaviour to the ISO guidelines for safety[27][28], focusing on the minimum safety gap and the maximum relative velocity guidelines.

Overview of applicable ISO Safety standards

According to ISO 10218-2:2011[28], for an (industrial) robot to operate safely:

  • Protective barriers should be included (Which is discussed in the body design)
  • Warning labels should indicate potential hazards. As the robot does not operate any manipulators or tools, and is designed not to be able to crush someone, or run someone over, the only danger here is tripping, which is also minimized in the design.
  • Light curtains, pressure mats and other safety devices: The robot includes whiskers at the front, that also help with avoiding a direct body collision.
  • Others, which do not apply to this robot, as it is not present in an industrial setting.

In addition, ISO 15066:2016[27] indicates requirements for robots in proximity to human operators:

  • A risk assessment should be made to identify hazards to surrounding personnel.
  • Monitoring systems to keep track of speed and separation, which are included in the form of the LIDAR sensor.
  • Force and power limits of the robot: The robot is not incredibly high powered, the amount of force it is capable of applying in normal operation depends on the physical implementation of this proposed concept, but is unlikely to be problematic, as there is no need for high-powered actuators for the drive train to function.
  • Emergency stop (already covered in the other ISO standard)
  • A safety distance gap should be kept to people around the robot, this is however ignored, as we are developing a solution that aims to be safe, without needing to keep clear of humans.
  • Force and Pressure limits are imposed on the robot, to prevent serious harm, both during normal operation and collision. These include:
    • A limit on the contact force during collision of 150N, this will be the focus of the simulation.
    • A pressure limit during collision of 1.5 kN/m^2, we make the reasonable assumption this pressure limit cannot be reached without violating the previous condition, as our robot is designed to be as smooth as possible, making it incredibly difficult for the robot to apply a lot of force in a very small and local area.
    • Force limiting or compliance techniques are to be implemented to reduce the force applied during collision. This comes in the form of whiskers for the compliance aspects, allowing them to deform to reduce the impact, and the behaviour is designed to limit the force by slowing, satisfying the limitation requirement.

As a performance measure for the simulation, we consider the maximum force applied in a collision during the duration of the simulation. This is the element of the ISO standards that is not negated by the behaviour design, or design of the body itself, and thus the part that remains to show the robot is safe to operate.
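This performance measure is straightforward to compute from the simulation output. The sketch below is our own illustration of that check, assuming a hypothetical log of `(timestamp, force)` collision records; only the 150 N limit comes from the ISO standard cited above:

```python
# Performance-measure check: peak contact force over a simulation run,
# compared against the 150 N collision-force limit from ISO/TS 15066.
# The collision-log record format is an assumption for illustration.
ISO_FORCE_LIMIT_N = 150.0

def max_collision_force(collision_log):
    """collision_log: iterable of (timestamp_s, force_N) tuples."""
    return max((force for _, force in collision_log), default=0.0)

def passes_iso_force_limit(collision_log):
    """True when no collision in the run exceeded the ISO limit."""
    return max_collision_force(collision_log) <= ISO_FORCE_LIMIT_N
```

A run with no collisions trivially passes (peak force 0 N), which matches the intent of the measure: only the worst recorded contact matters.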

This data can also inform the design of the robot and its behaviour, as it can test various form factors, and navigation algorithms to optimize. In the end the simulation results act as assistance in design iteration, and ultimately inform us about the viability of the robot in crowds.

Why a simulation:

Testing which techniques have an impact requires a setting with enough people to form a crowd, controlled precisely enough to eliminate outside or 'luck' factors. The performance needs to be a function of measurable starting conditions and the behaviour of the robot. With a real robot, we would need an iterative approach, altering the appearance and workings of the robot after each test to cover different scenarios. This would require rebuilding the robot each time, which we simply do not have time for. Additionally, obtaining a large enough crowd (think of more than 100 students) would be tricky on such short notice. Using a real-world crowd (by going to the buildings in between lectures) would present the most accurate situation, but is neither controllable nor reproducible. There is also the ethical dilemma of testing a potentially hazardous robot in a real crowd, and logistically, organizing a controlled experiment with a crowd of students is not an option.

Simulation: situation analysis

The real-world task would have the robot guide a blind person through the Atlas building to a goal. This situation can broadly be characterised as:

  • Performance Measure: The maximum force applied during collision with a person, which cannot exceed 150 Newton.
  • Environment: Dynamic, partially unknown interior room, designed for human navigation.
  • Actuators: wheels.
  • Sensors: LIDAR & camera, abstracted to general-purpose vision and environment-mapping sensors; assumed to have limited range and accuracy, but capable of deducing depth, position, and dynamic or static obstacles.

The environment is assumed to be:

  • Partially Observable
  • Stochastic
  • Competitive and Collaborative (humans aid each other in navigation, but are also their own obstacles)
  • Multi-agent
  • Dynamic
  • Sequential
  • Unknown

Considered Simulation Design variants.

Simulating the robot may take various shapes, each with its own advantages. When considering the type of simulation to make, we considered the following aspects:

Environment Model:

  • Mathematical: Building a model of the environment, purely based on mathematical expressions of the real world.
  • Geometrical: Building a 3d version of the environment, using a 3d virtual representation of the environment.
  • 2D: The environment does not consider depth
  • 3D: The environment does consider depth

Robot Agent:

  • Global awareness: The robot model has access to all information across the entire environment.
  • Sensory awareness: Observing the Simulated environment with virtual (imperfect) sensors. The robot only has access to the observed information.
  • Mechanics simulation: The detail at which the robot's body is modelled. Factors include whether the precise shape is considered, the accuracy of actuators and other systems, and delay between command and response.

Crowd Behaviour Model:

  • Boid: Boids are a common method of simulating herd behaviour in animals (particularly fish)
  • Social Forces: The desire to approach a goal and avoid and follow the crowd is captured in vectors, which determine the velocity of each agent in the crowd.

Simulation: Crowd implementation

To test the robot's capabilities in crowds through a simulation, the simulation must include a realistic model of how crowds behave. In the 1970s, Henderson already related a macro view of crowds to fluid dynamics with great success[29]. For the local interactions the robot would experience in real life, however, this macro view is not realistic enough. Therefore, we have to use a more micro-level description of crowds. We came across the social force model created by D. Helbing and P. Molnár[30]. This model is well regarded, and even though it has its drawbacks (for example, a full stop of pedestrians does not work well in the model), we have decided to use the original formulation.

The social force model is a physical description of pedestrian behaviour: it models pedestrians as point masses with physical forces acting upon them. Each pedestrian experiences a few different forces, which are shortly explained here. First, there is a driving force, which models the internal desire of a pedestrian to go somewhere; it is represented as a direction and the pedestrian's desired walking speed. The desired walking speed is the one the paper suggests, namely a normally distributed random variable with a mean of 1.34 m/s and a standard deviation of 0.26 m/s. The direction is calculated using Unity's NavMesh, which generates paths through the environment given a start and an end point. Second, every pedestrian experiences a repulsive force generated by other pedestrians. This force captures the fact that humans want to keep sufficient distance from each other and instinctively take the step size of others into account: an ellipse is created that is as large as the step the other pedestrian is taking, and based on this ellipse a force is computed that grows exponentially the closer one gets to the other pedestrian. This is called the territorial effect; the force points away from the other pedestrian and is computed for every pedestrian in the vicinity. Third, there is a repulsive force from walls and obstacles. This one is far simpler: it is an exponential force that grows the closer one gets to an obstacle and points away from it. Finally, there is an attractive force, which can model multiple things, such as friends one wants to walk closer to or interesting objects or people in the vicinity. This force decreases over time as people lose interest; it is not applied in our model. Both the repulsive and attractive forces are weighted depending on whether the object applying the force is inside the pedestrian's field of vision.
The net force applied to a pedestrian is the sum of all these forces and is applied as an acceleration, where the maximum attainable speed of a pedestrian is capped at its desired speed. For performance reasons, most of this calculation is done in parallel on the GPU, which required a trade-off: for the repulsive force generated by walls, only the closest object is taken into account, since passing all objects to the GPU creates too much overhead for the CPU loading the data. Had everything been handled by the CPU, however, the number of simulated people would have been too small to form a crowd.
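The update above can be sketched as follows. This is a simplified, hypothetical reconstruction, not the project's GPU code: the elliptical territorial effect is replaced by a radially symmetric exponential repulsion, and all names and constants are our own.

```python
import math

def driving_force(pos, goal, desired_speed, velocity, relax_time=0.5):
    # Desire to reach the goal at the desired walking speed.
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    desired_v = (desired_speed * dx / dist, desired_speed * dy / dist)
    return ((desired_v[0] - velocity[0]) / relax_time,
            (desired_v[1] - velocity[1]) / relax_time)

def repulsive_force(pos, other, strength=2.1, falloff=0.3):
    # Exponential repulsion pointing away from the other pedestrian.
    dx, dy = pos[0] - other[0], pos[1] - other[1]
    dist = math.hypot(dx, dy) or 1e-9
    mag = strength * math.exp(-dist / falloff)
    return (mag * dx / dist, mag * dy / dist)

def step(pos, velocity, goal, others, desired_speed, dt=0.1):
    # Net force = driving force + repulsions, applied as an acceleration.
    fx, fy = driving_force(pos, goal, desired_speed, velocity)
    for o in others:
        rx, ry = repulsive_force(pos, o)
        fx, fy = fx + rx, fy + ry
    vx, vy = velocity[0] + fx * dt, velocity[1] + fy * dt
    # Cap the attainable speed at the pedestrian's desired speed.
    speed = math.hypot(vx, vy)
    if speed > desired_speed:
        vx, vy = vx * desired_speed / speed, vy * desired_speed / speed
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)
```

The wall force would be one more exponential term of the same shape, applied only for the closest wall, matching the GPU trade-off described above.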

Simulation: Robot agent

The robot agent was implemented in Unity. The body of the robot was created by importing the CAD model into Blender and from there into Unity. A mesh collider was added to this model to make collisions more precise, and attaching a rigid body allowed the agent to interact with its environment and follow the laws of physics (or at least the physics of the Unity engine). The behaviour of the robot was implemented as follows:

Map of the environment

One of our base assumptions is that the robot has a map of the environment it is in, with landmarks placed, so it knows how the base environment is structured according to, for example, the floor plan. Thus, it knows where the walls are as well as the points of interest, which are the goals to which it will guide people. This was implemented in the simulation via Unity's NavMesh, which allows us to create a mesh of the environment, dividing the space into places where the robot can and cannot move. Using NavMesh's default path-finding algorithm, the robot agent calculates a path over this mesh and moves through the environment while keeping the map overlay in mind. The only issue with this approach is that the pathfinding algorithm is A*: while it calculates the shortest path to the goal, the shortest path is not always the best path overall.
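As an illustration of the pathfinding component, a minimal A* on a grid can look as follows. This is our own sketch: Unity's NavMesh operates on a navigation mesh of polygons rather than a grid, but the algorithm is the same.

```python
import heapq

def astar(grid, start, goal):
    # grid: list of strings, '#' marks blocked cells; start/goal: (row, col).
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic
    frontier = [(h(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] != '#' and (nr, nc) not in seen:
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc), goal),
                                cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no path exists
```

With an admissible heuristic, A* always returns a shortest path; the "not always the best path" issue above arises because path length ignores crowd density along the route.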

Sensing the environment

Our robot agent is supposed to use a combination of a LiDAR, a camera and a thermal camera to recognize obstacles in its path that are not in the built-in map, in other words dynamic obstacles. While we have described in this report how one could detect dynamic obstacles by building a point-cloud map of the robot's immediate surroundings and combining it with the thermal camera vision to detect humans, constraints forced a simpler approach in the simulation: we use Unity's raycast functionality, which allows us to cast beams from our agent, and with multiple such raycasts we emulate a 2D LiDAR. Using this LiDAR as the main sensor, we created two versions. The first version has better obstacle avoidance and overall smoothness of movement. When a beam hits an object tagged as an "undiscovered human" or "undiscovered obstacle", the tag is converted to discovered, which carves out a space around the object on the mesh and makes the agent move around the object whenever its path passes near it. This version has some limitations, however: due to the implementation of NavMesh and the movement AI in Unity, the robot does not follow the regular laws of physics and thus could not interact with its environment correctly. We therefore created a second version.

The second version uses NavMesh to calculate a path much like the first, but the agent moves differently: rather than depending on the navigation AI, it uses a movement function of the rigid body component to follow that path. This gives the agent physics in its interactions with the environment. Obstacle detection and avoidance are also done differently: rather than carving out the mesh, we use three sets of beams (left, right and front), and based on where an obstacle is detected, the agent reacts by slightly deviating from its path. The issue with this version is that the robot's movements were not smooth; while it could interact better with its environment, its movements when turning, for example, were not realistic. That is why we used the first version for the macro simulation.
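The three-beam deviation logic can be sketched roughly as follows. This is a hypothetical reconstruction: the names, thresholds, and the simplified geometric beam test are ours, not the Unity implementation.

```python
import math

def cast(pos, heading, obstacles, max_range=3.0):
    # Distance to the nearest obstacle roughly along 'heading'.
    # obstacles: list of (x, y, radius) circles.
    best = max_range
    for ox, oy, radius in obstacles:
        dx, dy = ox - pos[0], oy - pos[1]
        along = dx * math.cos(heading) + dy * math.sin(heading)
        across = -dx * math.sin(heading) + dy * math.cos(heading)
        if 0 < along < best and abs(across) < radius:
            best = along
    return best

def steer(pos, heading, obstacles, beam_offset=0.5):
    # Cast a left, front and right beam and deviate away from the
    # most obstructed side, mirroring the second-version behaviour.
    left = cast(pos, heading + beam_offset, obstacles)
    front = cast(pos, heading, obstacles)
    right = cast(pos, heading - beam_offset, obstacles)
    if front >= 3.0:
        return 0.0   # path clear: keep heading
    if left > right:
        return +0.3  # more room on the left: deviate left
    return -0.3      # otherwise deviate right
```

The returned value would be added to the robot's heading each update, producing the slight path deviations described above.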

Finally, it must be noted that the follow-and-bump behaviour in the implementation has some issues: the robot sometimes follows when it should not, and fails to follow in moments where such behaviour would be most efficient. The reason is that it is difficult to determine which person would be an ideal candidate to follow. Our implementation depends on the direction the agent is looking in, as well as the rotation of the humans around the robot: if both the robot and a human have the same rotation, that human is seen as a potential candidate. While this idea seems good on paper, in certain situations, for example when the robot is turning around corners or making small adjustments to its path, it will not be looking at the final goal. It is then possible that it starts following a person going in the wrong direction, provided there is no other option than to initiate the follow behaviour (i.e., detected obstacles prevent the robot from moving left, right, and forwards). This leads the robot to sometimes take inefficient paths.
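The candidate-selection heuristic described above can be sketched as follows. This is a hypothetical reconstruction; the tolerance value and the data layout are ours.

```python
import math

def follow_candidate(robot_heading, humans, tolerance=0.35):
    # humans: list of (distance, heading) pairs, headings in radians.
    # A human is a candidate when their heading roughly matches the
    # robot's current heading (the "same rotation" test above).
    candidates = []
    for dist, heading in humans:
        # Wrap the heading difference into (-pi, pi].
        diff = math.atan2(math.sin(heading - robot_heading),
                          math.cos(heading - robot_heading))
        if abs(diff) < tolerance:
            candidates.append((dist, heading))
    # Prefer the closest aligned human, or None if nobody qualifies.
    return min(candidates, default=None)
```

The sketch also makes the failure mode visible: while cornering, `robot_heading` deviates from the goal direction, so an aligned but wrongly headed person can be picked.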

Simulation: Environment

The environment is a 3D geometry-based replica of the first floor of the ATLAS building in terms of its large collision geometry. It was constructed by tracing the edges of a floor plan of Atlas, provided by the RE department, with collision objects.

After the model was constructed, it was re-scaled in the Unity engine to match the metric dimensions of the Atlas building. It should be noted that not all elements of the floor plan are accurate, as the layout of Atlas changes frequently to accommodate events.

The model contains various abstractions to accommodate the constraints of the simulation. Entryways have been blocked off to prevent the crowd from walking outside the defined perimeter, and doors are considered closed. The stairs have also been omitted, or remodelled to be impassable, as we do not consider other floors of the Atlas building in this simulation. Only the lower portion of this floor is considered, as the walking crowd will not collide with anything higher than 2 metres.

Simulation: Results

Parameters:

To obtain the results, the simulation was run with the robot starting at the north side of Atlas and moving towards a goal on the opposing south side. The crowd was set up to contain 1500 agents, the maximum number of people the ground floor of Atlas is designed for according to the Real Estate department.

Screenshot of the crowd simulation in ATLAS. The robot is about to approach a chokepoint.

Expected results:

The expected result is for the Social Force model to generate a crowd that is typical of a very busy day in Atlas. With this come:

  • The generation of dense 'streams' of agents moving in a similar path from goal to goal.
  • The existence of sparse and dense pockets of space, where some areas are more heavily congested than others.

We do not expect the social force model to generate agents that are stationary near goals (such as real students buying a drink and creating congestion around a coffee machine), as the model is focused on the movement of pedestrians.

In order to behave safely in accordance with the ISO 10218-2:2011 and ISO/TS 15066:2016 requirements[27][28], we expect the robot to:

  • Avoid collisions in the sparsely populated areas and follow its own path.
  • Follow crowd-agents to prevent collisions in adequately dense areas, where there is still enough space to avoid agents but not enough to find its own path.
  • Follow its own path when the currently followed agent deviates too much from the optimal path.
  • Bump into crowd-agents when there is insufficient space to avoid them.
  • When bumping, the force should be minimal: The robot should ensure a relative velocity low enough to not cause pain or major discomfort.

Result:

  • We observed that central spaces, such as the centre of the main hall, are indeed very calm. The crowd that formed here was very sparse, and as such the robot could use standard avoidance and pathfinding algorithms (A* in this case) to avoid the agents of the crowd and reach the goal without a single collision.

  • We also observed that there tend to be congested areas around hallway entries and more narrow spaces. Here the crowd would become very dense, with agents themselves bumping into each other or narrowly avoiding each other.
  • We observed that the robot generally crosses a stream of densely packed agents, rather than the stream moving in the same direction as the robot so that it could follow it. While crossing, the robot does attempt to avoid agents or reduce impact.
  • We observed that the robot does indeed bump into agents that are in the way, but it is hard to definitively state that the robot uses bumping only as a last resort.
  • We observed that the robot bumps through congested areas, instead of avoiding them, if its path requires it to pass through such an area.

A video of a single iteration: https://www.youtube.com/watch?v=YAjKelmA9mM

Conclusions

We observed that the crowd generated by the Social Force model was indeed indicative of a typical crowd in Atlas. This caused a problem: the crowd, although representative, does not comply with the assumptions the robot makes in order to navigate a dense crowd. As the scope of the described behaviour ends at these assumptions, the implemented behaviour of the robot, which can only work within the scope of this project, simply does not generate adequate results in terms of safety and performance. The robot behaviour described earlier assumes a laminar flow of people to navigate, and the streams that occur in the Atlas setting are often only partially laminar. Especially when streams cross, and around congested chokepoints, this assumption simply does not hold. Additional implementation would be required to deal with non-laminar or generally omni-directional crowd flows.

We conclude that this is the reason the robot does not always follow flows and avoid bumps: the scenarios we chose to focus the behaviour on are mixed with other scenarios, such as crossing a crowd, that are not explicitly considered. As such, the robot resorts to its basic non-crowd routine of attempting to follow the most efficient path, which effectively bypasses the behaviour whose safety we wish to test. From this simulation we can therefore only conclude that simplistic pathfinding behaviour with obstacle avoidance is sufficient to generate safe behaviour for navigating sparsely populated areas of Atlas.

To show the safety of the behaviour itself, we thus decided to create more focused environments that force compliance with the robot's assumptions about the crowd.

Simulation: Micro-simulations

Screenshot showing the micro simulation, where the robot is following a person that is suddenly stopping

To test the safety of the robot's behaviour implementation, we created specific scenarios that are better suited to showcase the intended behaviour of the robot and that together cover a large subset of the problems the robot can solve. These scenarios were created after running the Social Force simulation and are controlled instances of situations the robot agent encountered during its navigation in that simulation.

The advantage of these scenarios is that they are altered to force compliance with the robot's assumptions about the crowd, as described in the scenario and behaviour sections of this wiki. As a consequence, the robot shows the behaviour that is described, and thus its safety in situations encountered in the Atlas-representative model can be tested with the correct behaviour in place.

Each scenario was run a total of 10 times: 5 times to observe the robot's behaviour, and another 5 times to obtain force measurements during collisions that may occur. In each iteration, the parameters are identical.

Micro scenario - sudden stop

This micro scenario focuses on our second scenario, "Stalled lead", in which the robot is following a person who suddenly stops. In the scenario, the robot is forced to slow its pace and bump into the person: a row of persons is placed on each side to prevent it from avoiding them. The robot slows its pace and eventually bumps into the person, until they move.

A video of the scenario in development is shown here: https://www.youtube.com/watch?v=rcPF2ZiYqlw

Simulation collision measurements
Duration [frames] | Impulse magnitude [N·s] | Number of collisions | Average force [N]
44 | 67.0125 | 8 | 91.3807
38 | 56.0443 | 6 | 88.4910
41 | 62.1002 | 8 | 90.8783
41 | 63.2706 | 8 | 92.5911
40 | 59.3228 | 6 | 88.9842

Micro scenario - intersecting agents

During this scenario, the robot follows a person while another person crosses the space between the robot and its lead. The robot shows it is capable of detecting the crossing person in time and reacts by slowing to a near halt, allowing the crossing person to pass in front of it. When the person has passed, the robot accelerates, and we observe that it returns to the same following distance as before.

Screenshot showing the second micro simulation, where the robot is cut off by a crossing person while following a lead.

We observed that the robot is now indeed capable of avoiding collisions instead of immediately resorting to bumping. As a result, none of the 10 iterations run on this scenario resulted in any collisions.

A video of the scenario in development is shown here: https://youtu.be/J_IOsJ16ifs

Micro scenario - Results

During the above scenarios, a script was attached to the robot that measured the number of collision events, the largest duration in frames (where 60 frames equal 1 second) of the collision events, and the largest impulse measured during the collisions.

The evaluating script shows that for 5 separate iterations of the first scenario, the average force applied is well below the 150 Newton threshold. It should be noted that the script yields the largest force and the longest corresponding collision time of each run. The number of collision events is computed using the convex rigid body of the robot mesh, which means that the reported number of collisions is likely higher than in reality, as the convex hull encapsulates space below the whiskers that is not actually occupied by the robot.
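As a sanity check, the average-force column of the measurement table can be reproduced from the duration and impulse columns, since the average force is the impulse divided by the contact time (frames at 60 fps):

```python
def average_force(impulse_ns, duration_frames, fps=60):
    # F_avg = impulse / contact time, with contact time = frames / fps.
    return impulse_ns / (duration_frames / fps)
```

For example, 67.0125 N·s over 44 frames gives 67.0125 / (44 / 60) ≈ 91.38 N, matching the table's first row.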

Conclusion

The micro simulations show that, if the assumptions on the robot's behaviour are met, the total average force applied during the simulation is below the 150 N threshold laid out in the ISO standard. In addition, the simulation shows that the behaviour of the robot successfully avoids contact in crowds unless required, satisfying the force-limitation requirements in the ISO standard. We thus conclude that the proposed behaviour in this concept is safe, provided the crowd behaviour is captured by the scenarios previously discussed in this document.


Conclusion

Project findings

To the research question 'How should robots socially navigate through unidirectional pedestrian crowds while guiding visually impaired users?' we have given an answer in the form of the behavioural descriptions provided under the scenarios. The micro-simulations show that it is safe to act in accordance with at least some of these behavioural rules. The simulations should, however, not be seen as definitive proof, because they use Unity's physics engine, which lacks any kind of material simulation. To verify the safety claims made in this project, it would be best to run actual material simulations to find exact pressures. Furthermore, most of the behaviour has not been tested.

Overall, this behaviour has its uses: a navigation method like this, which does not rely on perfect information, allows the robot to neglect some observations, simplifying the sensors that are necessary. It also makes the robot more robust to small changes; for example, a non-living entity will not change how the robot behaves.

Future research

The behaviour as described in the scenarios should be implemented in a more advanced simulation. This can be done in a discrete manner (a rule-based agent) or a more inspired manner (a utility-based or learning agent, for which the descriptions would act more like guidelines).

The acceptance of the design by crowds and users should be verified; this is a point that was lacking in this research. César López mentioned that this can be designed for using established research as a guideline, but that it is ultimately verified with a physical prototype and a survey designed for such research.

The design could also be made more detailed by adding any of the assumed working pieces mentioned in the problem scoping including adding behaviour for different kinds of dense crowds:

  • Localization of the guide
  • Identification of obstacles or other persons
  • Navigation in sparse crowds
  • Navigation in dense crowds
  • Overarching strategic planning (e.g., navigating between multiple floors or buildings)
  • Interaction with infrastructure (e.g., Doors, elevators, stairs, etc.)
  • Effective communication with the user (e.g., user being able to set a goal for the guide)

Any of the behavioural changes or additions would require some kind of transitional system to switch between them. López mentioned that this can be done by selecting the behavioural model for which all conditions are met, but that implementing a general navigation method is a good way to make sure the guide always has something to fall back on.

Finally, the risks and hazards of this design should be worked out in even more detail (like mechanical failure).

Project evaluation

First, it is important to note that what is presented in this report is not a full 8 weeks of work for 6 students. This is due to the change of subject after 2 weeks, and the further 2 weeks it took to narrow the problem statement down enough to work on. This left a final 4 weeks in which a lot of work was done. During these remaining weeks, after the second meeting with López, it became clear that the scenarios that were worked on were too extensive and fell outside the scope of the project: walking along with a unidirectional crowd.

After the final presentation, there was a final meeting with César López in which the end result was evaluated; the main points are discussed here.

For this type of research, safety is usually taken care of in the design process, before development, by using predetermined safety standards for such products. Due to time constraints, only limited safety research was done alongside the making of the simulation. At the moment there is no in-depth safety analysis in which possible hazards are identified and risks and consequences are determined; the main focus of the design is based on research into what might work when navigating a robot through a crowd.

Furthermore, the simulation should have been more constrained from the beginning to fit the chosen problem. This again shows how the scoping of the research question should have been done earlier in the project, which would have allowed the assumptions for the behaviour to be met. Making a simulation with clear, satisfied assumptions allows the behaviour of the design to be formed more intelligently, using a more iterative process, instead of the current methods.

Appendix

Code:

The code for the simulation can be found on the following GitHub page: https://github.com/JJellie/VirtualCrowdSim

Below, some papers used in the research for the guide robot are summarized. These papers mostly cover the state of the art of the hard- and software of guide robots and crowd navigation. The summaries can be read to get a deeper understanding of the state of the art.

Literature Research

Overview
Paper Title Reference Reader
Modelling an accelerometer for robot position estimation [31] Jelmer S
An introduction to inertial navigation [32] Jelmer S
Position estimation for mobile robot using in-plane 3-axis IMU and active beacon [33] Jelmer S
Stepper motors: fundamentals, applications and design [34] Joaquim
Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities [35] Jelmer L
Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization [36] Jelmer L
Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry [37] Jelmer L
Optical 3D laser measurement system for navigation of autonomous mobile robot [38] Boril
A mobile robot based system for fully automated thermal 3D mapping [39] Boril
A review of 3D reconstruction techniques in civil engineering and their applications [40] Boril
2D LiDAR and Camera Fusion in 3D Modeling of Indoor Environment [41] Boril
A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR [42] Jelmer L
An information-based exploration strategy for environment mapping with mobile robots [43] Jelmer S
Mobile robot localization using landmarks [44] Jelmer S
The Fuzzy Control Approach for a Quadruped Robot Guide Dog [12] Wouter
Design of a Portable Indoor Guide Robot for Blind People [13] Wouter
Guiding visually impaired people in the exhibition [45] Joaquim
CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People [11] Boril
Tour-Guide Robot [46] Boril
Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques [10] Boril

Modelling an accelerometer for robot position estimation

The paper discusses the need for high-precision models of location and rotation sensors in specific robot and imaging use-cases, specifically highlighting SLAM systems (Simultaneous Localization and Mapping systems).

It highlights sensors that we may also need: " In this system the orientation data rely on inertial sensors. Magnetometer, accelerometer and gyroscope placed on a single board are used to determine the actual rotation of an object. "

It mentions that, in order to derive position data from acceleration, the acceleration needs to be integrated twice, which tends to yield great inaccuracy.

Drawback: the robot needs to stop after a short time (to re-calibrate) when using double integration, to minimize error accumulation: “Double integration of an acceleration error of 0.1g would mean a position error of more than 350 m at the end of the test”.
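The quoted error growth can be illustrated numerically. The sketch below is our own, not from the paper: it double-integrates a constant accelerometer bias and shows the quadratic growth of the position error (analytically 0.5·b·t²).

```python
def drift_from_bias(bias_ms2, duration_s, dt=0.01):
    # Euler double integration of a constant acceleration bias.
    velocity = position = 0.0
    t = 0.0
    while t < duration_s:
        velocity += bias_ms2 * dt   # first integration: acceleration -> velocity
        position += velocity * dt   # second integration: velocity -> position
        t += dt
    return position
```

With a 0.1 g (about 0.981 m/s²) bias, the position error passes 350 m well within a minute, matching the order of magnitude quoted above.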

An issue in modelling the sensors is that rotation is measured via gravity, which is not influenced by, for example, yaw, and which becomes more complicated under linear acceleration. The paper models acceleration and rotation with various lengthy equations and matrices and applies noise and other real-world modifiers to the generated data.

It notably uses Cartesian and homogeneous coordinates to separate and combine the different components of the final model, such as rotation and translation. These components are shown in matrix form and are derived from the specifications of real-world sensors, known and common effects, and mathematical derivations of the latter two.

The proposed model can be used to test code for our robot's position computations.

An introduction to inertial navigation

This paper (in the form of a report) is meant as a guide to determining position and other navigation data from inertial sensors like gyroscopes, accelerometers and IMUs in general.

It starts by explaining the inner workings of a general IMU and gives an overview of an algorithm used to determine position from the sensors' readings using integration, showing what the intermediate values represent using pictograms.

It then proceeds to discuss various types of gyroscopes, their ways of measuring rotation (such as light interference), and the resulting effects on measurements, which are neatly summarized in equations and tables. It takes a similar approach for linear acceleration measurement devices.

In the latter half of the paper, concepts and methods relevant to processing the introduced signals are explained, and most importantly it discusses how to partially account for some of the sensor errors. It starts by explaining how to account for noise using the Allan variance and shows how this affects the values from a gyroscope.

Next, the paper introduces the theory behind tracking orientation, velocity and position. It talks about how errors in previous steps propagate through the process, resulting in the infamously dangerous accumulation of inaccuracy that plagues such systems.

Lastly, it shows how to simulate data from the sensors discussed earlier. Note, though, that the previously summarized paper already discusses a more accurate and recent algorithm (building on this report).

Position estimation for mobile robot using in-plane 3-axis IMU and active beacon

The paper highlights two types of position determination: absolute (does not depend on the previous location) and relative (does depend on the previous location). It goes on to highlight the advantages and disadvantages of several location-determination systems, and then proposes a navigation system that mitigates these flaws as much as possible.

The paper continues by describing the sensors used to construct the in-plane 3-axis IMU: an x/y accelerometer and a z-axis gyroscope.

Then, the ABS (active beacon system) is described. It consists of 4 beacons mounted to the ceiling and 2 ultrasonic sensors attached to the robot; the technique essentially uses radio-frequency triangulation to determine the absolute position of the robot. The last sensor described is an odometer, which needs no further explanation.

Then, the paper discusses the model used to represent the system in code. Notably, the system is somewhat easier to understand because the in-plane measurements restrict much of the complexity of the robot's position to 2 dimensions. The paper also discusses the filtering and processing techniques used, such as a Kalman filter to combat noise and drift. The final processing pipeline is immensely complex due to the inclusion of bounce, collision and beacon-failure handling.
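The Kalman filtering idea can be illustrated with a minimal one-dimensional sketch. This is our own simplification; the paper's filter operates on the full in-plane state and fuses IMU, odometry and beacon data.

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    # x: state estimate, p: its variance, z: new noisy measurement,
    # q: process-noise variance, r: measurement-noise variance.
    p = p + q                  # predict: uncertainty grows by process noise
    k = p / (p + r)            # Kalman gain: trust in the new measurement
    x = x + k * (z - x)        # update: blend prediction and measurement
    p = (1 - k) * p            # uncertainty shrinks after the update
    return x, p
```

Feeding noisy measurements of a constant position drives the estimate toward the true value while the variance shrinks, which is exactly how the filter combats sensor noise and drift.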

Lastly, the paper discusses the results of tests on the accuracy of the system, which showed a very accurate system, even when a beacon is lost.


Stepper motors: fundamentals, applications and design

This book goes over what stepper motors are, the variations of stepper motors, and their make-up. Furthermore, it goes in depth into how they are controlled.

Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities

According to the authors, advances in visual-inertial odometry (VIO), the process of determining the pose and velocity (state) of an agent using camera input, have opened up a range of applications like AR and drone navigation. Most VIO systems use point clouds, and to provide real-time estimates of the agent's state they create sparse maps of the surroundings using power-hungry GPU operations. In the paper, the authors propose a method to incrementally create a 3D mesh of the VIO optimization while bounding memory use and computational power.

The authors' approach is to create a 2D Delaunay triangulation from tracked keypoints and then project it into 3D. This projection can have issues where points are close in 2D but not in 3D, which is solved using geometric filters. Some algorithms update a mesh for every frame, but the authors maintain a mesh over multiple frames to reduce computational complexity, capture more of the scene, and capture structural regularities. Using the triangular faces of the mesh, they are able to extract geometry non-iteratively.

In the next part of the paper, they discuss solving the optimization problem derived from the previously mentioned specifications.

Finally, the authors share benchmarking results on the EuRoC dataset, which are promising: in environments with regularities like walls and floors the method performs well. The pipeline proposed in this paper provides increased accuracy at the cost of some calculation time.

Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization

In the robotics community, visual and inertial cues have long been fused using filtering; however, this requires linearity, whereas non-linear optimization for visual SLAM increases quality and performance and reduces computational complexity.

The contributions the authors claim are: constructing a pose graph without expressing global pose uncertainty, providing a fully probabilistic derivation of the IMU error terms, and developing both hardware and software for accurate real-time SLAM.

The paper describes in great detail how the optimization objectives were reached and how the non-linear SLAM can be integrated with the IMU using a chi-square test instead of a RANSAC computation.

Finally, they show the results of a test with their developed prototype, which show that tightly integrating the IMU with a visual SLAM system substantially improves performance and decreases the deviation from the ground truth to close to zero percent after 90 m of travelled distance.

Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry

The authors of this paper propose an algorithm that fuses feature tracks from any number of cameras along with IMU measurements into a single optimization process, handles feature tracking on cameras with overlapping fields of view, and includes a subroutine that selects the best landmarks for optimization, reducing computation time. They also present results from extensive testing.

First the authors state the optimization objective, after which they give the factor graph formulation with the residuals and covariances of the IMU and visual factors. They then explain their approach to cross-camera feature tracking: a feature's location is projected from one camera into the other using either stereo camera depth or the IMU estimate, and then refined by matching it to the closest image feature in the target camera by Euclidean distance. After this they explain feature selection, which is done by computing a Jacobian matrix and then finding the submatrix that best preserves its spectral distribution.
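The cross-camera refinement step, snapping a projected feature to the nearest detected feature by Euclidean distance, can be sketched as follows (the `max_dist` gate and the pixel values are hypothetical):

```python
import numpy as np

def match_projected_feature(projected_px, candidate_px, max_dist=5.0):
    """Refine a feature location projected from another camera by
    snapping it to the closest detected feature in the target image;
    returns the candidate index, or None if nothing is close enough."""
    d = np.linalg.norm(candidate_px - projected_px, axis=1)
    i = int(np.argmin(d))
    return i if d[i] <= max_dist else None

# detected features in the target camera (pixel coordinates, invented)
cands = np.array([[100.0, 50.0], [300.0, 40.0], [102.0, 52.0]])
```

A projection landing near (100, 50) snaps to the first candidate, while a projection far from all candidates is rejected rather than force-matched.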

Finally, experimental results show that their system stays closer to the ground truth than other similar systems.

Optical 3D laser measurement system for navigation of autonomous mobile robot

This paper presents an autonomous mobile robot that uses a 3D laser navigation system (TVS) to detect and avoid obstacles on its path to a goal. The paper starts by describing the navigation system in detail. It uses a rotatable laser and a scanning aperture to form laser triangles from the light reflected off an obstacle; from these triangles, the authors obtain the information necessary to calculate the 3D coordinates. For the robot base, the authors used the Pioneer 3-AT, a four-wheel, four-motor skid-steer robotics platform.

After this the authors go in depth on how the robot avoids obstacles. Using optical encoders on the wheels and a 3-axis accelerometer, the robot keeps track of its travelled distance and orientation. IR sensors detect obstacles a certain distance ahead, after which the robot performs a TVS scan to avoid the obstacle. The avoidance trajectory is calculated from 50 points in the space in front of the robot, which form a curve that the robot then follows. Thus, after start-up the robot calculates an initial trajectory to the goal location and recalculates it whenever it encounters an obstacle. Finally, the authors present their results from simulating this robot in MATLAB and analyse its performance.
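The paper does not specify the curve model, but fitting and resampling a smooth path through a set of waypoints might be sketched with a simple polynomial fit (the waypoints below are invented for illustration, and 50 samples mirror the paper's 50 points):

```python
import numpy as np

def avoidance_curve(points, degree=3, n_samples=50):
    """Fit a polynomial y(x) through planner waypoints and resample it
    into a dense path; a stand-in for the paper's 50-point curve."""
    x, y = points[:, 0], points[:, 1]
    coeffs = np.polyfit(x, y, degree)          # least-squares fit
    xs = np.linspace(x.min(), x.max(), n_samples)
    return np.column_stack([xs, np.polyval(coeffs, xs)])

# waypoints bending around an obstacle near (1, 0.5) (illustrative)
wps = np.array([[0.0, 0.0], [0.5, 0.4], [1.0, 0.6], [1.5, 0.4], [2.0, 0.0]])
path = avoidance_curve(wps)
```

The polynomial choice is only one option; a spline or clothoid would serve the same purpose of producing a drivable curve from sparse points.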

A mobile robot based system for fully automated thermal 3D mapping

This paper showcases a fully autonomous robot that can create 3D thermal models of rooms. The authors begin by describing the robot's components and how the 3D sensor (a Riegl VZ-400 terrestrial laser scanner) and the thermal camera (an Optris PI160) are mutually calibrated. Both are mounted on top of the robot, together with a Logitech QuickCam Pro 9000 webcam. The acquired 3D data is merged with the thermal and digital images via geometric camera calibration. After that the authors explain the sensor placement. To address the memory-intensive nature of 3D planning, the paper combines 2D and 3D planning: the robot starts off using only 2D measurements and switches to 3D next-best-view (NBV) planning once it detects an enclosed space. The 2D NBV algorithm starts with a blank map and explores based on the initial scan, where all inputs are range values parallel to the floor, distributed over a 360-degree field of view. A grid map stores the static and dynamic obstacle information, while a polygonal representation of the environment stores its edges (walls, obstacles). The NBV process consists of three consecutive steps: vectorization (obtaining line segments from the input range data), creation of the exploration polygon, and selection of the next sensor position (choosing the next goal). Room detection is grounded in detecting closed spaces in the 2D map of the environment. Finally, the authors showcase results from their experiments with the robot: 2D and 3D thermal maps of building floors, with the 3D reconstruction done using the Marching Cubes algorithm.

A review of 3D reconstruction techniques in civil engineering and their applications

This paper presents and reviews techniques for creating 3D reconstructions of objects from the outputs of data-collection equipment. First, the authors survey the currently most used equipment for acquiring 3D data: laser scanners (LiDAR), monocular and binocular cameras, and video cameras, which is also the equipment the paper focuses on. From this they classify camera-based 3D reconstruction into two categories: point-based and line-based. Furthermore, the paper divides 3D reconstruction into two steps: generating point clouds and processing those point clouds. For monocular images, generating a point cloud involves:

  • Feature extraction: gaining feature points that reflect the initial structure of the scene, using feature point detectors and feature point descriptors.
  • Feature matching: matching the feature points of each image pair.
  • Camera motion estimation: finding the camera parameters of each image.
  • Sparse 3D reconstruction: computing the 3D location of points from the feature points and camera parameters via the triangulation algorithm, generating a point cloud.
  • Model parameter correction: correcting the camera parameters of each image, which leads to precise 3D locations of the points in the point cloud.
  • Absolute scale recovery: determining the absolute scale of the sparse point cloud using dimensions or points of known absolute scale within it.
  • Dense 3D reconstruction: using all of the above to generate a dense point cloud.

For stereo images, the camera motion estimation and absolute scale recovery steps are skipped; instead, the camera must be calibrated before feature extraction. After this the authors explain how to generate point clouds from video images.

In the section on data-processing techniques, the authors showcase several algorithms: ICP for point cloud processing, PSR for mesh reconstruction, and, for point cloud segmentation, two categories of algorithms: feature-based segmentation (region growth and clustering, K-means clustering) and model-based segmentation (Hough transform and RANSAC). The authors then go in depth on applications of 3D reconstruction in civil engineering, such as reconstructing construction sites and the pipelines of MEP systems. Finally, they go over the issues and challenges of 3D reconstruction.
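The sparse-reconstruction step rests on the triangulation algorithm. A minimal linear (DLT) two-view triangulation, with assumed identity intrinsics and an invented camera baseline, looks like this:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera
    projection matrices and the normalized pixel observations x1, x2:
    stack the cross-product constraints and take the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution
    return X[:3] / X[3]

# two identity-intrinsics cameras, the second shifted 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                              # view 1
x2 = (X_true[:2] + np.array([-1.0, 0.0])) / X_true[2]    # view 2
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free observations the recovered point matches the ground truth; with noisy data the SVD gives the least-squares solution instead.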


2D LiDAR and Camera Fusion in 3D Modelling of Indoor Environment

This paper goes over how to effectively fuse data from multiple sensors to create a 3D model. An entry-level camera provides colour and texture information, while a 2D LiDAR serves as the range sensor. To calibrate the correspondences between camera and LiDAR, a planar checkerboard pattern is used to extract corners from the camera image and from the intensity image of the 2D LiDAR; the authors thus rely on 2D-2D correspondences. A pinhole camera model is applied to project 3D point clouds onto 2D planes, and RANSAC is used to estimate the point-to-point correspondence. Using transformation matrices, the authors match the colour images of the digital camera with the intensity images. By aligning 3D colour point clouds from different locations, the authors generate the 3D model of the environment. Via a WidowX turret servo, the 2D LiDAR is moved in the vertical direction to cover a 180-degree horizontal field of view, while the digital camera rotates in both vertical and horizontal directions to generate panoramas by stitching a series of images. In the third section the authors go over how they calibrated the two image sources: a fiducial target is used to determine the rigid transformation between the camera images and the 3D point cloud, RANSAC rejects outliers during the calibration process, and a checkerboard with 7x9 squares is employed to find correspondences between LiDAR and camera. Finally, the authors go over their results.
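The pinhole projection used to map 3D points onto the image plane can be sketched as follows (the intrinsic matrix `K` and the points below are assumed example values, not the paper's calibration):

```python
import numpy as np

def project(K, R, t, points3d):
    """Project 3D world points into the image with a pinhole model:
    transform into the camera frame with [R|t], apply the intrinsics K,
    then divide by depth to get pixel coordinates."""
    cam = (R @ points3d.T).T + t          # world -> camera frame
    px = (K @ cam.T).T                    # homogeneous pixel coords
    return px[:, :2] / px[:, 2:3]         # perspective division

K = np.array([[500.0, 0.0, 320.0],        # assumed focal length / center
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)             # camera at the world origin
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])
uv = project(K, R, t, pts)
```

In the paper's setting, `R` and `t` would come from the LiDAR-to-camera calibration, so LiDAR points can be coloured by the pixels they land on.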


A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR

This paper reviews multiple SLAM systems whose main vision component is a 3D LiDAR integrated with other sensors. LiDAR, camera and IMU are the three most used components, each with its own advantages and disadvantages. The paper discusses LiDAR-IMU coupled systems and visual-LiDAR-IMU coupled systems, both tightly and loosely coupled.

Most loosely coupled systems are based on the original LOAM algorithm by J. Zhang et al.; these systems are relatively new in the sense that Zhang's paper dates from 2014, yet there have already been many advancements in this technology. LiDAR-IMU systems often use the IMU to increase the accuracy of the LiDAR measurements, and new developments involve speeding up the ICP algorithm for combining point clouds with clever tricks and/or GPU acceleration. LiDAR-visual-IMU systems exploit the complementary properties of LiDAR and cameras: LiDAR needs textured environments while vision sensors lack the ability to perceive depth, so the cameras are used for feature tracking and, together with the LiDAR data, allow more accurate pose estimation.

In contrast to the speed and low computational complexity of loosely coupled systems, tightly coupled systems sacrifice some of both for greater accuracy. A central element of these systems is the derivation of the error term and pre-integration formula for the IMU, which can be used to increase the accuracy of the IMU measurements by estimating the IMU bias and noise. In LiDAR-IMU systems this derivation is used for removing distortion from LiDAR scans, for jointly optimizing both measurements, and in many different approaches to coupling the two devices for greater accuracy and computation speed. LiDAR-visual-IMU systems use the strong correlation between images and point clouds to produce more accurate pose estimation.

The authors then perform comparisons on SLAM datasets, where most recent SLAM systems estimate pose very close to the ground truth, even over distances of several hundred meters.

An information-based exploration strategy for environment mapping with mobile robots

This paper proposes a mathematically oriented way of mapping environments. Based on relative entropy, the authors develop a method to produce a planar map of an environment, using a laser range finder to generate local point-based maps that are compiled into a global map. Notably, the paper also discusses how to localize the robot in the resulting global map.

The generated map is a continuous curve representing the boundary between navigable space and obstacles. The curve is defined by a large set of control points obtained from the range finder. In the proposed method, the robot generates and moves to a set of observation points, at each of which it takes a 360-degree snapshot of the environment with the range finder, measuring points a specified number of degrees apart at some distance from the sensor. The measured points form a local map, which is also characterised by the uncertainty of the measurements. Each local map is then integrated into the global map (the combination of all local maps), which is used to determine the next observation point and the position of the robot in global space.

The researchers go on to describe how the quality of the proposal is measured, namely by the distance travelled and the uncertainty of the map. The uncertainty is a function of the uncertainty in the robot's current position and the accuracy of the range finder. The robot has a pre-computed expected position and a post-measurement position for each point, which are evaluated through relative entropy to compute the increment of point information. This and similar equations for the robot's position data are used to select the optimal observation points. Lastly, the points of all observation points are combined into one map using the robot's position data.
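For Gaussian position estimates, the relative entropy between the predicted and post-measurement distributions has a closed form; a sketch under that Gaussian assumption (the means and covariances below are invented examples):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Relative entropy D(N0 || N1) between two Gaussian estimates of a
    map point; a large value means the measurement was informative."""
    k = len(mu0)
    S1inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + diff @ S1inv @ diff
                  - k + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

prior = (np.zeros(2), np.eye(2) * 1.0)          # predicted point, uncertain
post = (np.array([0.1, 0.0]), np.eye(2) * 0.1)  # sharper after measuring
gain = kl_gaussian(post[0], post[1], prior[0], prior[1])
```

Summing such gains over candidate observation points, against the cost of the distance travelled, is the kind of trade-off the paper's selection criterion formalizes.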

Mobile Robot Localization Using Landmarks

The paper discusses a method to determine a robot's position using landmarks as reference points. This is a more absolute system than just inertia-based localization. The paper assumes that the robot can identify landmarks and measure their position relative to each other. Like other papers, it highlights its importance due to error accumulation on relative methods.

It highlights the robot's capability to:

  • find landmarks
  • associate landmarks with points on a map
  • use this data to compute its position.

It uses triangulation between three landmarks to find its position with low error. The paper also discusses how to re-identify landmarks that were misjudged using new data. The robot takes two images (using a reflective ball to create a 360-degree image) and solves the correspondence problem (identifying an object from two angles) to find its location. In the paper, the technique is tested in an office environment.

The paper discusses how to perform triangulation using an external coordinate system for the localisation of the robot. The vectors to the landmarks are compared, and from their angles and magnitudes the position can be computed. Next, the paper discusses the same technique adjusted for noisy data, using least squares to derive an estimate based on the robot's rotation relative to at least two landmarks. The paper then evaluates the expected distribution of the angle error and of the position on each axis to correct for the noise using the method described above.
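The noise-free case reduces to a small least-squares problem: each bearing to a known landmark constrains the robot to a line. This sketch assumes the robot's heading is known, so bearings are global angles (a simplification of the paper's formulation; the landmark coordinates are invented):

```python
import numpy as np

def localize(landmarks, bearings):
    """Least-squares robot position from global bearing angles (radians)
    to known landmarks: (Lx - px) sin(t) - (Ly - py) cos(t) = 0 gives
    one linear equation in the position (px, py) per landmark."""
    A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
    b = A[:, 0] * landmarks[:, 0] + A[:, 1] * landmarks[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

L = np.array([[0.0, 5.0], [5.0, 0.0], [5.0, 5.0]])   # landmark positions
true = np.array([1.0, 1.0])                          # hidden robot pose
theta = np.arctan2(L[:, 1] - true[1], L[:, 0] - true[0])
est = localize(L, theta)
```

With noisy bearings the same least-squares machinery yields the estimate whose error distribution the paper analyses.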


The Fuzzy Control Approach for a Quadruped Robot Guide Dog [47]

This paper essentially creates a robotic guide dog: think of Spot from Boston Dynamics with a leash, trained to guide blind people. An advantage is that Spot has proven able to walk stairs, so it should be fast. A drawback is that guiding blind people is hard from its low viewpoint.

The paper also presents a 'fuzzy' control process which ensures that variation in road surfaces does not affect the dog. The rest of the paper shows how this controller can be designed; it does not show how to guide a blind person.

Their conclusion shows that the fuzzy algorithm improved how smoothly the dog walked.

Design of a Portable Indoor Guide Robot for Blind People

This design approaches the guide-dog replacement differently, namely not with a quadruped robot. It is aimed mainly at indoor use. The paper also did some research on what blind people need; a survey found, for example, that 90% of respondents worry about obstacles in the air while travelling. The design is essentially a motorized walker with sensors on it.

The robot is foldable, with an unfolded height of 700 mm, and the mechanical design is well explained. The design has no real stair-climbing capability.

The conclusion states that the robot performed well as a low-cost, convenient-to-carry blind-guide robot with strong perception.

Guiding visually impaired people in the exhibition

This paper talks about a robotic guide used to help (partially) blind people navigate an exhibition (a noisy, crowded (4 square meters/person), unfamiliar environment). These people are often faced with the challenge of maintaining spatial orientation; ‘the ability to establish awareness of space position relative to landmarks in the surrounding environment’. The paper proposes that supporting functional independence of these people can thus be achieved by ‘providing references and sorts of landmarks to enhance awareness of the surroundings’.

The technology used in this paper to achieve this is a handheld device capable of radio-frequency localization. To prepare the environment, an RFID sensor was placed every 300 square meters (roughly a 17x17 m area) at points of interest, services and major areas. The paper does not go into the details of how the localization is done, but an educated guess is that the guiding devices carried by the users are scanned by these fixed sensors, which then communicate to calculate the user's position. Keep in mind this exhibition took place in 2006; the resolution achieved was 5 meters (the minimal distance between distinguishable tags).

The interface of the device uses hardware buttons, which the authors find well suited for visually impaired people. Apart from standard navigation and audio control buttons, the device was also equipped with a button giving quick access to an emergency number.

In this particular use case the device guided people using an event system which would ask the user if they wanted to hear a description of their environment. The event was triggered when the handheld device recognized signals from local sensors. The description would include:

  • an extended title
  • the description of the point of interest
  • one or more extended descriptions
  • descriptions to invite and spatially guide the user near the featured flowers and plants.

The device would also describe nearby points of interest such as crossroads, entrances, exits, restaurants and toilets, so that users can create their own mental map of their surroundings, allowing them to build and follow their own path, unconstrained by the predefined route.

To overcome noise, the user was provided with headphones. Another problem was that some users were frustrated by the device's silence when they were not at a point of interest; this was solved by providing a message stating as much.

Visually impaired users recognized that the device allowed them a degree of freedom which traditional (fixed) guides do not.

The authors end by saying the experience would probably be significantly improved with better localization technology.

CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People

This paper goes over the design of an autonomous navigation robot for blind people in unfamiliar environments. The paper also includes the results of a user study done for this product. The robot uses a floorplan with relevant Points-of-Interest, a LiDAR and a stereo camera with convolutional neural networks for localisation, path planning and obstacle avoidance.

Design

The robot moves as a differential steered system, with motors controlled by a RoboClaw controller, and allows users to manually push or pull it. It uses a LiDAR and a stereo camera (ZED) and is implemented with ROS (Robot Operating System). It is shaped like a suitcase so that it can blend in with the environment; held on the left side and standing slightly in front of the user, it also mimics a guide dog, allowing the robot to protect the user from collisions.

For mapping, the robot relies on a floorplan with the locations of points of interest; the environment is mapped beforehand via the LiDAR, which is placed on the frontal edge of the robot. For localisation, it estimates its current location using wheel odometry and LiDAR scanning, comparing the real-time scan and map to the previously generated one using the Adaptive Monte Carlo Localisation (AMCL) package of ROS. In addition, odometry information can be computed from the LiDAR and stereo camera. For path planning, a path on the LiDAR map is planned from the user's starting point to the destination; to avoid obstacles and navigate a dynamic environment, local low-level pathing is implemented using the ROS navigation packages. The robot also considers the space occupied by both itself and the user in its pathfinding, via a custom algorithm. The robot additionally provides haptic feedback: the authors use vibro-tactile feedback (different vibration locations and patterns) on the handle to convey the robot's intent to the user, and buttons on the handle let the user change the robot's speed. After this explanation, the paper goes over the conducted user study and its results.
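The wheel-odometry part of the localisation can be illustrated with the standard differential-drive dead-reckoning update (a generic textbook model, not CaBot's actual code; the wheel base is an assumed value):

```python
import math

def odom_step(x, y, theta, d_left, d_right, wheel_base):
    """One wheel-odometry update for a differential-drive base: average
    the wheel travel for forward motion, use the difference for the
    heading change, and integrate at the midpoint heading. This is the
    kind of dead reckoning AMCL fuses with LiDAR scans."""
    d = (d_left + d_right) / 2.0                 # midpoint travel
    dtheta = (d_right - d_left) / wheel_base     # heading change
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, (theta + dtheta) % (2 * math.pi)

# both wheels roll 1 m forward: the base drives straight ahead
pose = odom_step(0.0, 0.0, 0.0, 1.0, 1.0, wheel_base=0.5)
```

Because encoder errors accumulate over distance, this estimate drifts, which is exactly why it is corrected against the LiDAR map.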

Tour-Guide Robot

This paper introduces a tour-guide robot using Kinect technology. The robot follows tourists wherever they go, avoiding obstacles and providing information. The paper begins by naming some previous implementations of such tour-guide robots: Rhino, Minerva, Asimo, Tawabo, the Toyota tour guide robot, and Skycall. The Kinect is used to recognize gestures, spoken commands and faces; its main parts are an RGB camera, a 3D depth-sensing system and a multi-array microphone. The robot's platform has ultrasonic sensors to detect obstacles, and RFID is used to detect the RFID cards around the museum so as to correctly identify each item and play the corresponding audio file. The base robot platform is Eddie.


Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques

This paper reviews existing autonomous campus and tour-guiding robots. SLAM is the most often used technique, building a map of the environment and guiding the robot to the goal position. Common techniques for robot navigation include human-machine interfaces, speech synthesis, obstacle avoidance and 3D mapping. ROS is a popular open-source framework for operating autonomous robots, providing services designed for a heterogeneous computer cluster. SLAM is achieved via laser scanners (LiDAR) or RGB-D cameras. The paper names several popular robots: TurtleBot 2, a low-cost, ROS-enabled autonomous robot using a Microsoft Kinect (RGB-D) camera; TurtleBot 3, its upgraded version, which uses LiDAR instead; Pepper, a wheeled service robot used for assisting people in public places like malls, museums and hotels; and REEM-C, a ROS-enabled autonomous humanoid robot using an RGB-D camera for 3D mapping. The paper contains useful tables on these robots, as well as on popular ROS computing platforms and mapping sensors. The authors describe the use of LiDAR measurements on a road's surface to detect road boundaries, where the existence of curbs is determined with a multiple-model method. They also propose using a Kinect v2 sensor rather than range finders such as 2D LiDAR, as it allows dense and robust maps of the environment to be created; it is based on the time-of-flight measurement principle and can be used outdoors. The paper also introduces noise models for the Kinect v2 sensor for calibration in both the axial and lateral directions, taking the measurement distance, angle and sunlight incidence into account. As an example of a tour-guide robot, the paper presents Nao, which provides tours of a laboratory; this robot focuses more on human interaction and can thus perform and detect gestures. NTU-1 is an autonomous tour-guide robot that gives tours on the campus of the National Taiwan University.
It is a big robot, weighing around 80 kg, with a two-wheel differential drive actuated by DC brushless motors. It uses multiple sensing technologies such as DGPS, dead reckoning and a digital compass, all fused by means of Extended Kalman Filtering. For obstacle avoidance and shortest-path planning, 12 ultrasonic sensors are used, allowing the robot to detect objects within a range of 3 meters. Another robot explored in the paper is an intelligent robot for guiding the visually impaired in urban environments; it uses two laser range finders, GPS, a camera and a compass. Other touring robots explored in the paper are ASKA, Urbano, Indigo, LeBlanc, Konrad and Suse.


  1. Romlay, M. R. M., Toha, S. F., Ibrahim, A. M., & Venkat, I. (2021). Methodologies and evaluation of electronic travel aids for the visually impaired people: a review. Bulletin of Electrical Engineering and Informatics, 10(3), 1747–1758. https://doi.org/10.11591/eei.v10i3.3055
  2. Guiding visually impaired people in the exhibition (researchgate.net)
  3. What are the problems that the visually impaired face with the white cane? (n.d.). Quora. https://www.quora.com/What-are-the-problems-that-the-visually-impaired-face-with-the-white-cane
  4. Healthdirect Australia. (n.d.). Guide dogs. healthdirect. https://www.healthdirect.gov.au/guide-dogs#:~:text=Guide%20dogs%20help%20people%20who,city%20centres%20to%20quiet%20parks.
  5. What A Guide Dog Does. (n.d.). Guide Dogs Site. https://www.guidedogs.org.uk/getting-support/guide-dogs/what-a-guide-dog-does/
  6. Guide Dogs Vs. White Canes: The Comprehensive Comparison – Clovernook. (2020, 18 September). https://clovernook.org/2020/09/18/guide-dogs-vs-white-canes-the-comprehensive-comparison/
  7. Guide Dog Etiquette: What you should and shouldn’t do – Clovernook. (2020, 10 September). https://clovernook.org/2020/09/10/guide-dog-etiquette/
  8. Guide Dogs for the Blind. (2020, 1 July). Guide Dog Training. https://www.guidedogs.com/meet-gdb/dog-programs/guide-dog-training#:~:text=Guide%20dogs%20take%20their%20cues,they%20are%20at%20all%20times.
  9. Guide Dogs for the Blind. (2020b, July 1). Guide Dog Training. https://www.guidedogs.com/meet-gdb/dog-programs/guide-dog-training#:~:text=Guide%20dogs%20take%20their%20cues,they%20are%20at%20all%20times.
  10. 10.0 10.1 10.2 Debajyoti Bosea, Karthi Mohanb, Meera CSc, Monika Yadavc and Devender K. Saini, Review of Autonomous Campus and Tour Guiding Robots with Navigation Techniques (2021), https://www.tandfonline.com/doi/epdf/10.1080/14484846.2021.2023266?needAccess=true&role=button
  11. 11.0 11.1 11.2 João Guerreiro, Daisuke Sato, Saki Asakawa, Huixu Dong, Kris M. Kitani, Chieko Asakawa, Designing and Evaluating an Autonomous Navigation Robot for Blind People (2019), https://dl.acm.org/doi/pdf/10.1145/3308561.3353771
  12. 12.0 12.1 The Fuzzy Control Approach for a Quadruped Robot Guide Dog, https://link.springer.com/article/10.1007/s40815-020-01046-x
  13. 13.0 13.1 https://ieeexplore.ieee.org/document/9536077
  14. Unfreezing the Robot: Navigation in Dense, Interacting Crowds, Peter Trautman and Andreas Krause, 2010 https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5654369&casa_token=3UPVOvK4kjwAAAAA:IjkyGh3f-uh_x-01jDPtspxLX--eSCBTrZEGTwtVEXc8hU9D2oLLEuOCTCz6OdGHWmy76bX3JA&tag=1
  15. Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning, Changan Chen, Yuejiang Liu, Sven Kreiss and Alexandre Alahi, 2019, https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8794134&casa_token=neBCeEpBndIAAAAA:wZuGoZYF-YCscI-kJGi5ljIIGkUFpzejSTaxySxytUbIUKeV4sUZze6lZN32gw2DmKwbw-G6ZA
  16. https://youtu.be/mh5L3l_7FqE
  17. 17.0 17.1 Mavrogiannis, C., Baldini, F., Wang, A., Zhao, D., Trautman, P., Steinfeld, A., & Oh, J. (2021). Core challenges of social robot navigation: A survey. arXiv preprint arXiv:2103.05668.
  18. Helbing, D., Buzna, L., Johansson, A., & Werner, T. (2005). Self-Organized Pedestrian Crowd Dynamics: Experiments, Simulations, and Design Solutions. Transportation Science, 39(1), 1–24. https://doi.org/10.1287/trsc.1040.0108
  19. Country - The International Agency for the Prevention of Blindness (iapb.org)
  20. 20.0 20.1 Salvini, P., Paez-Granados, D. & Billard, A. Safety Concerns Emerging from Robots Navigating in Crowded Pedestrian Areas. Int J of Soc Robotics 14, 441–462 (2022). https://doi.org/10.1007/s12369-021-00796-4
  21. 21.0 21.1 21.2 CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People (acm.org)
  22. ANTHROPOMETRY AND BIOMECHANICS. (n.d.). https://msis.jsc.nasa.gov/sections/section03.htm
  23. WHO. (n.d.). ASSISTIVE PRODUCT SPECIFICATION FOR PROCUREMENT. At who.int. https://www.who.int/docs/default-source/assistive-technology-2/aps/vision/aps24-white-canes-oc-use.pdf?sfvrsn=5993e0dc_2
  24. dog-harnesses-store.co.uk. (n.d.). Best Guide Dog Harnesses in UK for Mobility Assistance. https://www.dog-harnesses-store.co.uk/guide-dog-harness-uk-c-101/#descSub
  25. Using contact-based inducement for efficient navigation in a congested environment. (2015, August 1). IEEE Conference Publication | IEEE Xplore. https://ieeexplore.ieee.org/document/7333673
  26. Trautman, P., Ma, J., Murray, R. M., & Krause, A. (2015). Robot navigation in dense human crowds: Statistical models and experimental studies of human–robot cooperation. The International Journal of Robotics Research, 34(3), 335-356.
  27. 27.0 27.1 27.2 ISO 15066:2016(EN) Robots and robotic devices — Collaborative robots, International Organization for Standardization. https://www.iso.org/standard/62996.html, 2016
  28. 28.0 28.1 28.2 ISO 10218-2:2011 Robots and robotic devices — Safety requirements for industrial robots — Part 2: Robot systems and integration, International Organization for Standardization, https://www.iso.org/standard/41571.html, 2011-07
  29. Henderson LF. The statistics of crowd fluids. Nature. 1971 Feb 5;229(5284):381-3. doi: 10.1038/229381a0. PMID: 16059256.
  30. Helbing, D., & Molnar, P. (1995). Social force model for pedestrian dynamics. Physical review, 51(5), 4282–4286. https://doi.org/10.1103/physreve.51.4282
  31. Z. Kowalczuk and T. Merta, "Modelling an accelerometer for robot position estimation," 2014 19th International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 2014, pp. 909-914, doi: 10.1109/MMAR.2014.6957478.
  32. Woodman, O. J. (2007). An introduction to inertial navigation (No. UCAM-CL-TR-696). University of Cambridge, Computer Laboratory.
  33. T. Lee, J. Shin and D. Cho, "Position estimation for mobile robot using in-plane 3-axis IMU and active beacon," 2009 IEEE International Symposium on Industrial Electronics, Seoul, Korea (South), 2009, pp. 1956-1961, doi: 10.1109/ISIE.2009.5214363.
  34. Athani, V. V. (1997). Stepper motors: fundamentals, applications and design. New Age International.
  35. https://arxiv.org/pdf/1903.01067v2.pdf
  36. http://www.roboticsproceedings.org/rss09/p37.pdf
  37. https://www.robots.ox.ac.uk/~mobile/drs/Papers/2022RAL_zhang.pdf
  38. Luis C. Básaca-PreciadoOleg Yu. SergiyenkoJulio C. Rodríguez-QuinonezXochitl GarcíaVera V. TyrsaMoises Rivas-LopezDaniel Hernandez-BalbuenaPaolo MercorelliMikhail PodrygaloAlexander GurkoIrina TabakovaOleg Starostenko (2013), Optical 3D laser measurement system for navigation of autonomous mobile robot, https://www.sciencedirect.com/science/article/pii/S0143816613002480
  39. Dorit Borrmann, Andreas Nüchter, Marija Ðakulović, Ivan Maurović, Ivan Petrović, Dinko Osmanković, Jasmin Velagić, A mobile robot based system for fully automated thermal 3D mapping (2014), https://www.sciencedirect.com/science/article/pii/S1474034614000408
  40. Zhiliang Ma, Shilong Liu, 2018, A review of 3D reconstruction techniques in civil engineering and their applications (2014), https://www.sciencedirect.com/science/article/pii/S1474034617304275?casa_token=Bv6W7b-GeUAAAAAA:nGuyojclQld2SMnIeHougCByarFJX7eu049kMp_IWrnU5e8ljX9RMao-U4vs6cB3nREk8JP3qIA
  41. Juan Li, Xiang He, Jia L, 2D LiDAR and camera fusion in 3D modeling of indoor environment (2015), https://ieeexplore.ieee.org/document/7443100
  42. https://www.mdpi.com/2072-4292/14/12/2835
  43. Francesco Amigoni, Vincenzo Caglioti, An information-based exploration strategy for environment mapping with mobile robots, Robotics and Autonomous Systems, Volume 58, Issue 5, 2010, Pages 684-699, ISSN 0921-8890, https://doi.org/10.1016/j.robot.2009.11.005. (https://www.sciencedirect.com/science/article/pii/S0921889009002024)
  44. M. Betke and L. Gurvits, "Mobile robot localization using landmarks," in IEEE Transactions on Robotics and Automation, vol. 13, no. 2, pp. 251-263, April 1997, doi: 10.1109/70.563647.
  45. Bellotti, F., Berta, R., De Gloria, A., & Margarone, M. (2006). Guiding visually impaired people in the exhibition. Mobile Guide, 6, 1-6.
  46. Asraa Al-Wazzan , Farah Al-Ali, Rawan Al-Farhan , Mohammed El-Abd, Tour-Guide Robot (2016), https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7462397
  47. The Fuzzy Control Approach for a Quadruped Robot Guide Dog | SpringerLink