PRE2018 3 Group12
Group Members
Name | Study | Student ID |
---|---|---|
Harm van den Dungen | Electrical Engineering | 1018118 |
Nol Moonen | Software Science | 1003159 |
Johan van Poppel | Software Science | 0997566 |
Maarten Flippo | Software Science | 1006482 |
Jelle Zaadnoordijk | Mechanical Engineering | 1256238 |
Problem Statement
People with a visual impairment will never be able to sense the world in the way sighted people do. Thanks to guide dogs and white canes, however, they are able to enjoy independence when navigating outside areas. Yet these aids cannot give a representation of the world beyond the range of the cane or the movements of the dog. With the use of technology that might change. Using sensors, these people could be given the ability to sense more than their immediate surroundings and to detect objects that their white cane did not contact or that the dog ignored because they were not in the way. Moreover, physical phenomena such as the Doppler effect can be used to detect motion relative to the user, further enhancing the image a visually impaired person can obtain of the world.
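As a hedged illustration (our own example, not a cited design choice): for an ultrasonic pulse of frequency <math>f_0</math> reflecting off an object that approaches the sender at relative speed <math>v</math>, with <math>c</math> the speed of sound, the received frequency is shifted by approximately

<math>\Delta f \approx \frac{2 v f_0}{c}.</math>

For example, an object approaching at 1 m/s shifts a 40 kHz pulse by roughly <math>2 \cdot 1 \cdot 40000 / 343 \approx 233</math> Hz, a shift that is in principle measurable and could be used to signal relative motion.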
Users
The users we are designing the technology for are the visually impaired. People with a visual disability often need aids to get through their daily lives. For a blind or partially sighted person, the simplest tasks can be hard to complete. While there are existing tools, such as guide dogs and white canes, these are not always sufficient.
The most important requirement of the technology is that it offers a valid alternative to existing aids. This does not necessarily mean that the technology supports the user's disability better than the alternatives; it could also mean that it is simply cheaper. If the product is cheaper, it can still be an option for people who cannot afford more costly alternatives. There are many factors that determine the value of a product; two important ones are the production and selling costs, and the support given and usability of the technology.
State of the Art
After some initial exploration, we found that the problem can be subdivided into two subproblems: how the environment can be perceived to create data, and how this data can be communicated back to the user. A short summary of existing technologies follows:
Mapping the environment
Many studies have been conducted on mapping an environment to electrical signals in the context of supporting visually impaired users. This section goes over the different technologies that these studies have used. These methods can be subdivided into two categories: technologies that scan the environment, and technologies that read previously planted information from the environment.
One way of reading an environment is to provide beacons in that environment from which an agent can obtain information. In combination with a communication technology, these beacons can be used to communicate geographical information to a user. Such a system is called a geographic information system (GIS) and can save, store, manipulate, analyze, manage and present geographic data. [1] Examples of these communication technologies are the following:
- Bluetooth can be used to communicate spatial information to devices, for example to a cell phone. [2]
- Radio-frequency identification (RFID) uses electromagnetic fields to communicate data between devices. [3]
- Global Positioning System (GPS) is a satellite-based navigation system. GPS can be used to transfer navigation data to a device; however, it is quite inaccurate. [3][4]
The other method of reading an environment is to use a technology that scans the environment by measuring certain physical quantities. Examples of these scanning technologies are the following:
- A laser telemeter is a device that uses triangulation to measure distances to obstacles.[5]
- (Stereo) cameras can be used in combination with computer vision techniques to observe the environment. [6][7][8][9][10][11][12]
- Radar and ultrasonic sensors emit waves (radio waves and high-frequency sound waves, respectively) and receive them when they reflect off objects. The time between emission and reception is used to calculate the distance between sender and object (see the sketch after this list). [13][14][15][16][17][18][19][20][4][21][22][23][24]
- Pyroelectricity is a physical property of certain materials that generate a voltage when their temperature changes; it can be used to detect people and other heat-emitting objects. [23]
- A physical robot can be used in combination with any of the above-mentioned techniques, so that the robot interacts with the environment instead of a device worn by the user. [25]
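As a hedged illustration of the ultrasonic time-of-flight principle mentioned in the list above, the sketch below converts a measured echo delay into a distance estimate. The function name and the assumed speed of sound (343 m/s, air at roughly 20 °C) are our own illustrative choices, not taken from any of the cited systems.

<syntaxhighlight lang="python">
# Minimal sketch of ultrasonic time-of-flight ranging (illustrative only).
# Assumption: speed of sound of 343 m/s (air at roughly 20 degrees Celsius).

SPEED_OF_SOUND = 343.0  # m/s


def echo_delay_to_distance(delay_s: float) -> float:
    """Convert a round-trip echo delay in seconds to a one-way distance in metres.

    The pulse travels to the obstacle and back, so the one-way distance
    is half the total distance covered by the sound wave.
    """
    return SPEED_OF_SOUND * delay_s / 2.0


# Example: a round trip of 5.8 ms corresponds to roughly 1 metre.
print(echo_delay_to_distance(0.0058))
</syntaxhighlight>

On a sensor such as the HC-SR04 referenced later on this page, the width of the echo pulse plays the role of this delay.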
Communicating to the user
Given that we are dealing with the visually impaired, we cannot convey the gathered information through a display. The most common alternatives are haptic feedback and audio cues, either spoken messages or generic tones.
Cassinelli et al. have shown that haptic feedback is an intuitive means to convey spatial information to the visually impaired [24]. Their experiments detail how untrained individuals were able to reliably dodge objects approaching from behind. This is of great use, as it shows that haptic feedback is a very good option for encoding spatial information.
Another way to encode spatial information is through audio, most commonly through an earbud worn by the user. An example of such a system was created by Farcy et al. [5]. By having different notes correspond to distance ranges, this information can be relayed clearly. Farcy et al. used a handheld device, which caused a problem: it required a lot of cognitive work to merge the audio cues with where the user was pointing the device, making the audio interface difficult to use as long as the information processing was not intuitive. In this project the aim is to have a wearable system, which could mean this problem is less significant.
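To make the idea of note-based distance encoding concrete, here is a minimal sketch that bins distances into a few ranges and assigns each range a tone. The thresholds and frequencies are arbitrary values chosen for illustration; they are not the mapping used by Farcy et al.

<syntaxhighlight lang="python">
# Illustrative mapping from distance ranges to audio tones.
# Thresholds and frequencies are arbitrary examples, not values from Farcy et al.
from typing import Optional

DISTANCE_TONES = [
    (1.0, 880.0),  # closer than 1 m       -> high, urgent tone (Hz)
    (2.5, 440.0),  # between 1 m and 2.5 m -> medium tone
    (5.0, 220.0),  # between 2.5 m and 5 m -> low tone
]


def tone_for_distance(distance_m: float) -> Optional[float]:
    """Return the tone frequency (Hz) for a measured distance, or None if far away."""
    for max_distance, frequency in DISTANCE_TONES:
        if distance_m <= max_distance:
            return frequency
    return None  # beyond 5 m: stay silent to avoid information overload


print(tone_for_distance(1.8))  # prints 440.0
</syntaxhighlight>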
Finally, regardless of how distance is encoded for the user to interpret, it is vital that the user does not experience information overload. According to Van Erp et al. [26], users are easily overwhelmed with information.
State of the Art conclusion
From our State of the Art literature study, we conclude that a wide variety of technologies have been used to develop an even wider variety of devices to aid visually impaired people. However, we noticed that relatively few papers focus on what is most important: the user. Many papers pick a technology and develop a product using that technology. This in and of itself is impressive, but too often there is little focus on what the technology can do for the user. Only afterwards is a short experiment conducted on whether the product is even remotely usable. Even worse, in most cases the final product is not tested by visually impaired users at all: it is tested with blindfolded sighted people, but there are differences that a blindfold cannot simulate. Research has shown that the brains of blind people and sighted people are physically different [27], which could lead to them responding differently to the feedback that the product provides. Not involving the user in the early stages of decision making leads to a final product that is not suited to the problem. When the problem is fully understood by involving the actual users, a product can be developed that solves it.
Approach
Following our State of the Art conclusion, our goal is to design a system to aid blind people that is tailored to the needs of this user from the ground up. That is why we aim to involve the user from the start of the project. First, we will conduct a questionnaire-based study to fully understand our user. Only after understanding the user will we start to gather requirements and make a preliminary design that fills the needs of these users. After the preliminary design is finished, building of the prototype can start. While making the design and building the prototype, it is probable that some things will not go as planned and that it will be necessary to go back a few steps in order to improve the design. When the prototype is finished, it is tweaked with several tests to perform as well as possible. We also aim to test the final prototype with visually impaired people. Finally, everything will be documented in this wiki.
Deliverables and Milestones
The main deliverable is a prototype that aids blind people in moving around areas that are unknown to them. This prototype is based on the design of last year's group [28]. From this design, a new design was made that tries to improve on the issues the previous design faced. Additionally, a wiki page will be made that provides additional information about the prototype, such as its costs and components, as well as some background on the subject. Finally, a presentation will be given on the final design and prototype.
- Presentation
  - Presentation represents all aspects of the project
- Design
  - Preliminary design
  - Final design based on the preliminary design, with possible alterations due to feedback from building the prototype
- Prototype
  - Finish building the prototype according to the final design
  - Prototype is fully debugged and all components work as intended
  - Prototype follows the requirements
    - Must haves are implemented
    - Should haves are implemented
    - Could haves are implemented
- Wiki
  - Find at least 25 relevant state-of-the-art papers
  - Wiki page is finished, containing all aspects of the project
Planning
Week | Day | Date | Activity | Content | Comments |
---|---|---|---|---|---|
Week 1 | Thursday | 07-02 | Meeting | First meeting, no content | |
Week 1 | Sunday | 10-02 | Deadline | Finding and summarizing 7 papers | |
Week 2 | Monday | 11-02 | Meeting | Creating SotA from researched papers | |
Week 2 | Tuesday | 12-02 | Deadline | Planning, users, SotA, logbook, approach, problem statement, milestones, deliverables | Edited in wiki 18 hours before next panel |
Week 2 | Thursday | 14-02 | Panel | ||
Week 2 | Sunday | 17-02 | Deadline | Prioritized and reviewed requirements document | |
Week 3 | Monday | 18-02 | Meeting | Discussing previous deadline (requirements) |
Week 3 | Thursday | 21-02 | Panel | ||
Week 3 | Sunday | 24-02 | Deadline | Preliminary design | |
Week 4 | Monday | 25-02 | Meeting | Discussing previous deadline (preliminary design) | |
Week 4 | Thursday | 28-02 | Panel | Maarten not present at panel | |
Vacation | Sunday | 10-03 | Deadline | Final design | Final design is based on preliminary design |
Week 5 | Monday | 11-03 | Meeting | Discussing previous deadline (final design) | |
Week 5 | Thursday | 14-03 | Panel | ||
Week 6 | Monday | 18-03 | Meeting | Discussing deadline progress (prototype) | |
Week 6 | Thursday | 21-03 | Panel | |
Week 6 | Sunday | 24-03 | Deadline | Prototype complete | |
Week 7 | Monday | 25-03 | Meeting | Discussing previous deadline (prototype) | |
Week 7 | Thursday | 28-03 | Panel | |
Week 7 | Sunday | 31-03 | Deadline | Conclusion, discussion, presentation | |
Week 8 | Monday | 01-04 | Meeting | Discussing what is left | |
Week 8 | Thursday | 04-04 | Final presentation |
Simulation of the sensor field of view
One of the goals of this project is to come up with a solution for the limited field of view of last year's prototype. A proposed solution is to have the sensors rotate, enlarging their field of view. Since we want to minimize the amount of rotation whilst maximizing the number of objects detected, we created a simulation to test multiple configurations.
Setup
The simulation is a top-down view of the subject and the environment in front of them. Since, from that perspective, the human body is roughly approximated by an ellipse, the sensors are mounted on an elliptical curve at the bottom of the window, facing the top of the window. The sensors are presumed to be spaced evenly across the curve. According to the spec sheet of the ultrasonic sensor used by last year's group, the field of view of each sensor is at most 15 degrees [29], so this is also the field of view per sensor in the simulation. Finally, to simulate the user moving forward, rectangles of random dimensions are initialized at random positions at the top of the screen and move towards the bottom at 5 km/h, the average walking speed of a human.
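The following is a minimal sketch of this setup, assuming a simple 2D pixel coordinate system. The window size, ellipse radii, pixels-per-metre scale and obstacle dimensions are our own illustrative assumptions, not the values of the actual simulation; only the 15 degree field of view and the 5 km/h walking speed come from the description above.

<syntaxhighlight lang="python">
# Minimal sketch of the simulation setup. Window size, ellipse radii, scale and
# obstacle sizes are illustrative assumptions; the 15 degree field of view and
# the 5 km/h walking speed come from the description above.
import math
import random

WINDOW_WIDTH, WINDOW_HEIGHT = 800, 600   # window size in pixels (assumption)
PIXELS_PER_METER = 100.0                 # scale of the top-down view (assumption)
WALKING_SPEED_MPS = 5 / 3.6              # 5 km/h, average human walking speed
SENSOR_FOV = math.radians(15)            # field of view per sensor (spec sheet)
ELLIPSE_A, ELLIPSE_B = 120.0, 60.0       # semi-axes of the body ellipse in pixels


def sensor_positions(n):
    """Spread n sensors evenly over the front half of the body ellipse.

    The ellipse is centred at the bottom of the window and the sensors all
    face towards the top of the window.
    """
    positions = []
    for i in range(n):
        t = math.pi * (i + 1) / (n + 1)  # parameter along the front half-ellipse
        x = WINDOW_WIDTH / 2 + ELLIPSE_A * math.cos(t)
        y = WINDOW_HEIGHT - ELLIPSE_B * math.sin(t)
        positions.append((x, y))
    return positions


def spawn_obstacle():
    """Create a rectangle of random dimensions at a random x at the top of the screen."""
    w, h = random.uniform(20, 120), random.uniform(20, 120)
    return {"x": random.uniform(0, WINDOW_WIDTH - w), "y": -h, "w": w, "h": h}


def step_obstacle(obstacle, dt):
    """Move an obstacle towards the bottom, simulating the user walking forward."""
    obstacle["y"] += WALKING_SPEED_MPS * PIXELS_PER_METER * dt
</syntaxhighlight>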
Variables
The variables of the simulation are:
- The number of sensors <math>n</math> in use: ranges from 1 to 10, 1 being the minimum number of sensors needed to measure anything, and 10 being the maximum number of sensors that makes sense considering we are only measuring the area in front of the user.
- The speed <math>s</math> of rotation: ranges from <math>0</math> m/s to <math>?</math> m/s.
- The amount of rotation: ranges from <math>0^\circ</math> to <math>180^\circ</math>, <math>0^\circ</math> being no motion and <math>180^\circ</math> being the maximum angle required to scan the whole area in front of the user.
- The phase difference <math>d</math> per sensor: ranges from <math>0^\circ</math> to <math>360^\circ</math>. Note: this is applied per sensor from left to right, thereby creating different scanning motions for each sensor.
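To make these variables concrete, the sketch below computes the heading of each sensor over time for one possible interpretation of the parameters: a sinusoidal back-and-forth sweep whose amplitude is the amount of rotation, with each sensor offset by the phase difference. The sweep rate is expressed in sweeps per second here, which is our own simplification of the rotation speed; none of this is the configuration that was ultimately chosen.

<syntaxhighlight lang="python">
# Illustrative sketch of how the rotation variables could drive each sensor's
# heading: a sinusoidal back-and-forth sweep, with every sensor shifted by the
# phase difference. One possible interpretation, not the final configuration.
import math


def sensor_heading(sensor_index, time_s, amplitude_deg, sweeps_per_s, phase_diff_deg):
    """Return the heading of one sensor in degrees relative to straight ahead."""
    phase = math.radians(phase_diff_deg) * sensor_index
    # Sweep between -amplitude/2 and +amplitude/2 degrees.
    return (amplitude_deg / 2) * math.sin(2 * math.pi * sweeps_per_s * time_s + phase)


# Example: 4 sensors, a 90 degree sweep at 0.5 sweeps per second,
# with a 45 degree phase difference between neighbouring sensors.
for i in range(4):
    print(i, round(sensor_heading(i, 1.0, 90, 0.5, 45), 1))
</syntaxhighlight>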
References
- ↑ Faria, J., Lopes, S., Fernandes, H., Martins, P., & Barroso, J. (2010). Electronic white cane for blind people navigation assistance. World Automation Congress (WAC), 2010, 1–7. Retrieved from https://ieeexplore.ieee.org/abstract/document/5665289/citations#citations
- ↑ Bohonos, S., Lee, A., Malik, A., Thai, C., & Manduchi, R. (2007). Universal real-time navigational assistance (URNA). In Proceedings of the 1st ACM SIGMOBILE international workshop on Systems and networking support for healthcare and assisted living environments - HealthNet '07
- ↑ 3.0 3.1 Fernandes, H., Costa, P., Filipe, V., & Hadjileontiadis, L. (2010). STEREO VISION IN BLIND NAVIGATION ASSISTANCE. 2010 World Automation Congress. Retrieved from https://ieeexplore.ieee.org/abstract/document/5665579
- ↑ 4.0 4.1 Ghate, A. A., & Chavan, V. G. (2017). SMART GLOVES FOR BLIND. IRJET, 12(04), 1025–1028. Retrieved from https://www.irjet.net/volume4-issue12
- ↑ 5.0 5.1 Farcy, R. Bellik, Y. (2002). Locomotion Assistance for the Blind. https://link.springer.com/chapter/10.1007/978-1-4471-3719-1_27
- ↑ Dunai, L., Fajarnes, G. P., Praderas, V. S., Garcia, B. D., & Lengua, I. L. (2010). Real-time assistance prototype- A new navigation aid for blind people. In IECON Proceedings (Industrial Electronics Conference) (pp. 1173–1178). IEEE. https://doi.org/10.1109/IECON.2010.5675535
- ↑ Truelliet, S., & Royer, E. (2010). OUTDOOR/INDOOR VISION-BASED LOCALIZATION FOR BLIND PEDESTRIAN NAVIGATION ASSISTANCE. International Journal of Image and Graphics, 10(04), 481–496. https://doi.org/10.1142/S0219467810003937
- ↑ L. Dunai, G. P. Fajarnes, V. S. Praderas, B. D. Garcia and I. L. Lengua, "Real-time assistance prototype — A new navigation aid for blind people," IECON 2010 - 36th Annual Conference on IEEE Industrial Electronics Society, Glendale, AZ, 2010, pp. 1173-1178. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5675535&isnumber=5674827
- ↑ Schwarze, T. Lauer, M, Schwaab, M. Romanovas, M. Böhm, S. Jürgensohn, T. (2015). A camera-based mobility aid for visually impaired people. https://link.springer.com/article/10.1007/s13218-015-0407-7
- ↑ Wang, H. Katzschmann, R. Teng, S. Araki, B. Giarré, L. Rus, D. (2017). Enabling independent navigation for visually impaired people through a wearable vision-based feedback system. https://ieeexplore.ieee.org/abstract/document/7989772
- ↑ Yi, C., Flores, R. W., Chincha, R., & Tian, Y. (2013). Finding objects for assisting blind people. Network Modeling Analysis in Health Informatics and Bioinformatics, 2(2), 71–79. https://doi.org/10.1007/s13721-013-0026-x
- ↑ Zeb, A., Ullah, S., & Rabbi, I. (2014). Indoor vision-based auditory assistance for blind people in semi controlled environments. In 2014 4th International Conference on Image Processing Theory, Tools and Applications (IPTA) (pp. 1–6). IEEE. https://doi.org/10.1109/IPTA.2014.7001996
- ↑ A wearable assistive device for the visually impaired. (n.d.). Retrieved February 11, 2019, from http://www.guidesense.com/en/
- ↑ Pereira, A., Nunes, N., Vieira, D., Costa, N., Fernandes, H. & Barroso, J. (2015). Blind Guide: An ultrasound sensor-based body area network for guiding blind people. Procedia Computer Science, 67, 403–408. https://doi.org/10.1016/j.procs.2015.09.285
- ↑ Al-Mosawi, Ali. (2012). Using ultrasonic sensor for blind and deaf persons combines voice alert and vibration properties. Research Journal of Recent Sciences. 1. https://www.researchgate.net/publication/235769070_Using_ultrasonic_sensor_for_blind_and_deaf_persons_combines_voice_alert_and_vibration_properties
- ↑ T. Ifukube, T. Sasaki and C. Peng, "A blind mobility aid modeled after echolocation of bats," in IEEE Transactions on Biomedical Engineering, vol. 38, no. 5, pp. 461-465, May 1991. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=81565&isnumber=2674
- ↑ Bousbia-Salah, M., Bettayeb, M. & Larbi, A. J Intell Robot Syst (2011) 64: 387. https://doi.org/10.1007/s10846-011-9555-7
- ↑ Bousbia-Salah M., Fezari M. (2007) A Navigation Tool for Blind People. In: Sobh T. (eds) Innovations and Advanced Techniques in Computer and Information Sciences and Engineering. Springer, Dordrecht. https://link.springer.com/chapter/10.1007%2F978-1-4020-6268-1_59
- ↑ P. Mihajlik, M. Guttermuth, K. Seres and P. Tatai, "DSP-based ultrasonic navigation aid for the blind," IMTC 2001. Proceedings of the 18th IEEE Instrumentation and Measurement Technology Conference. Rediscovering Measurement in the Age of Informatics (Cat. No.01CH 37188), Budapest, 2001, pp. 1535-1540 vol.3. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=929462&isnumber=20096
- ↑ Pereira, A., Nunes, N., Vieira, D., Costa, N., Fernandes, H. & Barroso, J. (2015). Blind Guide: An ultrasound sensor-based body area network for guiding blind people. Procedia Computer Science, 67, 403–408. https://doi.org/10.1016/j.procs.2015.09.285
- ↑ Bujacz, M., & Strumiłło, P. (2016). Sonification: Review of Auditory Display Solutions in Electronic Travel Aids for the Blind. Archives of Acoustics, 41(3), 401–414. https://doi.org/10.1515/aoa-2016-0040
- ↑ Mehta, U. Alim, M. Kumar, S. (2017). Smart path guidance mobile aid for visually disabled persons. https://www.sciencedirect.com/science/article/pii/S1877050917302089
- ↑ 23.0 23.1 Ram, S. Sharf, J. (2002). The people sensor: a mobility aid for the visually impaired. https://ieeexplore.ieee.org/abstract/document/729548
- ↑ 24.0 24.1 Cassinelli, A. Reynolds, C. Ishikawa, M. (2006). Augmenting spatial awareness with Haptic Radar. https://ieeexplore.ieee.org/abstract/document/4067727
- ↑ Lacey, G. Dawson-Howe K. (1998). The application of robotics to a mobility aid for the elderly blind. https://www.sciencedirect.com/science/article/pii/S0921889098000116
- ↑ Van Erp, J. Kroon, L. Mioch, T. Paul, K. (2017), Obstacle Detection Display for Visually Impaired: Coding of Direction, Distance, and Height on a Vibrotactile Waist Band. https://www.frontiersin.org/articles/10.3389/fict.2017.00023/full
- ↑ Park, H.-J., Lee, J. D., Kim, E. Y., Park, B., Oh, M.-K., Lee, S., & Kim, J.-J. (2009). Morphological alterations in the congenital blind based on the analysis of cortical thickness and surface area. NeuroImage, 47(1), 98–106. https://doi.org/10.1016/j.neuroimage.2009.03.076
- ↑ Boekhorst, B, te. Kruithof, E. Cloudt, Stefan. Cloudt, Eline. Kamperman, T. (2017). Robots Everywhere PRE2017 3 Groep13. http://cstwiki.wtb.tue.nl/index.php?title=PRE2017_3_Groep13
- ↑ HC-SR04 ultrasonic module specifications. Retrieved from https://benselectronics.nl/hc-sr04-ultrasonic-module/