PRE2024 3 Group18
Members
Name | Student Number | Division |
---|---|---|
Bas Gerrits | 1747371 | BEE |
Jada van der Heijden | 1756710 | BPT |
Dylan Jansen | 1485423 | BEE |
Elena Jansen | 1803654 | BID |
Sem Janssen | 1747290 | BEE |
Approach, milestones and deliverables
Week | Milestones |
---|---|
Week 1 | Project orientation, initial topic ideas, creating planning, defining deliverables, defining target user group |
Week 2 | Wiki layout, task distribution, SotA research, UX design: user research |
Week 3 | UX design: user interviews, identifying specifications (MoSCoW prioritization), Wiki: specifications and functionalities (sensors/motors used) |
Week 4 | Prototyping system overview, UX design: processing interviews, bill of materials |
Week 5 | Evaluating and refining final design, ordering needed items |
Week 6 | Prototype creation and testing, Wiki: prototype description & testing results |
Week 7 | Presentation/demo preparations, Wiki: finalize and improve |
Name | Responsibilities |
---|---|
Bas Gerrits | Sensor/Actuator research |
Jada van der Heijden | Administration, UX design, Wiki |
Dylan Jansen | SotA, Prototype |
Elena Jansen | Designing, UX research |
Sem Janssen | Potential solutions, Prototype |
Problem statement and objectives
Problem Statement
When encountering an environment as ever-changing and unpredictable as traffic, it is important for every traffic participant to have as much information about the situation available to them as possible, for safety reasons. Most roads are already covered in guiding materials: traffic lights, crosswalks and level crossings have visual, tactile and auditory signals that relay as much information to traffic users as possible. Unfortunately, some user groups are more dependent on certain types of signals than others, for example due to disabilities. Not every crossing or road has all of these sensory cues, so it is important to find a solution for the user groups that struggle with this lack of information and therefore feel less safe in traffic. Specifically, we are looking at visually impaired people, and at creating a system that will aid them in crossing roads even when external sensory cues are lacking. The specific scenario used for this project is a 50 km/h road with bike lanes on both sides, without accessibility features, traffic lights or a zebra crossing. This scenario was recreated in Photoshop in the image on the right.
Main objectives
- The design aids the user in crossing roads regardless of external sensory cues present, thus giving more independence to the user.
- The design has an audible testing phase and then gives intuitive haptic feedback for crossing roads.
- The design must have a reliable detection system.
- The design does not restrict the user: it should not limit what they wear or which activities they participate in, and it should not draw unnecessary extra attention to the user.
An extended list of all requirements can be found in the MoSCoW Requirements section.
State of the Art Literature Research
Existing visually-impaired aiding materials
Today, many aids for visually impaired people already exist, and some of them can also help with crossing the road. The most common aids when crossing are audible traffic signals and tactile pavement. Audible traffic signals provide tones when it is safe to cross the road. Tactile pavement consists of patterns on the sidewalk that alert visually impaired people to hazards or important locations such as crosswalks. These aids are already widely applied, but come with the drawback that they are only available at dedicated crosswalks. This means visually impaired people might still not be able to cross at locations they would like to, which does not benefit their independence.
Another option is smartphone apps. There are currently two different types of apps that visually impaired people can use. The first type uses a video call to connect visually impaired people to someone who can guide them through the use of the camera; examples are Be My Eyes and Aira. The second type uses AI to describe scenes using the phone's camera, for example Seeing AI by Microsoft. The reliability of this kind of app is of course a major question; however, research found that Aira greatly improved quality of life in severely visually impaired individuals[1]. Other apps have also been developed, but they are often not widely tested or used, and some require a monthly subscription of around 20 euros, which is quite costly.
There have also been attempts to make guidance robots. These robots autonomously guide the user, avoid obstacles, stay on a safe path, and help the user reach their destination. Glidance is one of these robots, currently in the testing stage. It promises obstacle avoidance, the ability to detect doors and stairs, and a voice that describes the scene. In its demonstration it also shows the ability to navigate to and across crosswalks: it navigates to a nearby crosswalk, slows down and comes to a standstill before the pavement ends, and keeps the traffic in mind. It also gives the user subtle tactile feedback to communicate certain events. These robots could in some ways take over the tasks of guide dogs. There are also projects that shape these robots like dogs, even though this might make the implementation harder than it needs to be; the reason for the dog shape seems to be to make the robot feel more like a companion.
There are also some wearable/accessory options for blind people. Examples are the OrCam MyEye, a device attachable to glasses that helps visually impaired users by reading text aloud, recognizing faces, and identifying objects in real time; the eSight Glasses, electronic glasses that enhance vision for people with legal blindness by providing a real-time video feed displayed on special lenses; and the Sunu Band (no longer available), a wristband equipped with sonar technology that provides haptic feedback when obstacles are near, helping users detect objects and navigate around them. While these devices can all technically assist in crossing the road, none of them are specifically designed for that purpose. The OrCam MyEye might help identify oncoming traffic but cannot necessarily judge its speed. The eSight Glasses are unfortunately not applicable to all types of blindness. And the Sunu Band would most likely not react fast enough to fast-driving cars. Lastly, there are smart canes with features like haptic feedback or GPS that can help guide users to the safest crossing points.
Babbage (https://babbage.com/) had a product called the N-Vibe, consisting of two vibrating bracelets that give feedback on the surroundings. The product page appears to have been removed from their site (https://babbage.com/?s=n+vibe). The product was a GPS guide that helps blind people get from point A to point B in a less invasive way, with feedback through vibration. Feedback through vibration has been tested in multiple scenarios and tends to work well, which also makes it a good idea for our project.
Babbage is a company with many solutions for visually impaired people, with products slightly similar to our goal, but nothing quite like it yet.
https://www.electronicsforu.com/news/whats-new/wearable-device-for-the-visually-impaired-powered-by-ai (Elena will add this)
User Experience Design Research: USE Research
For this project, we will employ a process similar to UX design:
- We will contact the stakeholders or target group, in this case visually impaired people, to understand what they need and what our design could do for them
- Based on their insights and user research literature, we will further specify our requirements list from the USE side
- Combined with the requirements formed via the SotA research, a finished list will emerge with everything needed to start the design process
- From there we build, prototype and iterate until the needs are met
Below are the USE related steps to this process.
Target Group
Primary Users:
- People with affected vision who would have sizeable trouble navigating traffic independently, or who want more independence in unknown areas: ranging from heavily visually impaired to fully blind (<10% vision)
Our user group only focuses on those who are able to independently take part in traffic in at least some familiar situations.
Secondary Users:
- Road users: any moving being or thing on the road will be in contact with the system.
- Fellow pedestrians: the system must consider other people when moving. This is a separate user category, as the system may have to interact with these users in a different way than, for example, oncoming traffic.
Users
General information
As not everyone is equally educated on visual impairments and what different percentages mean for someone's sight, some general information is included here.
Interviewed users have said that a percentage alone is not always enough to illustrate the level of blindness, so we combine it with the kind of blindness in our survey and interviews.
For example, Peter from Videlio has 0.5% vision with tunnel vision, and George (70 years old) is completely blind.
- "Total blindness: This is when a person cannot see anything, including light.
- Low vision: Low vision describes visual impairments that healthcare professionals cannot treat using conventional methods, such as glasses, medication, or surgery.
- Legal blindness: “Legal blindness” is a term the United States government uses to determine who is eligible for certain types of aid. To qualify, a person must have 20/200 vision or less in their better-seeing eye, even with the best correction.
- Visual impairment: Visual impairment is a general term that describes people with any vision loss that interferes with daily activities, such as reading and watching TV."
Source citation: Nichols, H. (2023, April 24). Types of blindness and their causes. Medical News Today. https://www.medicalnewstoday.com/articles/types-of-blindness#age-related
In the Netherlands we use different terms and fractions. These are used in answers to our interview questions:
- Mild ("milde"): visual acuity worse than 6/12 up to 6/18 (50%-33%)
- Moderate ("matig"): visual acuity worse than 6/18 up to 6/60 (33%-10%)
- Severe ("ernstig"): visual acuity worse than 6/60 up to 3/60 (10%-5%)
- Blindness ("blindheid"): visual acuity worse than 3/60 (<5%)
6/60 means that what a person with normal sight can see at 60 meters only becomes sharp at 6 meters. At 3/60 this is 3 meters.
The Dutch version of legal blindness: "Maatschappelijk blind (socially blind): your visual acuity is between 2 and 5%. You still see light and the outlines of people and objects, but your visual impairment has a large impact on your life. Sometimes someone does see sharply, but has a (severe) limitation of the visual field, for example tunnel vision. If your visual field is less than 10 degrees, this is also called maatschappelijk blind." (translated from Dutch)
Source citations: Lentiamo.nl. (2023, August 2). Wat betekent het om wettelijk blind te zijn? https://www.lentiamo.nl/blog/wettelijke-blindheid.html?srsltid=AfmBOorvRSTnULBQsYzCX0BvJ5k9JQnrlvt65dlUgwTOjzMnPugfEybp ; Oogfonds. (2022, September 9). Blindheid. https://oogfonds.nl/visuele-beperkingen/blindheid/
With 50%-10% vision, people can usually take part in traffic by themselves without issues and without tools; sometimes high-contrast glasses or other magnification tools are used. From about 10% onwards, many people use a cane to detect obstacles. This is why we focus on people with less than 10% vision, or rapidly declining vision, for our research and as our target group.
At 2% vision people can still count fingers from 1 m distance. However, this can differ with different kinds of blindness: tunnel vision, central vision loss (the center of vision is blurry), peripheral vision loss, blurry vision, fluctuating vision (vision changes due to blood sugar, etc.), hemianopia (loss of vision in half of the visual field in one or both eyes), and light perception blindness (being able to see shapes and a little color).
What Do They Require?
The main takeaway here is that visually impaired people are still very capable people, and have developed different strategies for how to go about everyday life.
"I have driven a car, a jetski, I have gone running, I have climbed a mountain. Barriers don’t exist we put them infront of us." (Azeem Amir, 2018)
‘Not having 20/20 vision is not an inability or disability its just not having the level of vision the world deems acceptable. Im not disabled because im blind but because the sighted world has decided so.’[2]. This was said by Azeem Amir in a ted talk about how being blind would never stop him. In the interviews and research it came forward again and again that being blind does not mean you can not be an active participant in traffic. While there are people that lost a lot of independence due to being blind there is a big group that goes outside daily, takes the bus, walks in the city, crosses the street. There are however still situations where it does not feel safe to cross a busy street in which case users have reported having to walk around it, even for kilometers, or asking for help from a stranger. This can be annoying and frustrating. In a Belgian interview with blind children a boy said the worst thing about being blind was not being able to cycle and walk everywhere by himself, interviewing users also reflected this[3]. So our number one objective with this project is to give back that independence.
A requirement of the product is that it does not draw too much attention to the user. While some users have no problem showing the world they are legally blind and need help, others prefer not to have a device announce it over and over.
Something that came forward in the interviews is that Google Maps and other auditory feedback software lack a repeat button. Traffic can be loud, and when the audible cue is missed it is hard to figure out where to go or what to do. So audio is only helpful if it can be repeated or is not the sole accessible option.
Blind people do not like to be grouped with people with other disabilities, let alone with people who are mentally challenged. They might not have the same vision as us, but that is the only difference: there is nothing wrong with their minds, and they are not helpless children. Blind people might not look aware of their surroundings, but they often are. This has to be kept in mind when designing. Blind people can handle complicated solutions and can understand a design process, and thus why a product might be expensive. Accessibility solutions can often be simple, but we do not need to oversimplify them for the users to understand.
A lot of the interviewed people were fairly old. This makes sense, as our main contact, Peter, who used his network, is not young himself, and because the chance of declining vision increases with age. Due to their lack of vision, many of them have learned to use technology and more complicated devices than the average elderly person, and they may also be more open to trying new things. This does not apply to everyone, so we still have to be careful to keep the UX design intuitive, but less so than with other elderly users.
Society
How is the user group situated in society? How would society benefit from our design?
People who are visually impaired are treated somewhat differently in society. In general, there are plenty of guidelines and forms of help accessible for this group, considering that globally at least 2.2 billion people have a near or distance vision impairment (https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment). In the Netherlands there are about 200,000 people with 10%-0% vision ('ernstige slechtziendheid' and 'blindheid'). It is important to remember that many legally blind people can still see something. Peter, whom we interviewed for this project and who is a member of innovation Space on campus, has 0.5% sight with tunnel vision, meaning he can only see something in the very center of his visual field. Sam, from a Willem Wever interview, has 3% vision with tunnel vision and can be seen in the interview wearing special glasses through which he can still watch TV and read[4]. Society often forgets this and simply assumes blind people see nothing at all. This is true for George, a 70-year-old man we interviewed, but not for all target users.
The most popular aids for visually challenged people are currently the white cane and the guide dog, which blind people use to navigate their way through traffic. In the Netherlands it is written in the law that cars have to stop when the cane is raised. In reality this does not always happen; bikes in particular do not stop, as was said in multiple interviews. Many people do have the decency to stop, but it is not a foolproof method, so blind people remain dependent on their environment. Consistency, markings, auditory feedback and physical buttons are there to make independence more accessible. These features are more prominent in big cities, which is why our scenario takes place in a town.
Many interviewed people mentioned apps on their phones, projects from other students, reports of products in other countries, devices with monthly fees, etc. There are a lot of options for improvement being designed or researched, but none are being brought to the larger market.
As society stands right now, the aids that exist do help visually impaired people in their day-to-day life, but do not give them the amount of security and confidence needed to feel more independent and on more equal footing with people without impaired vision. In order to close this gap, it is key to create a design that focuses on giving these people a bigger sense of independence and self-reliance.
Enterprise
What businesses and companies are connected to this issue?
Peter from Videlio says the community does not get much funding or attention in new innovations, which is why he has taken it into his own hands to create a light-up cane for use at night, which until now did not even exist for Dutch users. There is a lot of research at universities and in school projects, because it is an 'interesting user group' with a lot of potential for new innovations, but these do not result in actual new products on the market. People with less than 10% vision make up such a small part of society that there is simply not enough demand for new products, which is a shame.
User Experience Design Research: Stakeholder Analysis
To come in closer contact with our target groups, we reached out to multiple associations via e-mail. The following questions were asked (in Dutch), as a way to gather preliminary information:
Following these questions, we got answers from multiple associations that were able to give us additional information.
From this, we were able to set up interviews via Peter Meuleman, researcher and founder of the Videlio Foundation. This foundation focuses on enhancing daily life for the visually impaired through personal guidance, influencing governmental policies and stimulating innovation in the field of assistive tools. Peter himself is very involved in the research and technological aspects of these tools, and was willing to be interviewed by us for this project. He also provided us with interviewees via his foundation and his personal circle.
These interviews were semi-structured, with a structured version being sent out as an announcement on https://www.kimbols.be/ (in Dutch) via Microsoft Forms. The announcement, the Form link and the questions can be found in the Appendix.
Interview results
With these interviews, we were aiming to find answers to the following research questions:
- How do visually impaired people experience crossing the road?
- What wishes do visually impaired people have when it comes to new and pre-existing aids?
- How can we improve the sense of security and independence for visually impaired people when it comes to crossing the road?
From the interviews, a (deductive) thematic analysis was done. As we went into these interviews with a predefined goal and some pre-existing notions of what the participants might answer, the approach is deductive.
A total of 16 people were interviewed, of which 6 via call (semi-structured) and 10 via Microsoft Forms (structured). The average age of respondents was around 53, with ages ranging from 31 to 69. A large number of participants reported being fully blind, with some stating to have legal blindness or low vision.
Thematic Analysis
Theme | Codes |
---|---|
Experience of traffic | Little attention to pedestrians; abundance of non-marked obstacles; very dependent on situation |
Experience of crossing the road | Unclarity about crossing location; lack of attention from other traffic users; lack or abundance of auditory information |
Road crossing strategies | Patience for safety; focus on listening to traffic; being visible to other traffic users; confidence; cyclists |
Aid system needs | Not being able to pick up everything; aid should not be handheld; preference for tactile information |
Experience of traffic
Overall, participants gave varying answers about their general experience with traffic. Almost all users use a white cane, some with an additional guide dog. Most if not all participants expressed that they are able to move around freely and with general ease and comfort, but that this depends on the situation at hand. Most participants said their experience with traffic worsens in higher-stake situations with busy roads, mostly because other traffic users are 'being impatient and racing on'. This point came back multiple times, with people expressing that at peak hours they lose track of the situation more easily.
Additionally, participants mentioned that they can find it hard to navigate new situations, especially those where things are not marked properly. One participant mentioned cases of bike paths directly behind a road with a crosswalk, where the crosswalk does not extend across the bike path. She mentioned that it is hard to anticipate situations like this, where there are unexpected obstacles. Things such as curbs or tactile guiding lines being blocked off by, for example, café tables or billboards were also mentioned as hazards.
Experience of crossing the road
The sound guidance at traffic lights and traffic control systems (rateltikker) came up multiple times. Participants mentioned that these really help, but are not available everywhere, are sometimes turned off, or the sound is too low. Since the participants rely heavily on their hearing, the lack of auditory signals can make it hard for them to cross; things such as cyclists and electric vehicles are especially hard to hear.
Participants also mentioned it can sometimes be hard to find the spot to cross the road. One participant mentioned that the sound of the traffic can be a hint to a traffic light being present ("If the traffic is all going and then suddenly stops, I know there is a traffic light nearby"), but if there are no guidelines (no rateltikker or tactile guiding lines), she would not know where to cross the road.
Road crossing strategies
A general theme that comes up is safety first. Many participants described an approach of patience and clarity when it comes to crossing the road. Firstly, if the road was too busy (a high-speed road, two-way road or at peak hour), many participants would choose to avoid the road or find the safest spot to cross. If there is no other option, some would also ask cyclists to aid in crossing the road. Secondly, all participants described that they would carefully listen to traffic, and try to cross when it is least busy. Many mentioned they would do this when they were absolutely sure nothing was coming (no sound from traffic, even then still waiting a bit). Then, in order to actually cross, the participants that owned a white cane would then lift their cane or push it towards the road, so that other users can see that they want to cross. Participants clearly stated visibility as an important thing, but mentioned even then some traffic users would not stop.
Two participants mentioned that in order to show you want to cross, you should just move confidently and without hesitation, since otherwise traffic will just keep moving. Other participants seem to share this sentiment of traffic being unpredictable, but seem to prefer waiting until they are certain it is safe: when there is hesitation, they would not cross.
An additional issue brought up was cyclists (or electric vehicles): these were hard to hear, and would often refuse to stop. This issue of traffic users being impatient was mentioned before in experience of traffic, and seems to reoccur here and restrict the participants' crossing strategies.
Two-way roads are a point of interest: some participants mentioned these types of roads as hard to cross. Most participants prefer to cross the road in one go, and for two-way roads it is hard to anticipate whether a car is coming from the other side, and whether cars are willing to stop and wait. Many mentioned using parallel traffic to identify when to cross: if the traffic parallel to them is moving, they know not to cross, and vice versa.
Aid system needs
In general, one of the main issues brought up, which is mentioned before in Road crossing strategies, is that there are some traffic users that the participants are unable to hear. Their aids are not able to detect these sorts of things, and circumstances such as loud wind or traffic can make it hard for the participants themselves to pick these things up.
Almost all users expressed a preference for an aid system with tactile information, especially when considering the overload of auditory information already present in traffic (cars, beeps, wind etc.). One participant mentioned that a combination of the two could be used as a way of learning how the aid works (this participant also travels with other visually impaired friends, and mentioned the auditory component could aid their friends in crossing the road together).
Many users expressed concern when it comes to control: most prefer to have absolute control over the aid system, with many expressing concern and distrust ("A sensor is not a solution" and "I'd rather not use this"). Some prefer minimal control, mostly due to their energy going towards crossing the road already. One participant mentioned as an example that they would like to be able to turn their system on and off whenever it suits the situation.
Participants also said they would prefer to not have to hold the system in their hand, as most already have their hands full with their other aids.
Conclusion: main takeaways from the interviews
Key insights about the user group and their wants and needs:
- When describing someone's vision, do not use only a percentage.
- As many people in our user group are elderly and not everyone likes technology and gadgets, our product is only for those willing to try new things with tech.
- Not all legally blind people are capable of taking part in traffic by themselves. Our product is only for those who are able to go places without assistance.
- 'I would love to have exactly what you are describing; the world feels like there is a spotlight on only where I am standing, I have no eyes around that.'
- When the product is not properly explained, users can mistake it for something that will tell them about their surroundings, not just roads.
- A common struggle is finding the actual spot to stand in order to safely cross the road: a spot where the user is visible to others, a crossing, a green light; it is not always easy to find.
- Bicycles on the road are as much of a struggle as cars are to the user.
- Users often do not know if there is a bike lane and bicycles do not stop.
Design insights/ feedback:
- Audible feedback needs to be able to be repeated for when the cue is missed.
- There has to be an option without audio as many blind people also have declining hearing.
- The design has to include both auditory and haptic feedback as traffic is loud.
- A common obstacle is a road divided in the middle by parking spaces, bushes, tiles, etc. Users request that the device lets them know whether or not they can cross the road in one go.
- The battery needs to have a long life, or the device has to be able to go on stand-by. It cannot be fully off, because the user cannot see whether there even is a road, so the device should let them know.
- The sensor can not be too heavy.
- A bracelet/watch is a very good placement for the feedback sensor.
- For many users it does not matter at all if the sensor is visible.
- Ideally the device would recognise a street that has to be crossed. In a familiar situation (which we can focus on for the duration of this short project) the roads are known, but to increase independence further, the device could let the user know when a crossing is coming up that the user is unaware of.
- The sensor could be a cord around the neck or a clippable sensor for a jacket/blouse.
MoSCoW Requirements
From the user interviews we can infer some design requirements, which will be organised using the MoSCoW method. First off, the product will have to give feedback to the user. It was mentioned in the interviews that audio feedback might not always be heard, whereas vibrational feedback can be felt and is likely more reliable. Therefore vibrational feedback will be the feedback method of choice, and audio is seen as an optional feature. Many visually impaired people already use tools to help them, so our solution should not impact the use of those tools, or could even integrate with them. Interviewees also mentioned that they would like to be more independent; this is therefore also a priority for the solution. Ideally it would create more opportunities for visually impaired people to cross the road where and when they want. Some interviewees mentioned they do not like drawing much attention to their blindness, therefore the implementation should be rather discreet. This could be difficult, however, as quite a few devices are needed, which might be difficult to hide completely. A very important aspect of designing a road crossing aid is of course safety, because mistakes could injure or even cost someone their life. Thus, safety should be a high priority when making the product. It is also clear from the interviews that users do not need the solution to be simplified for them and are capable of handling more complicated devices, so the solution is allowed to be more advanced.
Because the group of visually impaired users is quite diverse, a decision must be made about who to design for specifically. As people with minor visual impairments do not require much assistance in crossing the road, focusing the design on people with moderate to severe blindness will provide more usefulness. Furthermore, the solution will not replace current assistance tools for the blind, but will rather act as additional help. While AI detection of cars, bicycles, or other traffic could provide a good solution for evaluating traffic situations, this is outside of the scope of the project and will therefore not be considered.
The table below summarizes the design requirements.
Must | Should | Could | Won't |
---|---|---|---|
Give vibration feedback | Not draw too much attention | Give audio feedback | Be designed for people with minor visual impairments |
Ensure safety | Allow new road crossing opportunities | Integrate with existing aids | Implement AI detection |
Not get in the way | Not obstruct use of canes/dogs | Be complex/advanced | Replace existing aids |
Provide extra independence | | | |
Design ideas
To safely cross a road with a speed limit of 50 km/h using distance detection technology, we need to calculate the minimum distance the sensor must measure.
Assuming a standard two-lane road width of 7.9 meters to be on the safe side, the middle of the second (far) lane lies at 5.8 meters. This gives us a distance h of 5.8 meters from the user to the middle of the far lane.
The average walking speed is 4.8 km/h, which converts to 1.33 m/s. Therefore, crossing the road takes approximately 5.925 seconds.
The distance d can now be calculated. A car travelling at 50 km/h moves at 13.88 m/s, so:
[math]\displaystyle{ d=13.88 * 5.925 = 82.24 }[/math] meters
Applying the Pythagorean theorem, the radius r is:
[math]\displaystyle{ r=\sqrt{d^2 + h^2}=\sqrt{82.24^2 + 5.8^2}\approx 82.44 }[/math] meters
This means the sensor needs to measure a minimum distance of approximately 82.44 meters to ensure safe crossing.
Sensors capable of this cost more than 2000 euros, which is outside the budget for the course, so for the prototype a small-scale version is used to prove the concept. The new (scaled) speed depends on the distance the prototype sensor can measure, following this ratio:
[math]\displaystyle{ \frac{speed}{distance} = \frac{50}{82.44} }[/math]
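As a quick sanity check of this scaling, the short program below recomputes the required sensor range and the scaled test speed. It is only a sketch: the 10 m prototype range is an assumption rather than the range of any specific sensor, and small rounding differences from the hand calculation above are expected.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Values from the derivation above
    const double roadWidth = 7.9;        // m, assumed two-lane road width
    const double h         = 5.8;        // m, distance to the middle of the far lane
    const double walkSpeed = 4.8 / 3.6;  // m/s (4.8 km/h)
    const double carSpeed  = 50.0 / 3.6; // m/s (50 km/h)

    const double crossTime = roadWidth / walkSpeed;    // ~5.93 s to cross the road
    const double d         = carSpeed * crossTime;     // ~82 m travelled by the car
    const double r         = std::sqrt(d * d + h * h); // required sensor range

    // Scaled-down prototype: keep the ratio speed / distance = 50 / r
    const double prototypeRange = 10.0;                      // m, example range (assumption)
    const double scaledSpeed    = 50.0 * prototypeRange / r; // km/h for the scaled test

    std::printf("crossing time: %.2f s\n", crossTime);
    std::printf("required sensor range: %.2f m\n", r);
    std::printf("scaled test speed for a %.0f m sensor: %.2f km/h\n",
                prototypeRange, scaledSpeed);
    return 0;
}
```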
Sensors
Radar/Lidar sensor
There are three main ways to measure distance: ultrasonic sensors, lidar and radar. Most ultrasonic sensors can measure up to 10 meters. Since this range is too short for the application, the ultrasonic sensor is not an option for the end product. Lidar and radar sensors are both able to measure distances of at least 100 meters, and both are widely used, for example in autonomous vehicles, robotics, traffic control and surveillance systems. To determine the best sensor for this application, their strengths and weaknesses are evaluated.
The benefits of the lidar sensor are as follows:
- High precision and resolution: lidar uses laser beams, which offer high accuracy and resolution. It can be used for 3D mapping of environments and object detection, and the high resolution allows for precise measurements.
- 3D mapping: the lidar sensor sends light beams in all directions, creating a 3D image of the surroundings. This can be useful when combining lidar with AI to help the person navigate their surroundings; it can tell the difference between trucks, cars, bicycles and other pedestrians.
The drawbacks of the lidar sensor are as follows:
- Weather conditions: the range decreases significantly when conditions are not ideal. Lidar struggles with rain, fog, snow, dust and smoke, as the light beam scatters and accuracy drops. Bright sunlight can also add noise to the data.
- Expensive: lidar sensors are expensive, which would make the product costly and less appealing to the user.
- Surface: the surface matters when measuring with light; lidar sensors struggle with reflective and transparent surfaces, resulting in inaccurate data.
- Power consumption: lidar systems consume more power than radar sensors, which is not ideal when using a battery to power the device.
The benefits of the radar sensor are as follows:
- Longer range: radar sensors are able to measure over much larger ranges.
- Robustness: radar is less affected by rain, fog or snow, making it more reliable in different weather conditions. It is also not affected by sunlight, because it uses radio waves.
- Lower cost: most radar sensors are less expensive than lidar sensors, especially when comparing sensors with the same measuring range, making the product less expensive for the user.
- Smaller: the radar sensor is significantly smaller in size than the lidar sensor.
The drawbacks of the radar sensor are as follows:
- Lower resolution: the radar sensor provides less data than the lidar sensor. Radar sensors have a hard time differentiating between similar objects and detecting small objects, so the measured situation is less detailed than with lidar.
- Busy environments: when there are multiple reflecting surfaces, many echoes arise, causing the radar sensor to pick up more objects than are actually present.
- Angles: increasing the angle over which the radar sensor measures decreases its range. Because of this, the product needs two radar sensors measuring in two different directions to check whether it is clear to cross the road.
- Frequency dependence: the resolution of a radar sensor depends on its frequency; increasing the frequency increases the resolution, but also the price.
In conclusion, lidar provides better resolution, but for this application such high resolution is not necessary to detect road users. The sensor needs to accurately measure the position and speed of road users; if the sensor picks up multiple road users at the same place because of echoes, this does not matter, because filters can solve that problem. Robustness is important for reliable operation in all weather conditions, and cost is also an important factor: while the radar solution requires two units to function, it is still cheaper than a single lidar sensor. Additionally, power consumption matters for keeping the device compact and light, avoiding the need for a large battery. That is why the radar sensor is the most suitable for this application.
Lidar: https://ouster.com/products/software/gemini
Radar: https://www.youtube.com/watch?v=5SJbFr4cpfA
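To make the sensing requirement concrete, the sketch below turns a single radar measurement of a road user's distance and approach speed into a Go/No-Go decision by comparing the time until the vehicle reaches the crossing with the time the user needs to cross. This is only an illustration of the principle, not the final implementation: the 2 s safety margin and the function name are our own assumptions, and the margin makes the check slightly stricter than the bare 82.44 m figure above.

```cpp
#include <cstdio>

// Crossing parameters from the design calculation above
constexpr double kRoadWidth    = 7.9;        // m
constexpr double kWalkSpeed    = 4.8 / 3.6;  // m/s
constexpr double kSafetyMargin = 2.0;        // s, extra buffer (assumption)

// One radar detection: range to the road user and its speed towards the crossing.
// A speed of 0 or less means the road user is not approaching.
bool safeToCross(double distanceM, double approachSpeedMps) {
    const double crossingTime = kRoadWidth / kWalkSpeed;   // ~5.9 s
    if (approachSpeedMps <= 0.0) {
        return true;                                       // not approaching
    }
    const double timeToArrival = distanceM / approachSpeedMps;
    return timeToArrival > crossingTime + kSafetyMargin;
}

int main() {
    // Example: a car at 50 km/h (13.9 m/s) detected at 82 m is a borderline case.
    std::printf("82 m, 13.9 m/s  -> %s\n", safeToCross(82.0, 13.9) ? "Go" : "No-Go");
    std::printf("120 m, 13.9 m/s -> %s\n", safeToCross(120.0, 13.9) ? "Go" : "No-Go");
    return 0;
}
```

In the real device this check would have to run for every object reported by both radar sensors, and the wristband would only receive a Go signal if all of them pass.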
Ultrasound using the Doppler effect (Sem)
(Maybe reference the other group here?)
Wearable AI camera
This wearable device has a camera built into it in order to view traffic scenes that visually impaired people might want to cross but lack the vision to do so safely. The camera sends its video to some kind of small computer running an AI image-processing model. This model would then extract parameters that are important for safely crossing the road, such as: whether there is oncoming traffic; other pedestrians; how long crossing would take; or whether there are traffic islands. This information can then be communicated back to the user according to their preferences: audio (like a voice or certain tones) or haptic (through vibration patterns). The device could be implemented in various wearable forms, for example a camera attached to glasses, something worn around the chest or waist, a hat, or on the shoulder. The main concern is that the user feels comfortable with the solution: it should be easy to use, not be in the way, and be hidden if visibility makes them self-conscious.
This idea requires a quite sophisticated AI model. The computer must be able to quickly process cars and relay the information back to the user, or the window to safely cross the road might already have passed. It also needs to be reliable: should it misidentify a car or misjudge its speed, it could put the user's life at risk. Running such an AI model could also be a challenge on a small computer. The computer has to be attached to the user in some way and can be quite obtrusive if it is too large. Users might also feel uncomfortable if it is very noticeable that they are wearing an aid, and a large computer could make it difficult to keep the device discreet. On the other hand, an advanced model could probably not run locally on a very small machine, especially considering the fast reaction times required.
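The sketch below only illustrates the intended data flow of this idea: a camera frame goes into an analysis step, a small set of crossing-relevant parameters comes out, and a message goes to the user. All type and function names are hypothetical placeholders of our own; no real camera or AI library is implied.

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct Frame {};                       // one camera image (placeholder)

struct SceneInfo {                     // parameters the model would extract
    bool   oncomingTraffic  = false;   // is a vehicle approaching?
    bool   pedestriansAhead = false;   // other pedestrians in the way?
    bool   trafficIsland    = false;   // can the crossing be split in two?
    double crossingTimeSec  = 0.0;     // estimated time needed to cross
};

// Placeholder for the AI image-processing model on the small computer.
SceneInfo analyseFrame(const Frame&) {
    return SceneInfo{};                // a real model would fill these fields
}

// Placeholder for the audio/haptic output on the wearable.
void notifyUser(const std::string& message) {
    std::printf("%s\n", message.c_str());
}

int main() {
    std::vector<Frame> cameraFrames(3); // stand-in for a live camera stream

    for (const Frame& frame : cameraFrames) {
        SceneInfo info = analyseFrame(frame);
        if (!info.oncomingTraffic && !info.pedestriansAhead) {
            notifyUser("You can cross the road now");
        } else {
            notifyUser("Please wait to cross");
        }
    }
    return 0;
}
```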
Actuators
When it comes to the actuating part of the design, it is important that the device adheres to the following:
- The signal must be noticeable, over all traffic noises and stimuli surrounding the user
- The signal must clearly indicate whether it is time to walk (Go condition) or to keep still (No-Go condition), and these conditions must be distinguishable
- The signal must not be confusable with any other surrounding stimuli
These rules must be adhered to in order to properly carry over the information from the device to the user, and for the user to use said information to then make their decisions.
Vibration
Sound cues
When it comes to sound cues, there are multiple aspects to consider. Most crossings with traffic lights already have an auditory system set up (rateltikker) in order to signal to the pedestrians whether they can cross or not. To mimic this type of auditory pedestrian signal, the same type of considerations should be used for our design. From the book Accessible Pedestrian Signals by Billie L. Bentzen, some guidelines and additional info had been laid out that can be considered when looking at what kinds of sounds, what volume levels and what frequencies are best to use.
The sound of the aiding device must be audible over the sounds of traffic. The details of noise levels around intersections are out of scope for this specific research, as many factors determine the noise level (such as wind conditions, temperature, height of the receiver and so on). Therefore, we keep this straightforward and use information from research with conditions similar to our key scenario, a 50 km/h road with no traffic lights.
In general, a signal that is 5 dB above the surrounding sound at the receiver is loud enough for the pedestrian to hear and process.[5] As the sound in our scenario comes from a device relatively close to the user, we do not need to be concerned with it being audible from across the street. From research and simulations on traffic noise at intersections, the general noise levels around intersections lie around 65-70 dB.[6] For a signal to be heard, a sound of 70-75 dB therefore seems loud enough.
As for frequency, research shows that high-frequency sounds are easier to distinguish and detect among traffic sounds than low- or medium-frequency sounds. For example, the results of a 2021 study by Hsieh M.C. et al., in which the auditory detectability of electric vehicle warning sounds was tested at different volumes, distances and environmental noise levels, show an overall preference for the high-frequency warning sound.[7] This sound was not affected by environmental noise. Of note, however, is that the detection rate (whether participants could detect the sound) at the higher sound pressure level (51 dBA) was significantly higher than at the lower sound pressure level (46 dBA) for the high-frequency sound. This means that, when using high-frequency sound, it is imperative to use a higher sound pressure level so that users actually detect the sound. Additionally, this research was done on electric vehicle warning sounds, which are by nature harder-to-detect sounds; the study nevertheless shows that even under these lower sound level conditions, high-frequency sounds are preferred.
Research by Szeto, A. Y. et al. also notes this use of high-frequency sounds: low-frequency sounds require longer durations and higher intensity to be heard, and low-frequency masking of the auditory signals can occur in traffic, since most of the sound energy in traffic noise occurs at frequencies below 2000 Hz.[8] From their testing with different types of sounds (cuckoo vs. chirp, which we will get into later), an audible signal needs a good balance between the potential low-frequency masking by traffic noise (<1000 Hz) and the high-frequency hearing loss (>2500 Hz) most common in elderly adults. Their advice is to use signals with frequencies between 2000 and 3000 Hz, with harmonics up to 7000 Hz. It is best to use an array of frequencies that is continuous and has a duration of 500 ms. Varying the frequency around 2500 Hz, with a minimum of 2000 and a maximum of 3000 Hz, is recommended by Szeto, A. Y. et al. The book Accessible Pedestrian Signals by Billie L. Bentzen supports this idea: high harmonics with continuously varying frequencies.[5]
As for type of sound, there are multiple options as well:
- Voice messages: messages that communicate in speech are optimal for alerting pedestrians and their surroundings that the pedestrian wants to cross, as this signal is understandable and meaningful to everyone (e.g. 'Please wait to cross').[5]
- Tones: chirp and cuckoo sounds. In America, the high-frequency, short-duration 'chirp' and the low-frequency, long-duration 'cuckoo' sounds are used to indicate east/west and north/south crosswalks respectively. Because the chirp sounds are of higher frequency than the cuckoo sounds, they are easier to detect; in turn, this higher-frequency sound is harder to detect for elderly people, as mentioned before.[8] Another point of consideration is that pedestrians have mentioned that mockingbirds or other surrounding animals may imitate these sounds, making them harder to distinguish from the surroundings.[5]
- Tones: major and minor scales. Major scales in music are often associated with positive emotions, while minor scales are associated with negative ones. Additionally, minor scales exhibit more uncertainty and variability, which can evoke feelings of instability. Compared to the major scale, a minor scale contains pitches that are lower than expected, paralleling characteristics of sad speech, which often features lower pitch and slower tempo.[9]
Considering this research, it is thus important to think about the trade-off between how meaningful the auditory message is in the context of its surroundings, in order to make it distinguishable while still relaying whether crossing is safe or not.
Creating the solution
Keeping the design requirements in mind, we can start thinking about the prototype. First of all, there will have to be some feedback to the user when it is safe to cross the road. There was a clear preference for vibrational feedback, so a vibration motor will be used for this purpose. The next step is how to attach this to the user so they can feel it. There are several options: attaching some kind of belt to the waist, making a vest, or using a wristband. In the interviews, some people asked for discreet options and others preferred familiar placements. As such, the wristband seems like the best choice, and it is what the prototype will use. That being said, the other solutions are not without merit: their larger size might allow embedding multiple vibration motors, allowing for more detailed or directional feedback.
Next is the detection of traffic. Of the design options, the long-range radar sensor seems the most reliable, and as safety (and thus reliability) is very important, this is the detection method of choice. The chosen sensor is...
Physical product design
Design wearable sensor:
Function:
Analyse the traffic situation and give a signal to the bracelet of whether to cross or not.
Components:
- Radar
- Speaker
- Bluetooth
Requirements:
- Can not be heavy
- The sensor needs to maintain the right angle
- Has to be moveable as not the same clothes/hats etc are worn through every season
- Has to view the road the user is trying to cross on both sides
Inspiration/reasoning:
Two radars are needed, because otherwise a single radar would have to be turned manually from left to right. Users reported not wanting to hold the sensor in their hands, for fear of dropping it or having it stolen, or because they already have their hands full. Attaching it to the front of the user is equally not an option: users told us it is dangerous for a blind person to look from side to side, as this signals to cars and other traffic that they can see, which means traffic will not accommodate them.
Placement of the sensor was decided in the final interview, in which our preliminary design was discussed. The team was deciding between attaching it to the cane or having the user wear it on the shoulders. It was explained that a cane is constantly swayed from side to side and gets stuck behind surfaces, tiles, etc., so it receives a lot of impact and movement, which would make the sensor unstable. There is also the legal protocol in the Netherlands of holding the cane vertically and then pointing it forward when crossing roads, which would prevent the sensor from being angled in the right direction before crossing. The shoulders were said to be a good solution.
There was doubt about the aesthetics of having sensors on the shoulders, as this could look silly or robotic, but a user assured us it was a good idea: "Out of experience I will tell you that blind people can be willing to sacrifice some looks for independence, functionality and comfort." (translation of Peter, 2025). Once the functionality works, the aesthetics can be improved.
For the clip-on attachment, the mount has to be sturdy so that the angle of the radar stays in place, yet adjustable so the user can wear it through the seasons.
A way to do this is adding multiple attachment options: a clip works on a t-shirt but not on a sweater, and pins work on a sweater but not on a jacket. By adding clips, pins and a strap, the user can adjust it to their needs.
Sketch:
Sketch of a possible design for the wearable radar. The top of the wearable is not reflective of what the actual product would look like, because each radar has a different area that must not be covered; for our radar this is the white area shown in the image to the right of the sketch. The sketch does illustrate the several ways of keeping the radar on the shoulders: a combination of pins, a band and a clip makes sure it works for all sizes and clothes.
Further improvements:
- No false negatives and false positives. For extra safety, there should also be a vibration signal for when the user cannot cross; otherwise, as one user pointed out, you do not know whether the device is not working or whether you simply cannot cross.
Design feedback wristband:
Function: Letting the user know when it is safe to cross the road and when it is not.
Components:
- Vibration plate
- Charging port
- Mini speaker (optional but ideal)
- Some sort of Bluetooth/Wi-Fi connection
Requirements:
- Has to be chargeable
- Can not easily fall off, has to be easy to put on
- Can not be bulky
- Discreet and fashionable
Inspiration:
- For the design, a bracelet that is as flat as possible is ideal. Users have to wear it through all seasons, and bulky watches do not fit under all winter sweaters and jackets. A bulky watch can also be uncomfortable.
Because of the flat design there needs to be a flat charging option. Looking at affordability, ease of use, compactness and efficiency, a magnetic pogo-pin charger is the best option. In the image on the right it can be seen on a Xiaomi smart band, an affordable smartwatch. This charger is magnetic and thus easy for a blind person to use.
Product sketch: Using the previously talked about requirements this sketch was made.
The bracelet is worn using a strong magnetic strap; this way it is easy to put on, with no clasps needed. The charger pins are also magnetic, so the charger pops right on.
The purpose of the mini speaker is to notify the user when the band is charging, and to allow a signal to be sent from a phone so the user can locate it.
The vibration plate in the sketch is a flat rectangle, but in our prototype it will be a slightly thicker circle, as we are working within a budget and with short-term availability.
Using a slightly wider band ensures the sensor stays flat against the skin and the band does not move around too much. It is also more comfortable in daily use.
The band has a very unisex feel and can be used by young and old, men and women. Multiple colors could be sold.
Improvements:
With the speaker and a Bluetooth module installed, the band also has potential for further programming or for being connected to an app.
The vibration motor could double as a navigation tool, or a sonar sensor could be added to notify the user of their surroundings.
Implementation
The next step in the solution is to implement the actuating and sensing aspects to the prototype design. The sensing aspects of the design were tested as proof of concept via a simulation, while the actuators were tested with user feedback.
The prototype made can be seen in the picture next to the materials table. This prototype uses the following materials:
Materials | What for? | Representation in actual design: |
---|---|---|
Arduino board | ||
Buttons (3x) | Turning on the system (for the user), switching between conditions. | The final design would have one button on the wristwatch, so that the user can turn the sensing device on or off whenever they want. |
Vibrating Modules (2x) | Causes the main vibration function of the device, with a No-Go and a Go condition accompanied by differing modes of vibration. | The final design would have these vibration modules inside the wristband as described in the product sketches. In the prototype they are laid bare, so that during testing it is easier to check whether they work and to change the wiring. |
Testing the modes of vibration
The physical prototype was made (as can be seen in picture x), with two vibrating modules inside the wristband. To test the mode of vibration, 4 different modes were tested (a code sketch of Mode 1 follows the list below). The reasoning behind each of these modes is based on the literature research in x.
- Mode 1: Varying waiting time. The No-Go condition uses long waiting times between pulses; the Go condition uses rapid, closely spaced pulses. The length of the pulses is equal in both conditions.
- Details (actual numbers here)
- This mode is similar to how a rateltikker works, but in vibration format instead of auditory.
- Mode 2: Varying pulse time. The No-Go condition uses long pulses; the Go condition uses short pulses. The waiting time between pulses is equal in both conditions.
- numbers here
- The idea of this mode is that it mimics the pulses used in Morse code to differentiate between conditions.
- Mode 3: Varying pulse intensity. The No-Go condition uses low-intensity pulses; the Go condition uses high-intensity pulses. The waiting time between pulses and the pulse length are equal in both conditions.
- numbers here
- The idea of this mode was that a higher-intensity pulse is easier to notice when the environment is busy, giving a sense of urgency via the device.
- Mode 4: Combining Mode 2 & 3.
- numbers here
- This was done to see if combining different modes of information would increase the clarity of the information relayed to the user.
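A minimal Arduino-style sketch of Mode 1 is given below. Since the tested timing values are not listed above, the pin number and timings in this sketch are placeholder assumptions that only illustrate the pattern, not the final parameters.

```cpp
// Mode 1: vary the waiting time between pulses.
// Go    -> short gaps between pulses (rapid ticking)
// No-Go -> long gaps between pulses (slow ticking)
// Pin number and timings below are placeholder assumptions.

const int MOTOR_PIN = 9;                   // vibration module output pin

const unsigned long PULSE_MS    = 200;     // pulse length, equal in both conditions
const unsigned long GAP_GO_MS   = 200;     // short gap  -> Go
const unsigned long GAP_NOGO_MS = 1500;    // long gap   -> No-Go

bool safeToCross = false;                  // in the prototype this is set by the sensing side

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void pulseOnce(unsigned long gapMs) {
  digitalWrite(MOTOR_PIN, HIGH);           // motor on
  delay(PULSE_MS);
  digitalWrite(MOTOR_PIN, LOW);            // motor off
  delay(gapMs);
}

void loop() {
  if (safeToCross) {
    pulseOnce(GAP_GO_MS);                  // Go: close, rapid pulses
  } else {
    pulseOnce(GAP_NOGO_MS);                // No-Go: long waits between pulses
  }
}
```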
We tested each mode once without giving any instruction. Then each mode was played again, and the tester was asked for their opinion, with a main focus on whether the No-Go and Go conditions were 1. differentiable and 2. clear and readable.
From this testing we concluded the following:
- Modes 1 & 2 are the most clear and differentiable to the testers. There was a preference for Mode 1, as some testers mentioned that the prolonged vibrating of the No-Go condition in Mode 2 could end up being 'annoying' to the user.
- Varying the intensity of the pulse on our prototype was limited. The intensity of the pulses is controlled via the voltage sent to the vibrating modules, and due to the maximum voltage the system could handle, the high-intensity vibration could only go so far. Testers said the low-intensity vibration was too weak: it was still noticeable, but in the context of a busy crossing situation it might be easy to get used to or to miss the stimulus. They also mentioned the high-intensity vibration was not strong enough. Similarly, they mentioned users might get used to the continuous stimulus, making it seem less intense than it is and causing the difference between the two conditions to become less noticeable. This is known as habituation[10] in the field of psychology: a form of learning where the response to a stimulus decreases when exposure to that stimulus is prolonged.
- From Mode 4 it was clear that the intensity did add some meaning for some testers, but others found it confusing, as it added another mode of information to an already differing set of conditions.
Considering the preferences mentioned in testing, the problem of habituation and the familiarity of the stimulus, we opted to use Mode 1 for the prototype.
Testing the sensing
To test the sensing, a simulation was made in MATLAB to mimic how the radar sensor would function in our actual product.
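As a rough stand-in for that simulation (sketched here in C++ rather than MATLAB), the loop below moves a single vehicle towards the crossing at the scaled speed from the design calculation and prints when the Go/No-Go state would switch. The 10 m sensor range and the vehicle's starting position are assumptions, not measured values.

```cpp
#include <cstdio>

int main() {
    // Scaled prototype values; the sensor range and start position are assumptions,
    // the speed follows the ratio speed/distance = 50/82.44 from the design calculation.
    const double sensorRange  = 10.0;                               // m
    const double vehicleSpeed = (50.0 * sensorRange / 82.44) / 3.6; // ~1.68 m/s
    const double crossingTime = 5.925;                              // s, from the calculation
    const double dt           = 0.1;                                // s, simulation time step

    double position = 15.0;   // m, vehicle starts outside the sensor range
    bool lastSafe   = true;

    for (double t = 0.0; position > -5.0; t += dt) {
        position -= vehicleSpeed * dt;            // vehicle approaches, then passes

        const bool detected = position > 0.0 && position <= sensorRange;
        const bool safe     = !detected || (position / vehicleSpeed) > crossingTime;

        if (safe != lastSafe) {                   // report Go/No-Go transitions
            std::printf("t=%4.1f s, vehicle at %6.2f m: %s\n",
                        t, position, safe ? "Go" : "No-Go");
            lastSafe = safe;
        }
    }
    return 0;
}
```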
Choosing mode of sound
From the research done under design ideas, and the possibilities of the prototype, we decided on the following:
- Considering our hardware options, tones we can use are between B6/C7 - F#7 (between 2000 - 3000 Hz).[11]
- We can unfortunately not change the volume of the speaker, but ideally the volume would be around 70 dB.
From this, we have two options (a hedged tone sketch of the jingle option follows the list):
- Imitating the regular sound made by audio guides at traffic lights. For the No-Go condition, this would be alternating notes of x and x, with a duration of 500 ms and a wait time of ? ms.
- Using scales to indicate Go or No-Go: a major scale for Go and a minor scale for No-Go, based on the positive and negative connotations of these scales. A minor scale could evoke feelings of instability and uncertainty, making it clearer to the user that it is not yet safe to cross. Using C major (C, D, E, F, G, A and B) and D minor (D, E, F, G, A, B♭ and C) in the 7th octave, the Go jingle would play C7 up to F7 in succession (low to high, major), and the No-Go jingle would play E7 down to B♭6 (high to low, minor).
- Additionally, for a more constant sound, C7, E7 and G7 (a C major chord) could be alternated in sequence for the Go condition, while B♭6, D♭7 and F7 (a B♭ minor chord) could be alternated for the No-Go condition.
- Another simplified version is alternating G7 with A7 for the Go condition, and B♭6 with B6 for the No-Go condition.
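As a rough illustration of the jingle option, the minimal Arduino-style sketch below plays the Go and No-Go jingles. The note frequencies are the standard equal-temperament values (rounded); the speaker pin and note durations are assumed example values, not measured from our prototype.

```cpp
// Hedged sketch (Arduino-style C++) of the Go/No-Go jingles.
// Pin and durations are assumptions; frequencies are standard note values.
#include <Arduino.h>

const int SPEAKER_PIN = 8;    // assumed pin with a passive buzzer/speaker
const int NOTE_MS     = 180;  // assumed duration per note

// Go jingle: C7 -> D7 -> E7 -> F7, low to high (C major)
const int GO_JINGLE[]   = {2093, 2349, 2637, 2794};
// No-Go jingle: E7 -> D7 -> C7 -> Bb6, high to low (D minor)
const int NOGO_JINGLE[] = {2637, 2349, 2093, 1865};

void playJingle(const int *notes, int count) {
  for (int i = 0; i < count; i++) {
    tone(SPEAKER_PIN, notes[i], NOTE_MS);  // play one note
    delay(NOTE_MS + 30);                   // short gap so the notes stay distinct
  }
  noTone(SPEAKER_PIN);
}

void setup() {
  playJingle(GO_JINGLE, 4);  // e.g. on entering the Go condition
}

void loop() {
  // In the full system, jingles would be triggered on condition changes.
}
```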
If this prototype were to be developed into a viable product, we would opt for single-use audio messages, such as 'Please wait to cross' and 'You can cross the road now!', instead of x. Audio messages would help the user learn the vibration modes, and would also alert people nearby that the user is actively attempting to cross. This also fits our stakeholder research: such messages relay to fellow pedestrians when it is safe to cross, and could by extension aid other visually impaired pedestrians besides the user (for example, if the user goes out with friends who are also visually impaired, this auditory messaging would allow those friends to benefit from the product as well). 'Single-use' means the sound is only played at the moment a condition is activated, ensuring that the user does not experience sensory overload from adding audio guidance on top of the vibration modes. We would consider repeating the audio message connected to the No-Go condition once every few seconds, but we believe the vibration modes already get the idea across. A small sketch of this single-use triggering follows below.
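A minimal sketch of the single-use behaviour, under our own assumptions: the message functions are hypothetical placeholders for whatever audio-playback module a real product would use, and the point is only that a message is triggered on a condition change rather than played continuously.

```cpp
// Hedged sketch (Arduino-style C++): play a spoken message only on the
// transition into a condition. playGoMessage()/playNoGoMessage() are
// hypothetical stand-ins for a real audio-playback module.
#include <Arduino.h>

enum CrossingState { NO_GO, GO };

CrossingState previousState = NO_GO;

void playGoMessage()   { /* e.g. "You can cross the road now!" */ }
void playNoGoMessage() { /* e.g. "Please wait to cross." */ }

// Call this every loop iteration with the latest sensing decision.
void updateAudio(bool safeToCross) {
  CrossingState currentState = safeToCross ? GO : NO_GO;
  if (currentState != previousState) {  // play only on a state change
    if (currentState == GO) playGoMessage();
    else                    playNoGoMessage();
    previousState = currentState;
  }
}

void setup() {}

void loop() {
  // updateAudio(...) would be fed by the detection system; the vibration
  // feedback keeps running continuously regardless of the audio.
}
```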
Conclusion
Discussion
Potential ideas, improvements, or future recommendations
Vest or belt for vibration feedback
Navigation function with vibration feedback
Sonar sensors for environment awareness
Affordability
For any other project groups reading this:
We encourage others to continue this project and attempt to take it a step closer to being on the market. Many users reported lacking such a product, or one with a navigation function that replaces their eyes; feel free to continue our project and add this function.
This user group is asked to give interviews for school projects all the time, but none of these projects actually become real products, and they are left with the same lack of good apps and devices. There is not much money in this industry and a relatively small number of users in the Netherlands.
Continue our research and encourage the next quartile to do the same; together we could make the change!
Appendix
Timesheets
Week | Names | Breakdown | Total hrs |
---|---|---|---|
Week 1 | Bas Gerrits | Meeting & brainstorming (4h), SotA research (4h), working out concepts (1h), sub-problem solutions (2h) | 10 |
Jada van der Heijden | Meeting & brainstorming (4h), planning creation (2h), wiki cleanup (2h), SotA research (3h) | 10 | |
Dylan Jansen | Meeting & brainstorming (4h), SotA research (3h), working out concepts (2h), looking for reference material (2h) | 10 | |
Elena Jansen | Meeting & brainstorming (4h), self balancing subject research (4h), reading previous wikis (30m), self balancing document (30min), research new topic (1h) | 10 | |
Sem Janssen | Meeting & brainstorming (4h), self balancing application research (4h), reading wikis (1h), working out ideas (1h) | 10 | |
Week 2 | Bas Gerrits | Meeting (2h), User research (2h), MoSCoW (2h), design (2h), design problem solving (4h) | 12 |
Jada van der Heijden | Meeting (2h), Editing wiki to reflect subject change (2h), Contacting institutions for user research (1h), Setting up user research (interview questions) (1h), User research (2h) | 7 | |
Dylan Jansen | Meeting (2h), Contacting robotics lab (1h), SotA research for new subject (4h), Looking for reference material (2h), Updating Wiki (1h), Prototype research (2h) | 12 | |
Elena Jansen | Meeting (2h), user research + make document (7h), call with Peter about interviews (1h), creating text for Peters network (30min), interview questions (1h) | ||
Sem Janssen | Meeting (2h), | ||
Week 3 | Bas Gerrits | Meeting (3h), Looking at sensors to see what is possible (making a start on the program) (3h), Theoretical situation conditions (3h) |
Jada van der Heijden | Meeting (3h), Editing User Research wiki (4h), Refining interview questions (2h) | 9 | |
Dylan Jansen | Meeting (3h), Updating Wiki (1h), uploading and refining SotA (4h), Ideas for prototype (2h), Bill of materials for prototype (2h) | 12 | |
Elena Jansen | Meeting (3h), Making interview questions for online (2h), updating user research (2h), emailing users (1h) | ||
Sem Janssen | Meeting (3h), Looking at different options for relaying feedback (design) | ||
Week 4 | Bas Gerrits | Meeting (3h), online update meeting (1h), | |
Jada van der Heijden | Meeting (3h), Editing User Research wiki (4h), Refining interview questions (2h), Interviewing (1h), online update meeting (1h), | 10 | |
Dylan Jansen | Meeting (3h), Working out a potential design (3h), Adding design description to wiki (2h), Discussion on best solution (2h) ,online update meeting (1h), | 10 | |
Elena Jansen | Fewer hours due to being away (will be made up for): online update meeting (1h), emailing/texting users (30min), reading + summarizing paper (3h) | |
Sem Janssen | Meeting (3h), online update meeting (1h), | ||
Week 5 | Bas Gerrits | tutor meeting + after meeting (2h), online meeting (1h), weekend meeting (2h), | |
Jada van der Heijden | tutor meeting + after meeting (2h), online meeting (1h), gathering interview results (2h), interview thematic analysis (5h), weekend meeting (2h), | 12 | |
Dylan Jansen | tutor meeting + after meeting (2h), online meeting (1h), Robotics lab visit (3h), MoSCoW requirements (2h), Simulation (3h), weekend meeting (2h), | ||
Elena Jansen | tutor meeting + after meeting (2h), online meeting (1h), interview 1 (1h), interview 2 (2h), real life visit for interviews + preparation (6h) , analysing forms interviews (1h), updating wiki (2h), emailing with participants (30m), designing bracelet + radar wearable (+sketching) (4h), weekend meeting (2h), wiki update (30m) | 23 | |
Sem Janssen | online meeting (1h), weekend meeting (2h), | ||
Week 6 | Bas Gerrits | morning meeting (1h), coach + after meeting (2h) | |
Jada van der Heijden | morning meeting (1h), coach + after meeting (2h), adding to the wiki (6h) | 10 | |
Dylan Jansen | morning meeting (1h), coach + after meeting (2h) | ||
Elena Jansen | morning meeting (1h), coach + after meeting (2h), interview Peter (1h), Design sensor + sketch (2h), User research wiki finish (?), update interview section (?), Interviews in Appendix (?), update state of the art (1h), | ||
Sem Janssen | morning meeting (1h), coach + after meeting (2h) | ||
Week 7 | Bas Gerrits | meeting (3h), testing vibration modules (1h), working on presentation, practice presentation, update wiki, add hours to timesheet | |
Jada van der Heijden | meeting (3h), testing vibration modules (1h), working on presentation (2h), practice presentation (1h), update wiki (Implementation (2h), Auditory research (5h)) | ||
Dylan Jansen | meeting (3h), testing vibration modules (1h), working on presentation, practice presentation, update wiki, add hours to timesheet | ||
Elena Jansen | meeting tutors (30m) + after (2.5h), testing vibration modules (1h), presentation layout and slides (2h), practice presentation at home (1h), meeting wednesday (2h), update wiki, add hours to timesheet, road scenario sketch (20min) | ||
Sem Janssen | meeting (3h), testing vibration modules (1h), working on presentation, practice presentation, update wiki, add hours to timesheet |
≈140 hours per person, ≈700 hours in total, ~20 hours per person per week
Interviews
Link to the forms:
https://forms.office.com/e/s9FKAxLv7R
Questions for both the forms and interviews done via the phone:
Summaries of the phone and in-person interviews:
George: a 70-year-old man with full blindness. George has tested many of the products currently on the market, as well as school projects similar to ours; his experience gave us a lot of insight.
Peter: Peter is a system architect and advocate for the visually impaired. He is blind himself (0.5% vision, central vision only). He agreed to do interviews with us and even put out a message to his network asking others to help with our project; through this we got our phone interviews. We called him twice, once before we had our product and once when our plan was nearly finished. This was especially useful for validating decisions and resolving final dilemmas about the design and its functions.
Wilma: (in person) This interview led us to define a more precise user group. Wilma has Usher syndrome and is blind with decreasing hearing. She is not an active, independent participant in traffic and is not interested in tools and gadgets. So while she fit our initial user group, she is not interested in our product, and most of our interview questions did not apply to her.
Sara: Sara is an artist and has no vision left. She loves to go on walks in her neighbourhood and has a set route of 6 km, but on less familiar roads she would be very interested in our product. She gave valuable insights on battery life, on wanting audio to learn the system but haptic feedback while using it, on the situations in which she would use it, and more.
Summaries of the form responses:
References
- ↑ Park, K., Kim, Y., Nguyen, B., Chen, A., & Chao, D. (2020). Quality of Life Assessment of Severely Visually Impaired Individuals Using Aira Assistive Technology System. Translational Vision Science & Technology, 9(4), 21. https://doi.org/10.1167/tvst.9.4.21. https://www.researchgate.net/publication/340067268_Quality_of_Life_Assessment_of_Severely_Visually_Impaired_Individuals_Using_Aira_Assistive_Technology_System
- ↑ TEDx Talks. (2018, November 8). Blind people do not need to see | Santiago Velasquez | TEDxQUT [Video]. YouTube. https://www.youtube.com/watch?v=LNryuVpF1Pw
- ↑ Karrewiet van Ketnet. (2020, October 15). Hoe is het om blind of slechtziend te zijn? [What is it like to be blind or visually impaired?] [Video]. YouTube. https://www.youtube.com/watch?v=c17Gm7xfpu8
- ↑ Kim Bols. (2015, February 18). Willem Wever - Hoe is het om slechtziend te zijn? [What is it like to be visually impaired?] [Video]. YouTube. https://www.youtube.com/watch?v=kg4_8dECL4A
- ↑ Bentzen, B. L., Tabor, L. S., & U.S. Architectural and Transportation Barriers Compliance Board. (1998). Accessible Pedestrian Signals. Retrieved from https://books.google.nl/books?id=oMFIAQAAMAAJ
- ↑ Quartieri, J., Mastorakis, N. E., Guarnaccia, C., Troisi, A., D’Ambrosio, S., & Iannone, G. (2010). Traffic noise impact in road intersections. International Journal of Energy and Environment, 1(4), 1-8.
- ↑ Hsieh, M. C., Chen, H. J., Tong, M. L., & Yan, C. W. (2021). Effect of Environmental Noise, Distance and Warning Sound on Pedestrians' Auditory Detectability of Electric Vehicles. International journal of environmental research and public health, 18(17), 9290. https://doi.org/10.3390/ijerph18179290
- ↑ Szeto, A. Y., Valerio, N. C., & Novak, R. E. (1991). Audible pedestrian traffic signals: Part 3. Detectability. Journal of Rehabilitation Research and Development, 28(2), 71-78.
- ↑ Parncutt, R. (2014). The emotional connotations of major versus minor tonality: One or more origins? Musicae Scientiae, 18(3), 324-353. https://doi.org/10.1177/1029864914542842
- ↑ Thompson, R. F. (2009). Habituation: A history. Neurobiology of Learning and Memory, 92(2), 127-134. https://doi.org/10.1016/j.nlm.2008.07.011
- ↑ Mottola, R. M. (2021). Building the Steel String Acoustic Guitar: Table of Musical Notes and Their Frequencies and Wavelengths. Retrieved from https://www.liutaiomottola.com/formulae/freqtab.htm