0LAUK0 PRE2016 3 Groep10 Project progress

From Control Systems Technology Group
= Week 1 =


=== General tasks ===
*A design team of six students has to be formed.
*Brainstorm session on subjects for the project.
*The first presentation, mainly focussed on the chosen subject, has to be prepared.

=== The design team ===
The design team consists of six students from different departments of the Eindhoven University of Technology:
*Bram Grooten (Applied Mathematics)
*Ken Hommen (Industrial Engineering)
*Lennard Buijs (Mechanical Engineering)
*Man-Hing Wong (Electrical Engineering)
*Pieter van Loon (Software Science)
*Steef Reijntjes (Electrical Engineering)

=== Initial subject ===
After a short brainstorm session during the first introduction lecture, the design team came up with the idea to design a smart, autonomous beer bottle sorting machine that can be used as an innovative extension of the control systems that are currently in use in supermarkets. Nowadays, employees are still needed to sort the different beer bottles and put them in the correct crate. With current technology, this task can easily be done by a control system. The design team especially wants to research how artificial intelligence could be used to improve current control system technology. As an example of a control system, the team mainly wants to target a beer bottle sorting machine for supermarkets, which can be demonstrated as a model. The system should be able to, using sensors and an artificially intelligent vision system, take the empty beer bottles from a conveyor and put them in the right crate. Is artificial intelligence able to optimize and improve this process, for example by recognizing the labels on the bottles? Can the information obtained from the external environment be used in current or future technologies?


=== The first presentation ===
The first presentation is going to be held by '''Ken''' and '''Steef'''.


= Week 2 =


=== General tasks ===
*The second presentation, mainly focussed on the defined planning, has to be prepared.
*Brainstorm session on new subjects for the project.
*Evaluation of the first presentation.
*Define a final set of requirements, preferences and constraints.
*Define a set of needs and demands of the user, society and enterprise.
*Choose and state a specialization.
*Define the project planning.
*Create subgroups and divide tasks among the subgroups.


=== Evaluation: First presentation ===
After the presentation, it could be concluded that the subject was not sufficient, since there was no clear problem that had to be solved. Therefore, the team had to think of a new subject or find a problem involving the old subject.


=== Brainstorm session ===
During this brainstorm session, each member had to come up with an idea that could be used as the subject. This subject should contain a clear problem statement that can be resolved with robotics or artificial intelligence, including USE aspects that can be targeted. The feasibility of each idea is also discussed.


==== Ken ====
'''Advanced Elderly Emergency System (A.E.E.S.)''': A device, to be worn mostly by the elderly, that detects when the wearer has fallen. It can automatically send a warning to ‘ICE’ contacts or even call 112, automatically sending its location along with it. A microphone and camera can be used in this case to assess the situation even faster. By connecting the device to the internet, all of this can be made possible even faster. The device can also ‘ask’ the owner questions in case of an emergency, which can be answered with simple answers (yes/no).


==== Bram ====
'''HIV Testing Robot''': This robot can help people in certain regions in Africa who do not know that they have HIV (or another STD). The robot drives from village to village to perform tests on people. They need to let the robot take some blood or urine, so the robot can run the tests. The robot carries a curtain (or foldable box) for privacy while the person is urinating. The test lab is built inside the robot, which runs the test and shows a clear result to the person (for example: a happy or sad smiley) via an integrated screen.


The robot might need to educate people first on what STDs are, how they spread from person to person, and why it is bad to have them. It might also deliver condoms. This education can be difficult, because there might not be many people speaking English, so the robot needs to learn the languages of the different regions as well. It could also use many pictures to try to explain things simply.


If a person has HIV or another disease, the robot remembers the location of the person and sends it to the nearest doctor, asking whether the doctor can come over to the person's house. It might take a picture of the person, or save the person's fingerprint, and send it to the doctor as well, so the doctor knows who this patient is. It could also show the patient the route to the nearest doctor and explain why he/she needs to go there.


Based on this idea, several things could be made, such as the explanation video in English or the design of the robot. While designing the robot, several important aspects should be kept in mind, since the robot needs to comfort people and not scare them off. Taking blood can be scary for some patients, so details such as the amount of blood that should be taken have to be considered.


==== Pieter ====
'''Self-adjusting monitor/chair (including energizer)''': It is really hard to have a correct seating position at your desk, and it is also really hard to set up the monitor height and/or tilt correctly. A computer chair or monitor that automatically adjusts the height of the monitor or the seat settings can be used to resolve this. It measures from your seating position whether you are sitting correctly and adjusts accordingly. The monitor uses a camera to see how your head is angled and adjusts the height and/or tilt to the most ideal position. <ref name="SittingPostures">Supporting Implicit Human-to-Vehicle Interaction: Driver Identification from Sitting Postures https://pdfs.semanticscholar.org/a6ba/1de940b50603b3440365bcda8c82939f74f7.pdf</ref>


==== Steef ====
'''Container opener''': Some people are unable to open a container because they have a disability or simply do not have enough strength. A container opener that adapts its shape to different types of containers can open these containers for people who are unable to open them effectively.


==== Man-Hing Wong ====
'''Adaptive cruise control (ACC)''': A system that can be implemented in today's vehicles as an addition to the current cruise control system. The system keeps track of its environment in such a way that it allows a vehicle to adapt its velocity to the vehicle driving in front of it, especially to prevent traffic congestion and traffic accidents.


==== Lennard ====
'''Mosquito catcher''': Mosquitos are among the most annoying insects to mankind. Certain species are even stated to be among the most dangerous animals on the planet, since they can carry diseases (e.g. malaria). A robotic version of a bug-eating plant can resolve this problem. This robot lures a mosquito with a lamp or a scent. Using a motion sensor, a heat sensor and a microphone, it will 'bite' when the mosquito gets close enough.


=== New subject ===
To determine the new subject for the project, the design team made a points matrix to determine which subject suits the team best according to individual opinions and the relevant course requirements. All robot technologies that were considered are discussed and clearly explained during the brainstorm session.


{| class="wikitable" style="width: 650px"
|-
| '''Robot technology'''
| '''Points for problem statement'''
| '''Points for inclusion of robotics or artificial intelligence'''
| '''Points for feasibility'''
| '''Total points'''
|-
|}
Based on this points matrix, it has been decided that the elderly rescue bracelet, which is able to detect when someone is falling and to help afterwards, will be the new subject for the project. This system asks questions to the person who just fell, and correspondingly determines whether emergency actions should be taken. A model of this technology will be designed by the group to demonstrate its abilities and functions, keeping USE and technical aspects in mind.


=== Specialization ===
To choose and state a specialization for the design, the current state of the art of the wearable system has been tracked. A specialization is needed to specify the design and to set it apart from existing designs. All existing designs of the wearable system consist of a manual button that has to be pressed after falling. Of course, this is a safe and recommended solution to protect the elderly from worse, but is it actually that practical? What if the elderly person fell on the ground in such a way that he or she is no longer capable of moving his or her arms due to broken bones?
 
Besides these models with a manual button only, there are also European projects on such systems that have an integrated fall detection component, so it is no longer needed to manually press a button. The A.E.E.S. has the manual press button as well as the fall detection component, but to specialize the design, artificial intelligence will be implemented. Such a system does not exist yet, and during this project it will become clear whether artificial intelligence can indeed bring advantages to the emergency principle.
 
=== The second presentation ===
The second presentation is held by '''Pieter''' and '''Man-Hing'''.
 
 
= Week 3 =
 
=== General tasks ===
*Evaluation of the second presentation.
*Discuss design options.
*Preliminary model.
*State a concrete description of the final design.
*Document all relevant ethical aspects.
 
=== Individual tasks ===
Each week, each member will individually get a task to accomplish before the start of the next meeting. The tasks have been specified according to the evaluation and the group and tutor meetings.
 
* '''Ken''': Writing and detailing the state-of-the-art section.
 
* '''Bram''': Doing research on the implementation of the artificial intelligence.
 
* '''Lennard''': Explaining how to build a fall detecting system according to a scientific paper.
 
* ''' Man-Hing''': Work out list of needs and demands of the users, society and enterprise and find out in which cases a certain person should be held responsible.
 
* '''Steef''': Find information of all components that should be gathered, including all costs and availability.
 
* '''Pieter''': Doing research on the speech recognition.
 
=== Evaluation: Second presentation ===
The presentation was quite clear. Most questions from the audience could be answered properly. Nonetheless, there were some minor points for the group to think about in order to make the final design, tasks and planning more concrete. Specific questions such as 'How can you keep the amount of false positives as low as possible?' and 'What is the artificial intelligence in the system?' have been focused on afterwards, and have all been answered on the Wiki-page. Considering the planning of the design project, it all looks decent, with clear milestones and deliverables. Unfortunately, the planning, milestones and deliverables had to be changed during the project, since the plans for the final design changed.
 
 
= Week 4 =
 
=== General tasks ===
* Evaluation of first tutor meeting.
* Contact instances for interviews.
* Prepare relevant interview questions for elderly.
* Interview the elderly.
* Start on application development.
* Elaborate on the subject on the Wiki-page.
 
=== Individual tasks ===
 
* '''Ken''': Working on expanding the state-of-the-art and updating the project progress (log) and the planning.
 
* '''Bram''': iOS speech recognition API: https://developer.apple.com/videos/play/wwdc2016/509/.
 
* '''Lennard''': Describe the fall detecting system of Falin Wu et al.
 
* ''' Man-Hing''': Start writing the final design section and define a design description of how the prototype will function and what it will look like.
 
* '''Steef''': Look at the possibilities for communication between the separate components.
 
* '''Pieter''': Find Google Cloud speech recognition API and do research on feasibility of using a smartwatch.
**'''Pieter's subtasks:'''
*** Research text to speech API's
*** Starting app
*** Basic speech recognition interface
*** Simulation buttons
*** Fall detected
*** Cancel simulation
*** Enter text to simulate voice
 
=== Changes on deliverables and final design ===
During the first tutor meeting, the tutors suggested that the group should consider changing the final design as well as the deliverables: not because it was incorrect, but because it was too time-consuming and not focused enough on 'something new or innovative'.
 
Initially, the final design would be a prototype consisting of two components: a fall detection component and a speech recognition component, integrated into one wearable emergency system for the elderly.
The fall detection component will be attached to the belt, since this location on the human body is the most balanced while moving and performing human maneuvers.
The speech recognition component is a wrist band, so talking can be done easily. Artificial intelligence should be taken into consideration when designing this part. The wrist band should ask questions to the person who just fell. Artificial intelligence can be used to have a conversation with the person based on the situation, and subsequently to determine the state of the current situation. Based on this state, it can act fully autonomously by notifying health services when needed. This can speed up the process of helping the wounded elderly, reduce the amount of emergency cases where 112 has to be involved, or even save lives.
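The question-and-answer flow described above can be sketched as a small decision tree. This is only an illustrative sketch: the questions, node names and actions below are invented for the example and are not part of the actual design.

```python
# Each node asks a question; a yes/no answer selects the next node
# or a final action (e.g. alerting an ICE contact or calling 112).
# All questions, node names and actions here are hypothetical.
DIALOG = {
    "start":  ("Are you hurt?",               "injury",  "cancel"),
    "injury": ("Can you get up by yourself?", "monitor", "alert"),
}

ACTIONS = {
    "cancel":  "no action, false alarm",
    "monitor": "keep monitoring, notify ICE contact",
    "alert":   "call 112 and send location",
}

def run_dialog(answers):
    """Walk the tree using a list of boolean answers; return the action."""
    node = "start"
    for ans in answers:
        question, if_yes, if_no = DIALOG[node]
        node = if_yes if ans else if_no
        if node in ACTIONS:
            return ACTIONS[node]
    return ACTIONS.get(node, "undecided")
```

For example, a "no" to the first question cancels the alarm, while "yes, hurt" followed by "no, cannot get up" escalates straight to an emergency call.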
 
The plans for the final design had to be changed this week. Instead of making the whole system, the design team is going to focus specifically on the innovative part: a prototype of the A.E.E.S. application will be made instead. The group has decided not to focus on the fall detection component anymore, but only to describe it as clearly and concretely as possible, since this technology already exists. Multiple researchers have already done experiments with fall detection systems, confirming that detecting a fall can be done with high accuracy, usually approximately between 85% and 95%. How this part of the system will be designed is not within the project's scope. The time that is saved can be used to focus more on the USE aspects, on research, and on how this product can be successful for users, society and enterprise. Time can, for example, be spent on interviews to gather real-world information.
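To give a feeling for what an accuracy figure in that range means in practice, a small Bayes calculation shows why false positives can still dominate when real falls are rare. The 90% sensitivity/specificity is within the range quoted above; the one-fall-per-1000-hours base rate is a made-up assumption for illustration only.

```python
def alarm_precision(sensitivity, specificity, fall_rate):
    """P(real fall | alarm) via Bayes' rule.

    sensitivity: P(alarm | fall), specificity: P(no alarm | no fall),
    fall_rate: prior probability that a given hour contains a real fall.
    """
    true_pos = sensitivity * fall_rate
    false_pos = (1.0 - specificity) * (1.0 - fall_rate)
    return true_pos / (true_pos + false_pos)

# With 90%/90% accuracy and a hypothetical 0.1% base rate,
# well under 1% of raised alarms correspond to real falls.
p = alarm_precision(0.90, 0.90, 0.001)
```

This is exactly why the follow-up questions matter: the conversation step can filter out the many false alarms that even an accurate detector produces.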
 
Another reason for changing the plans for the final design is that the design team could not provide a clear and correct answer to the question 'What is the artificial intelligence in your fall detecting emergency system?', which had been posed by the tutors multiple times. By choosing to work out the speech recognition component only, it becomes easier to show the use of the artificial intelligence and to explain in more detail what improvements artificial intelligence can bring.
 
In conclusion, instead of designing the whole A.E.E.S., a prototype of the iOS application of the A.E.E.S. will be the final design and final deliverable, which is going to be demonstrated and discussed during the last presentation. This application will be put on an Apple iPhone, which represents the speech recognition system on the body.
 
=== Preparations interview ===
Last week, several retirement homes have been contacted to find out whether elderly people wearing an emergency service system live there. It has then been decided to visit one of these homes to interview 10-15 elderly people, preferably including five who already use such an emergency system. The design team is especially curious about the elderly's thoughts on the current device, and what they would think of the A.E.E.S. '''Bram''' and '''Man-Hing''' are responsible for this task, as well as for preparing and stating the questions.
 
The group is also interested in companies or institutes that operate medical alert systems. To get to know what happens after an incoming emergency call, such an institute has to be contacted with a survey. '''Steef''' and '''Lennard''' will be responsible for this task.
 
The targeted interview locations are Vitalis Berckelhof and CSI Leende.
 
= Week 5 =
 
=== General tasks ===
* Evaluation of second tutor meeting.
 
=== Individual tasks ===
 
* '''Ken''': Do research on European projects, finalize the state-of-the-art section and update the planning appendix (possibly, help can be provided with formulating questions for CSI).
 
* '''Bram''': Work out the information that has been gathered during the interviews (part 1), and do research on how uncertainty is involved in the speech recognition component.
 
* '''Lennard''': Make clear what the advantages of the artificial intelligence aspect are, and find out how decision making can play a role in the system.
 
* ''' Man-Hing''': Work out the information that has been gathered during the interviews (part 2). Also find out how uncertainty can affect the system in general and how to deal with it.
 
* '''Steef''': Start on decision tree by looking for conversation options after a fall has been registered, and do research on European projects.
 
* '''Pieter''': Continue implementing the application.
**'''Pieter's subtasks:'''
*** Text to speech
*** Actual speech recognition
*** Question file format
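The 'question file format' subtask above could, for example, store the conversation as a small JSON file that the app loads at startup. The structure below is purely a guess based on the subtask names in this log, not the format that was actually used in the prototype.

```python
import json

# Hypothetical question file: each entry names the next question or a
# final outcome for a yes/no answer. Invented for illustration only.
QUESTIONS_JSON = """
[
  {"id": "q1", "text": "Are you hurt?",
   "yes": "q2", "no": "cancel"},
  {"id": "q2", "text": "Can you get up by yourself?",
   "yes": "monitor", "no": "alert"}
]
"""

# Index the questions by id so the app can follow the chain of answers.
questions = {q["id"]: q for q in json.loads(QUESTIONS_JSON)}
first_text = questions["q1"]["text"]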
 
=== Conclusion of the interviews ===
The interview at CSI Leende did not take place. CSI did not respond to the e-mails, whereupon CSI was contacted by phone. During the phone call, they said that someone would respond in the near future, but unfortunately, that moment never came.
 
The interview at nursing home Vitalis Berckelhof, on the other hand, went calmly and smoothly. A broad insight into how the A.E.E.S. could be used in different ways was obtained, whereupon the purpose of the A.E.E.S. could be expanded. Based on this interview, the team decided to change the appearance of the design by taking the FallWatch as a reference. The final design changed in such a way that the fall detection component can be worn on the skin instead of on the hips. In this way, the preferences of the elderly are satisfied and the system's accuracy is increased, since it includes, among other things, an additional heartbeat sensor and is as stable as the system on the belt. With this design, it also became easier to show how artificial intelligence can be used to improve the conceptual purpose of current elderly alarm systems.
 
=== Decision-making by artificial intelligence ===
Ideas:
* Involve uncertainty in synonyms of words and sentences (same semantics, different syntax), e.g. the probability that a word or sentence can be understood as a 'Yes' or 'No'.
* Decide whether the fall was a real fall or not, including an estimation of how hard the person fell, by acquiring the fall detection system's data.
* Calling different contacts depending on the severity of the situation.
* A simple version of symptom analysis can be performed, with for example a set of only five diseases. Here, the system decides which injury the casualty has sustained after falling by analysing the symptoms.
* Determine which risk factor categories have influenced the fall:
** Risk factor categories:
*** Biological
*** Behavioural
*** Environmental
*** Socioeconomic
 
The first four ideas have been implemented in the prototype.
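The first idea, uncertainty over synonyms of 'Yes' and 'No', can be sketched as a small lookup with confidence scores. The word lists and scores below are invented for the example; the real synonym lists were compiled separately (see Ken's week 6 task).

```python
# Hypothetical confidence scores for recognized answers; an utterance
# below the threshold is treated as unclear and the question is repeated.
YES_WORDS = {"yes": 1.0, "yeah": 0.9, "sure": 0.8, "ok": 0.7}
NO_WORDS = {"no": 1.0, "nope": 0.9, "not really": 0.8}

def interpret(utterance, threshold=0.6):
    """Return ('yes'|'no'|'unclear', confidence) for a recognized phrase."""
    text = utterance.lower().strip()
    if text in YES_WORDS and YES_WORDS[text] >= threshold:
        return "yes", YES_WORDS[text]
    if text in NO_WORDS and NO_WORDS[text] >= threshold:
        return "no", NO_WORDS[text]
    return "unclear", 0.0  # ask the question again
```

An answer such as "yeah" would then count as a 'Yes' with slightly lower confidence than a literal "yes", and anything unrecognized triggers a repeat of the question instead of a wrong decision.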
 
= Week 6 =
 
=== General tasks ===
* Evaluation of third tutor meeting.
 
=== Individual tasks ===
 
* '''Ken''': Acquire a list of synonyms of the words 'Yes' and 'No', and determine the severities of all acquired symptoms.
 
* '''Bram''': Finding out how and why the design will be beneficial by looking at what the device will do and what the results are from these applications.
 
* '''Lennard''': Acquire a list of symptoms and their 'synonyms' that can be used as database by the application, which will allow a more accurate insight of the current situation.
 
* ''' Man-Hing''': Acquire a list of symptoms and their 'synonyms' that can be used as database by the application, which will allow a more accurate insight of the current situation.
 
* '''Steef''': Show why this design decreases the amount of false positives compared to the existing fall detection systems.
 
* '''Pieter''': Implementing symptom analysis in the application prototype.
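The symptom analysis mentioned in the tasks above can be sketched as a simple best-overlap match against a small disease table. The diseases and symptoms below are invented placeholders, not the actual lists acquired by Lennard and Man-Hing.

```python
# Hypothetical disease/symptom table for illustration only; the
# prototype's real table was compiled from the acquired symptom lists.
DISEASES = {
    "concussion": {"headache", "dizziness", "nausea"},
    "broken hip": {"hip pain", "cannot stand"},
    "sprained ankle": {"ankle pain", "swelling"},
}

def most_likely(reported):
    """Return the disease sharing the most symptoms with the report."""
    reported = set(reported)
    return max(DISEASES, key=lambda d: len(DISEASES[d] & reported))
```

With the severities attached to each symptom (Ken's task), the same overlap score could also be weighted to escalate faster for severe symptoms.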
 
= Week 7 =
 
=== General tasks ===
* Evaluation of final tutor meeting.
* Integrate speech recognition in application.
* Testing and fine-tuning of design, suited for the final demonstration.
 
=== Individual tasks ===
 
* '''Ken''': Finalize Wiki-page: Planning
 
* '''Bram''': -
 
* '''Lennard''': -
 
* ''' Man-Hing''': Finalize Wiki-page, focussing on the interviews, USE aspects, design description, general uncertainty, conclusion and project progress (log).
 
* '''Steef''': Finalize Wiki-page
 
* '''Pieter''': Finish application
 
=== The final presentation ===
The final presentation is going to be held by '''Bram''' and '''Lennard'''. This will be the last and final moment to explain the system to the audience. Because of the amount of time that has to be spent on the structure, live demonstration, PowerPoint presentation and practising, no individual tasks were assigned to Bram and Lennard.
 
= Week 8 =
 
=== General tasks ===
* The final presentation, mainly focussed on the final design, and why it brings advantages on top of the current system.
* Evaluation of the final presentation.
* Finalize and submit Wiki-page.
* Finalize the application (to be demonstrated in the final presentation).
* Performing a peer-review.
 
=== Individual tasks ===
No individual subtasks have been assigned specifically during the last week. During the last week, the Wiki-page's structure, language (i.e. syntax and semantics) and outdated sections have been corrected properly, so the page can be submitted afterwards. The individual tasks from week 7 can also be finalized if necessary. The subtasks that had to be performed are listed below:
 
Outdated wiki chapters:
 
* Approach: Requirements
* Approach: Design options
* Final Design: Table with diseases
* Final Design: Possible expansions
 
Language:
* Replace 'we' and 'our'.
* Number relevant tables and figures.
* Any other errors in language.
 
= Citations =
<references/>
 
= Others =
[http://cstwiki.wtb.tue.nl/index.php?title=PRE2016_3_Groep10#Idea Back to homepage]

Latest revision as of 22:44, 13 April 2017
Initial subject

After a small brainstorm session during the first introduction lecture, the design team came up with the idea to design a smart, autonomous beer bottle sorting machine that can be used as an innovative extension on the control systems what are currently in use in supermarkets. Nowadays, employees are still needed to sort the different beer bottles and put them correspondingly in the correct crate. According to current technologies, this task can easily be done by a control system. Our design team want to especially do research on how artificial intelligence could be used to improve current technologies in control systems. As an example of a control system, the team mainly wants to target a beer bottle sorting machine for supermarkets, that can be demonstrated as a model. The system should be able to, according to sensors and an artificial intelligent vision system, take the empty beer bottles from a conveyer, and put them in the right crate. Is artificial intelligence able to optimize and improve this process, by for example recognizing the labels on the bottles? Can this information what is obtained from the external environment be used in current or future technologies?

The first presentation

The first presentation is going to be held by Ken and Steef.

Week 2

General tasks

  • The second presentation, mainly focussed on the actual defined planning, has to be prepared.
  • Brainstorm session on new subjects for the project.
  • Evaluation of the first presentation.
  • Define a final set of requirements, preferences and constraints.
  • Define set of needs and demands of user, society and enterprise.
  • Choose and state a specialization.
  • Define project planning.
  • Create subgroups and divide tasks to each subgroup.

Evaluation: First presentation

After our presentation, it could be conlcluded that the subject was not sufficient, since there was no clear problem that had to be solved. Therefore we had to think of a new subject or find a problem involving the old subject.

Brainstorm session

During this brainstorm session, each member had to come up with an idea what could be used as subject. This subject should contain a clear problem statement what can be resolved with robotics or artificial intelligence, including USE aspects that can be targeted. Also, the feasibility of our project according to each idea is discussed.

Ken

Advanced Elderly Emergency System (A.E.E.S.): A device, worn mostly by elderly people, that detects when the wearer has fallen. It can automatically send a warning to 'ICE' (In Case of Emergency) contacts or even call 112, sending its location along with the alert. A microphone and camera can be used to assess the situation even faster, and connecting the device to the internet makes all of this possible more quickly. The device can also ask the wearer questions in case of an emergency, which can be answered with simple replies (yes/no).

Bram

HIV Testing Robot: This robot can help people in certain regions of Africa who do not know that they have HIV (or another STD). The robot drives from village to village to test people. A person lets the robot take some blood or urine so it can run the tests. The robot carries a curtain (or foldable box) for privacy while the person is urinating. A small testing lab is built into the robot, which runs the test and shows a clear result to the person (for example, a happy or sad smiley) on an integrated screen.

The robot might first need to educate people on what STDs are, how they spread from person to person, and why having them is bad. It might also deliver condoms. This education can be difficult, because many people may not speak English, so the robot needs to support the languages of the different regions as well. It could also use many pictures to explain things simply.

If a person has HIV or another disease, the robot records the person's location and sends it to the nearest doctor, asking whether the doctor can visit the person at home. It might take a picture of the person, or save the person's fingerprint, and send that to the doctor as well, so the doctor knows who the patient is. It could also show the patient the route to the nearest doctor and explain why he or she needs to go there.

Based on this idea, several deliverables are possible, such as the explanation video in English or the design of the robot itself. While designing the robot, several important aspects should be kept in mind, since the robot needs to comfort people rather than scare them off. Taking blood can be scary for some patients, so details such as the amount of blood to be taken have to be considered.

Pieter

Self-adjusting monitor/chair (including energizer): It is hard to maintain a correct seating position at a desk, and it is also hard to set up the monitor height and/or tilt correctly. A computer chair or monitor that automatically adjusts the height of the monitor or the seat settings can resolve this. It measures your seating position to determine whether you are sitting correctly and adjusts accordingly. The monitor uses a camera to see how your head is angled and adjusts its height and/or tilt to the most ideal position. [1]

Steef

Container opener: Some people are unable to open a container because they have a disability or simply do not have enough strength. A container opener that adapts its shape to different types of containers can open them for people who cannot do so effectively themselves.

Man-Hing Wong

Adaptive cruise control (ACC): A system that can be implemented in today's vehicles as an addition to the current cruise control system. The system keeps track of its environment so that the vehicle can adapt its velocity to the vehicle driving in front of it, primarily to prevent traffic congestion and traffic accidents.

Lennard

Mosquito catcher: Mosquitoes are among the most annoying insects known to mankind. Certain species are even considered some of the most dangerous animals on the planet, since they can carry diseases (e.g. malaria). A robotic version of a bug-eating plant could resolve this problem. The robot lures a mosquito with a lamp or a scent, and by using a motion sensor, a heat sensor and a microphone, it 'bites' when the mosquito gets close enough.

New subject

To determine the new subject for the project, the design team made a points-matrix showing which subject suits the team best according to individual opinions and relevant course requirements. All proposed robot technologies were discussed and clearly explained during the brainstorm session.

Subject                                           Problem statement   Robotics / AI   Feasibility   Total points
Elderly rescue bracelet                           4 5 5 5 5           5 3 4 4 5       3 5 5 5 5     68
Artificial intelligent HIV testing robot          2 2 2 2 4           1 2 3 2 2       2 1 1 2 2     30
Mosquito catcher                                  5 3 2 3 3           1 4 3 3 3       4 3 2 1 1     41
Container opener                                  3 1 1 1 1           2 1 1 1 1       1 2 1 2 3     22
Adaptive cruise control (ACC)                     1 1 4 3 4           3 4 5 3 4       4 3 2 4 4     49
Self-adjusting monitor/chair (incl. energizer)    4 2 3 4 5           5 2 5 2 5       5 4 3 3 5     57
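The totals in the matrix are simply the sum of each row's fifteen individual scores. They can be reproduced with a short sketch (the score lists are copied directly from the table; the variable names are illustrative):

```python
# Scores per subject: fifteen individual ratings covering problem statement,
# robotics/AI content, and feasibility, copied from the points-matrix.
scores = {
    "Elderly rescue bracelet": [4, 5, 5, 5, 5, 5, 3, 4, 4, 5, 3, 5, 5, 5, 5],
    "Artificial intelligent HIV testing robot": [2, 2, 2, 2, 4, 1, 2, 3, 2, 2, 2, 1, 1, 2, 2],
    "Mosquito catcher": [5, 3, 2, 3, 3, 1, 4, 3, 3, 3, 4, 3, 2, 1, 1],
    "Container opener": [3, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 2, 3],
    "Adaptive cruise control (ACC)": [1, 1, 4, 3, 4, 3, 4, 5, 3, 4, 4, 3, 2, 4, 4],
    "Self-adjusting monitor/chair (incl. energizer)": [4, 2, 3, 4, 5, 5, 2, 5, 2, 5, 5, 4, 3, 3, 5],
}

# Total points per subject, and the subject with the highest total.
totals = {subject: sum(points) for subject, points in scores.items()}
winner = max(totals, key=totals.get)  # -> "Elderly rescue bracelet" (68 points)
```

Summing the rows confirms that the elderly rescue bracelet wins with 68 points.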


Based on this points-matrix, the elderly rescue bracelet was chosen as the new project subject: a system that detects when someone falls and provides help afterwards. The system asks the person who just fell questions and determines accordingly whether emergency action should be taken. The group will design a model of this technology to demonstrate its abilities and functions, keeping USE and technical aspects in mind.

Specialization

To choose and state a specialization for the design, the current state-of-the-art of wearable emergency systems has been surveyed. A specialization is needed to make the design specific and to set it apart from existing designs. All existing designs of such wearable systems consist of a manual button that has to be pressed after a fall. This is of course a safe and recommended way to protect elderly people from worse, but is it actually practical? What if an elderly person falls in such a way that he or she is no longer capable of moving an arm due to broken bones?

Besides these models with only a manual button, there are also European projects on systems with an integrated fall-detecting component, so that manually pressing a button is no longer needed. The A.E.E.S. has the manual press button as well as the fall detection component, but to specialize the design, artificial intelligence will be implemented. Such a system does not exist yet, and during this project it should become clear whether artificial intelligence can indeed bring advantages to the emergency principle.

The second presentation

The second presentation is held by Pieter and Man-Hing.


Week 3

General tasks

  • Evaluation of the second presentation.
  • Discuss design options.
  • Preliminary model.
  • State a concrete description of the final design.
  • Document all relevant ethical aspects.

Individual tasks

Each week, each member individually gets a task to accomplish before the start of the next meeting. The tasks have been specified according to the evaluation and the group and tutor meetings.

  • Ken: Writing and detailing the state-of-the-art section.
  • Bram: Doing research on the implementation of the artificial intelligence.
  • Lennard: Explaining how to build a fall detecting system according to a scientific paper.
  • Man-Hing: Work out the list of needs and demands of the users, society and enterprise, and find out in which cases a certain person should be held responsible.
  • Steef: Find information on all components that should be acquired, including all costs and availability.
  • Pieter: Doing research on the speech recognition.

Evaluation: Second presentation

The presentation was quite clear, and most questions from the audience could be answered properly. Nonetheless, there were some minor points for the group to think about to make the final design, tasks and planning more concrete. Specific questions such as 'How can you keep the amount of false positives as low as possible?' and 'What is the artificial intelligence in the system?' were addressed afterwards and have all been answered on the wiki page. The planning of the design project looked decent, with clear milestones and deliverables. Unfortunately, the planning, milestones and deliverables had to be changed during the project, since the plans for the final design changed.


Week 4

General tasks

  • Evaluation of first tutor meeting.
  • Contact instances for interviews.
  • Prepare relevant interview questions for elderly.
  • Interview the elderly.
  • Start on application development.
  • Elaborate on the subject on the wiki page.

Individual tasks

  • Ken: Working on expanding the state-of-the-art and updating the project progress (log) and the planning.
  • Lennard: Describe the fall detecting system of Falin Wu et al.
  • Man-Hing: Start writing the final design section and define a design description of how the prototype will function and what it will look like.
  • Steef: Look at the possibilities for communication between the separate components.
  • Pieter: Find Google Cloud speech recognition API and do research on feasibility of using a smartwatch.
    • Pieter's subtasks:
      • Research text to speech API's
      • Starting app
      • Basic speech recognition interface
      • Simulation buttons
      • Fall detected
      • Cancel simulation
      • Enter text to simulate voice

Changes on deliverables and final design

During the first tutor meeting, the tutors suggested that the group should consider changing the final design as well as the deliverables: not because it was incorrect, but because it was too time-consuming and not focused enough on something new or innovative.

Initially, the final design would be a prototype consisting of two components, a fall detection component and a speech recognition component, integrated into one wearable emergency system for the elderly. The fall detection component would be attached to the belt, since this location on the human body is the most balanced while moving and performing everyday maneuvers. The speech recognition component would be a wristband, so that talking can be done easily. Artificial intelligence should be taken into consideration when designing this part. The wristband should ask questions to the person who just fell. Artificial intelligence can be used to hold a conversation with the person based on the situation, and then determine the state of the current situation. Based on that state, it can act fully autonomously by notifying health services when needed. This can speed up the process of helping injured elderly people, reduce the number of emergency cases in which 911 has to be involved, or even save lives.
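The intended conversation flow can be sketched as a simple question-and-escalation loop. The questions, the severity threshold and the resulting actions below are purely hypothetical illustrations of the idea, not the actual prototype behaviour:

```python
# Hypothetical sketch of the post-fall conversation loop: the system asks
# simple yes/no questions, then escalates based on the answers.
QUESTIONS = [
    "Did you fall?",
    "Are you hurt?",
    "Do you need help getting up?",
]

def assess_situation(answers):
    """Count affirmative answers as a crude severity estimate and
    return the corresponding (illustrative) action."""
    severity = sum(1 for a in answers if a == "yes")
    if severity >= 2:
        return "call emergency services"
    if severity == 1:
        return "notify ICE contact"
    return "no action needed"

# Example: the wearer confirms falling and being hurt.
print(assess_situation(["yes", "yes", "no"]))  # -> call emergency services
```

A real system would of course weigh the questions differently and combine them with sensor data, but the loop above captures the ask-assess-escalate principle.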

The plans for the final design had to be changed this week. Instead of making the whole system, the design team will specifically focus on the innovative part: a prototype of the A.E.E.S. application will be made instead. The group decided to no longer focus on the fall detection component, but only to describe it as clearly and concretely as possible, since this technology already exists. Multiple researchers have already performed experiments with fall detection systems, confirming that a fall can usually be detected with an accuracy between roughly 85% and 95%. How this part of the system is designed is not within the project's scope. The time saved can be used to focus more on the USE aspects, on research, and on how this product can be successful for users, society and enterprise. Time can, for example, be spent on interviews to gather real-world information.

Another reason for changing the plans for the final design is that the design team could not provide a clear and correct answer to the question 'What is the artificial intelligence in your fall detecting emergency system?', which the tutors posed multiple times. By working out only the speech recognition component, it is easier to show the use of artificial intelligence and to explain in more detail what improvements it can bring.

In conclusion, instead of designing the whole A.E.E.S., a prototype of the iOS application of the A.E.E.S. will be the final design and final deliverable, to be demonstrated and discussed during the last presentation. The application will run on an Apple iPhone, which represents the speech recognition system on the body.

Preparations interview

Last week, several retirement homes were contacted to find out whether elderly people living there wear an emergency service system. It was then decided to visit one of these homes to interview 10-15 elderly residents, preferably including five who already use such an emergency system. The design team is especially curious about the residents' thoughts on the current device, and what they would think of the A.E.E.S. Bram and Man-Hing are responsible for this task, as well as for preparing and formulating the questions.

The group is also interested in companies or institutes that operate medical alert systems. To find out what happens after an incoming emergency call, such an organisation will be contacted with a survey. Steef and Lennard are responsible for this task.

The targeted interview locations are Vitalis Berckelhof and CSI Leende.

Week 5

General tasks

  • Evaluation of second tutor meeting.

Individual tasks

  • Ken: Do research on European projects, finalize the state-of-the-art section and update the planning appendix (if needed, help can be provided with formulating questions for CSI).
  • Bram: Work out the information gathered during the interviews (part 1), and do research on how uncertainty is involved in the speech recognition component.
  • Lennard: Make clear what the advantages of the artificial intelligence aspect are, and find out how decision making can play a role in the system.
  • Man-Hing: Work out the information gathered during the interviews (part 2). Also find out how uncertainty can affect the system in general and how to deal with it.
  • Steef: Start on the decision tree by looking for conversation options after a fall has been registered, and do research on European projects.
  • Pieter: Continue implementing the application.
    • Pieter's subtasks:
      • Text to speech
      • Actual speech recognition
      • Question file format

Conclusion of the interviews

The interview at CSI Leende did not take place. CSI did not respond to the e-mails, after which CSI was contacted by phone. During that call, they said someone would respond in the near future, but unfortunately that moment never came.

The interview at nursing home Vitalis Berckelhof, on the other hand, went calmly and smoothly. It provided broad insight into the different ways the A.E.E.S. could be used, which allowed the purpose of the A.E.E.S. to be expanded. Based on this interview, the team decided to change the appearance of the design, taking the FallWatch as a reference. The final design was changed so that the fall detection component is worn on the skin instead of on the hips. This satisfies the preferences of the elderly and increases the system's accuracy, since it includes, for example, an additional heartbeat sensor and is as stable as the belt-worn system. With this design it also became easier to show how artificial intelligence can improve the conceptual purpose of current elderly alarm systems.

Decision-making by artificial intelligence

Ideas:

  • Involve uncertainty in synonyms of words and sentences (same semantics, different syntax), i.e. the probability that a word or sentence should be understood as a 'yes' or a 'no'.
  • Decide whether the fall was a real fall, including an estimate of how hard the person fell, by acquiring the fall detection system's data.
  • Call different contacts depending on the severity of the situation.
  • Perform a simple version of symptom analysis, for example with a set of only five diseases. The system then decides which injury the casualty has sustained after falling by analysing his or her symptoms.
  • Determine which risk factor categories have influenced the fall:
    • Risk factor categories:
      • Biological
      • Behavioural
      • Environmental
      • Socioeconomic

The first four ideas have been implemented in the prototype.
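The first idea, scoring how strongly a spoken answer counts as a 'yes' or a 'no', could look like the following sketch. The synonym lists and confidence weights are illustrative assumptions, not the data actually used in the prototype:

```python
# Illustrative sketch: map a free-form answer to 'yes'/'no' with a confidence
# value, using hypothetical synonym weights (same semantics, different syntax).
YES_WORDS = {"yes": 1.0, "yeah": 0.9, "sure": 0.8, "okay": 0.6}
NO_WORDS = {"no": 1.0, "nope": 0.9, "nah": 0.8, "never": 0.7}

def interpret(utterance):
    """Return ('yes' | 'no' | 'unknown', confidence) for a spoken answer."""
    words = utterance.lower().split()
    yes_score = max((YES_WORDS.get(w, 0.0) for w in words), default=0.0)
    no_score = max((NO_WORDS.get(w, 0.0) for w in words), default=0.0)
    if yes_score == no_score:          # ambiguous or unrecognized answer
        return ("unknown", 0.0)
    if yes_score > no_score:
        return ("yes", yes_score)
    return ("no", no_score)
```

For example, interpret("yeah I think so") yields ('yes', 0.9), while an unrecognized answer such as "maybe" is classified as 'unknown', prompting the system to repeat the question.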

Week 6

General tasks

  • Evaluation of third tutor meeting.

Individual tasks

  • Ken: Acquire a list of synonyms of the words 'yes' and 'no', and determine the severities of all acquired symptoms.
  • Bram: Find out how and why the design will be beneficial by looking at what the device will do and what the results of these applications are.
  • Lennard: Acquire a list of symptoms and their 'synonyms' that can be used as a database by the application, allowing a more accurate insight into the current situation.
  • Man-Hing: Acquire a list of symptoms and their 'synonyms' that can be used as a database by the application, allowing a more accurate insight into the current situation.
  • Steef: Show why this design decreases the number of false positives compared to existing fall detection systems.
  • Pieter: Implementing symptom analysis in the application prototype.

Week 7

General tasks

  • Evaluation of final tutor meeting.
  • Integrate speech recognition in application.
  • Testing and fine-tuning of design, suited for the final demonstration.

Individual tasks

  • Ken: Finalize Wiki-page: Planning
  • Bram: -
  • Lennard: -
  • Man-Hing: Finalize the wiki page, focusing on the interviews, USE aspects, design description, general uncertainty, conclusion and project progress (log).
  • Steef: Finalize Wiki-page
  • Pieter: Finish application

The final presentation

The final presentation will be held by Bram and Lennard. This is the last moment to explain the system to the audience. Because of the amount of time that has to be spent on the structure, the live demonstration, the PowerPoint presentation and practising, no other individual tasks were assigned to Bram and Lennard.

Week 8

General tasks

  • The final presentation, mainly focused on the final design and why it brings advantages over the current system.
  • Evaluation of the final presentation.
  • Finalize and submit Wiki-page.
  • Finalize the application (to be demonstrated in the final presentation).
  • Performing a peer-review.

Individual tasks

No specific individual subtasks were assigned during the last week. Instead, the wiki page's structure, language (i.e. syntax and semantics) and outdated sections were corrected so that the page could be submitted. The individual tasks from week 7 could also be finalized if necessary. The subtasks that had to be performed are listed below:

Outdated wiki chapters:

  • Approach: Requirements
  • Approach: Design options
  • Final Design: Table with diseases
  • Final Design: Possible expansions

Language:

  • Replace 'we' and 'our'.
  • Number relevant tables and figures.
  • Any other errors in language.

Citations

  1. Supporting Implicit Human-to-Vehicle Interaction: Driver Identification from Sitting Postures https://pdfs.semanticscholar.org/a6ba/1de940b50603b3440365bcda8c82939f74f7.pdf

Others

Back to homepage