0LAUK0 PRE2016 3 Groep10 Project progress
Revision as of 12:46, 5 April 2017
Week 1
General tasks
- A design team of six students has to be formed.
- Brainstorm session on subjects for the project.
- The first presentation, mainly focussed on the actual chosen subject, has to be prepared.
The design team
The design team consists of six students of different departments from the University of Technology Eindhoven:
- Bram Grooten (Applied Mathematics)
- Ken Hommen (Industrial Engineering)
- Lennard Buijs (Mechanical Engineering)
- Man-Hing Wong (Electrical Engineering)
- Pieter van Loon (Software Science)
- Steef Reijntjes (Electrical Engineering)
Initial subject
After a small brainstorm session during the first introduction lecture, the design team came up with the idea to design a smart, autonomous beer bottle sorting machine that can be used as an innovative extension of the control systems that are currently in use in supermarkets. Nowadays, employees are still needed to sort the different beer bottles and put them in the correct crate. With current technology, this task can easily be done by a control system. Our design team especially wants to do research on how artificial intelligence could be used to improve current control system technology. As an example of a control system, the team mainly wants to target a beer bottle sorting machine for supermarkets, which can be demonstrated as a model. The system should be able to, using sensors and an artificially intelligent vision system, take the empty beer bottles from a conveyor and put them in the right crate. Is artificial intelligence able to optimize and improve this process, for example by recognizing the labels on the bottles? Can the information that is obtained from the external environment be used in current or future technologies?
The first presentation
The first presentation is going to be held by Ken and Steef.
Week 2
General tasks
- The second presentation, mainly focussed on the actual defined planning, has to be prepared.
- Brainstorm session on new subjects for the project.
- Evaluation of the first presentation.
- Define a final set of requirements, preferences and constraints.
- Define set of needs and demands of user, society and enterprise.
- Choose and state a specialization.
- Define project planning.
- Create subgroups and divide tasks to each subgroup.
Evaluation
After our presentation, it could be concluded that the subject was not sufficient, since there was no clear problem to be solved. Therefore we had to think of a new subject or find a problem involving the old one.
Brainstorm session
During this brainstorm session, each member had to come up with an idea that could be used as a subject. This subject should contain a clear problem statement that can be resolved with robotics or artificial intelligence, including USE aspects that can be targeted. The feasibility of the project according to each idea was also discussed.
Ken
Advanced Elderly Emergency System (A.E.E.S.): A device, worn mostly by elderly people, that detects when the wearer has fallen. It can automatically send a warning to 'ICE' persons or even call 112, automatically sending its location along with it. A microphone and camera can be used to assess the situation even faster. Connecting the device to the internet makes all of this possible. The device can also 'ask' questions to the owner in case of emergency, which can be answered with simple answers (yes/no).
Bram
HIV Testing Robot: This robot can help people with HIV in certain regions of Africa who do not know that they have HIV (or another STD). The robot drives from village to village to perform tests on people. They need to let the robot take some blood or urine, so the robot can run the tests. The robot carries a curtain (or foldable box) for privacy while the person is urinating. The test lab is built inside the robot, which runs the test and shows a clear result to the person (for example: a happy or sad smiley) via an integrated screen.
It might need to educate people first on what STDs are, how they pass from person to person, and why it is bad to have them. It might also deliver condoms. This educating can be difficult, because there might not be many people speaking English, so the robot needs to learn the languages of the different regions as well. It could also use many pictures to try to explain things simply.
If a person has HIV or another disease, the robot remembers the location where the person is and sends it to the nearest doctor, to ask whether the doctor can come over to the person's house. It might take a picture of the person, or save the person's fingerprint, and send it to the doctor as well, so the doctor knows who this patient is. It could also show the patient the route to the nearest doctor and explain why he or she needs to go there.
Based on this idea, several things could be made, such as the explanation video in English or the design of the robot. While designing the robot, several important aspects should be kept in mind, since the robot needs to comfort people and not scare them off. Taking blood can be scary for some patients, so details such as the amount of blood that should be taken have to be considered.
Pieter
Self-adjusting monitor/chair (including energizer): It is really hard to maintain a correct seating position at your desk, and also to set up the monitor height and/or tilt correctly. A computer chair or monitor that automatically adjusts the height of the monitor or the seat settings can resolve this. It measures from your seating position whether you are sitting correctly and adjusts accordingly. The monitor uses a camera to see how your head is angled and adjusts the height and/or tilt to the most ideal position. [1]
Steef
Container opener: Some people are unable to open a container, because they have a disability or simply do not have enough strength. A container opener adapts its shape to different types of containers and opens them for people who cannot do so effectively.
Man-Hing Wong
Adaptive cruise control (ACC): A system that can be implemented in today's vehicles as an addition to the current cruise control system. The system keeps track of its environment in such a way that it allows a vehicle to adapt its velocity to the vehicle driving in front of it, especially to prevent traffic congestion and traffic accidents.
Lennard
Mosquito catcher: Mosquitos are among the most annoying insects to mankind. Certain species are even stated to be among the most dangerous animals on the planet, since they can carry diseases (e.g. malaria). A robotic version of a bug-eating plant can resolve this problem. This robot lures a mosquito with a lamp or a scent. Using a motion sensor, a heat sensor and a microphone, it will 'bite' when the mosquito gets close enough.
New subject
To determine the new subject for our project, the design team made a points-matrix to determine which subject suits us best according to individual opinions and relevant course requirements. All proposed robot technologies were discussed and clearly explained during the brainstorm session.
Subject | Points for problem statement | Points for robotics or artificial intelligence content | Points for feasibility | Total points
Elderly rescue bracelet | 4 5 5 5 5 | 5 3 4 4 5 | 3 5 5 5 5 | 68
Artificial intelligent HIV testing robot | 2 2 2 2 4 | 1 2 3 2 2 | 2 1 1 2 2 | 30
Mosquito catcher | 5 3 2 3 3 | 1 4 3 3 3 | 4 3 2 1 1 | 41
Container opener | 3 1 1 1 1 | 2 1 1 1 1 | 1 2 1 2 3 | 22
Adaptive cruise control (ACC) | 1 1 4 3 4 | 3 4 5 3 4 | 4 3 2 4 4 | 49
Self-adjusting monitor/chair (incl. energizer) | 4 2 3 4 5 | 5 2 5 2 5 | 5 4 3 3 5 | 57
According to this points-matrix, it has been decided that the elderly rescue bracelet will be the new subject for the project: a system that is able to detect when someone is falling and whether help is needed afterwards. This system asks questions to the person who just fell, and determines accordingly whether emergency actions should be taken. A model of this technology will be designed by the group to demonstrate its abilities and functions, keeping USE and technical aspects in mind.
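The totals in the points-matrix are simply the sum of the fifteen individual scores each subject received; a quick check:

```python
# Each subject received five scores per criterion (problem statement,
# robotics/AI content, feasibility); the total is the sum of all fifteen.
matrix = {
    "Elderly rescue bracelet":                        [4, 5, 5, 5, 5,  5, 3, 4, 4, 5,  3, 5, 5, 5, 5],
    "Artificial intelligent HIV testing robot":       [2, 2, 2, 2, 4,  1, 2, 3, 2, 2,  2, 1, 1, 2, 2],
    "Mosquito catcher":                               [5, 3, 2, 3, 3,  1, 4, 3, 3, 3,  4, 3, 2, 1, 1],
    "Container opener":                               [3, 1, 1, 1, 1,  2, 1, 1, 1, 1,  1, 2, 1, 2, 3],
    "Adaptive cruise control (ACC)":                  [1, 1, 4, 3, 4,  3, 4, 5, 3, 4,  4, 3, 2, 4, 4],
    "Self-adjusting monitor/chair (incl. energizer)": [4, 2, 3, 4, 5,  5, 2, 5, 2, 5,  5, 4, 3, 3, 5],
}
totals = {subject: sum(scores) for subject, scores in matrix.items()}
winner = max(totals, key=totals.get)
print(totals["Elderly rescue bracelet"], winner)  # -> 68 Elderly rescue bracelet
```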
Specialization
To choose and state a specialization for our design, the current state of the art of wearable systems has been examined. A specialization is needed to specify our design and to differ from existing designs. All existing designs of this wearable system rely on a manual button that has to be pressed after falling. Of course, this is a safe and recommended solution to protect elderly people from worse, but is it actually that practical? What if an elderly person falls to the ground in such a way that they are no longer capable of moving their arms due to broken bones? To provide more assurance, our final design has some extended functionality compared with existing models, since it is not necessary to manually press a button.
The second presentation
The second presentation is held by Pieter and Man-Hing.
Week 3
General tasks
- Evaluation of the second presentation.
- Discuss design options.
- Preliminary model.
- State a concrete description of the final design.
- Document all relevant ethical aspects.
Individual tasks
Each week, each member will individually get a task to accomplish before the start of the next meeting. The tasks have been specified according to the evaluation and the group and tutor meetings.
- Ken: Writing and detailing the state-of-the-art section.
- Bram: Doing research on the implementation of the artificial intelligence.
- Lennard: Explaining how to build a fall detecting system according to a scientific paper.
- Man-Hing: Work out list of needs and demands of the users, society and enterprise and find out in which cases a certain person should be held responsible.
- Steef: Find information of all components that should be gathered, including all costs and availability.
- Pieter: Doing research on the speech recognition.
Evaluation
The presentation was pretty clear. Most questions from the audience could be answered properly. Nonetheless, there were some minor points for the group to think about to make the final design, tasks and planning more concrete. Specific questions such as 'How can you keep the amount of false positives as low as possible?' and 'What is the artificial intelligence in the system?' have been focused on afterwards, and have all been answered on this wiki page. Considering the planning of the design project, it all looks decent, with clear milestones and deliverables. Unfortunately, the planning, milestones and deliverables had to be changed during the project, since the plans for the final design changed.
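The false-positive question can be made concrete with a simple base-rate calculation. All rates below are assumed for illustration only; the point is that even an accurate detector produces mostly false alarms when real falls are rare among monitored movements:

```python
# Illustrative (assumed) rates: the detector flags 95% of real falls,
# wrongly flags 5% of non-fall events, and 1 in 1000 monitored
# movement events is a real fall.
sensitivity = 0.95
false_positive_rate = 0.05
p_fall = 0.001

# Probability that an alarm corresponds to a real fall (Bayes' rule).
p_alarm = sensitivity * p_fall + false_positive_rate * (1 - p_fall)
p_real_given_alarm = sensitivity * p_fall / p_alarm
print(round(p_real_given_alarm, 3))  # -> 0.019
```

With these assumed numbers, fewer than 2% of alarms would be real falls, which is exactly why a follow-up conversation that lets the user confirm or cancel the alarm helps keep false positives low before emergency services are involved.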
Week 4
General tasks
- Evaluation of first tutor meeting.
- Contact instances for interviews.
- Prepare relevant interview questions for elderly.
Individual tasks
- Ken: Working on expanding the state-of-the-art and updating the project progress (log) and the planning.
- Bram: iOS speech recognition API: https://developer.apple.com/videos/play/wwdc2016/509/.
- Lennard: Describe the fall detecting system of Falin Wu et al.
- Man-Hing: Start writing the final design section and define a design description of how the prototype will function and what it will look like.
- Steef:
- Pieter: Find Google Cloud speech recognition API and do research on feasibility of using a smartwatch.
- Pieter's subtasks:
- Research text to speech API's
- Starting app
- Basic speech recognition interface
- Simulation buttons
- Fall detected
- Cancel simulation
- Enter text to simulate voice
Changes on deliverables and final design
During the first tutor meeting, the tutors suggested that the group should consider changing the final design, as well as the deliverables. Not because it was incorrect, but because it was too time-consuming and not focused enough on 'something new or innovative'.
Initially, the final design would be a prototype consisting of two components: a fall detection component and a speech recognition component, integrated into one wearable emergency system for elderly people. The fall detection component would be attached to the belt, since this location on the human body is the most balanced while moving and performing human manoeuvres. The speech recognition component would be a wristband, so that talking can be done easily. Artificial intelligence should be taken into consideration when designing this part. The wristband should ask questions to the person who has just fallen. Artificial intelligence can be used to have a conversation with the person based on the situation, and subsequently determine the state of the current situation. Based on this state, it can act fully autonomously by notifying health services when needed. This can speed up the process of helping the wounded elderly person, reduce the number of emergency cases where 911 has to be involved, or even save lives.
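The conversation behaviour described above can be sketched as a small decision procedure. The questions and actions below are illustrative assumptions, not the project's final conversation design:

```python
# Hypothetical escalation logic after a detected fall. Answers map
# question ids to True (yes), False (no) or None (no reply).
def assess_fall(answers):
    ok = answers.get("are_you_ok")
    if ok is None:
        return "call_emergency_services"   # no response at all: assume the worst
    if ok:
        return "cancel_alarm"              # likely a false alarm
    if answers.get("are_you_in_pain"):
        return "call_emergency_services"   # conscious and hurt
    return "notify_ice_contact"            # conscious, shaken, but not in pain

print(assess_fall({"are_you_ok": False, "are_you_in_pain": True}))
# -> call_emergency_services
```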
The plans for the final design had to be changed this week. Instead of making the whole system, the design team is going to focus specifically on the innovative part: a prototype of the A.E.E.S. application will be made instead. The group has decided not to focus on the fall detection component anymore, but only to describe it as clearly and concretely as possible, since this technology already exists. Multiple researchers have already done experiments with fall detection systems, confirming that detecting a fall can be done with high accuracy, usually approximately between 85% and 95%. How this part of the system will be designed is not within the project's scope. The time saved can be used to focus more on the USE aspects, research and how this product can be successful for users, society and enterprise. Time can, for example, be spent on interviews to gather real-world information.
Another reason for changing the plans for the final design is that the design team could not provide a clear and correct answer to the question 'What is the artificial intelligence in your fall detecting emergency system?', which was posed by the tutors multiple times. By choosing to work out only the speech recognition component, it becomes easier for us to show the use of the artificial intelligence and explain in more detail what improvements artificial intelligence can bring.
In conclusion, instead of designing the whole A.E.E.S., a prototype of the iOS application of A.E.E.S. will be the final design and final deliverable, which is going to be demonstrated and discussed during the last presentation. This application will run on an iPhone, which represents the speech recognition system worn on the body.
Preparations interview
Last week, several retirement homes were contacted to find out whether elderly people live there who wear an emergency service system. It has then been decided to visit one of these homes to interview 10-15 elderly people who possess such an emergency system; most preferably, five of them already use one. The design team is especially curious about the elderly people's thoughts on the current device, and what they would think of the A.E.E.S. Bram and Man-Hing are responsible for this task, as well as for preparing and formulating the questions.
The group is also interested in companies or institutes that operate medical alert systems. To get to know what happens after an incoming emergency call, these instances will be contacted with a survey. Steef and Lennard will be responsible for this task.
The targeted interview locations are respectively Vitalis Berckelhof and CSI Leende.
Week 5
General tasks
- Evaluation of second tutor meeting.
Individual tasks
- Ken: Do research on European projects, finalize the state-of-the-art section and update the planning appendix (if needed, help can be provided with formulating questions for CSI).
- Bram: Work out the information that has been gathered during the interviews (part 1), and do research on how uncertainty gets involved in the speech recognition component.
- Lennard: Make clear what the advantages of the artificial intelligence aspect are, and find out whether 'symptom analysis' is compatible with the system.
- Man-Hing: Work out the information that has been gathered during the interviews (part 2). Also find out how uncertainty can affect the system and how to deal with it.
- Steef: Start on decision tree by looking for conversation options after a fall has been registered, and do research on European projects.
- Pieter: Continue implementing the application.
- Pieter's subtasks:
- Text to speech
- Actual speech recognition
- Question file format
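One possible shape for the 'question file format' subtask, purely as an assumed illustration (not the format Pieter actually used): a JSON file in which every yes/no answer points to either a follow-up question or a final action.

```python
import json

# Hypothetical question file: each node names the next node (another
# question) or a final action, depending on a yes/no answer.
QUESTION_FILE = """
{
  "start": "are_you_ok",
  "nodes": {
    "are_you_ok":      {"text": "Are you OK?",
                        "yes": "action:cancel_alarm",
                        "no":  "are_you_in_pain"},
    "are_you_in_pain": {"text": "Are you in pain?",
                        "yes": "action:call_emergency_services",
                        "no":  "action:notify_ice_contact"}
  }
}
"""
tree = json.loads(QUESTION_FILE)

def next_node(node_id, answer_is_yes):
    """Follow one yes/no answer to the next question id or final action."""
    return tree["nodes"][node_id]["yes" if answer_is_yes else "no"]

print(next_node("are_you_ok", False))  # -> are_you_in_pain
```

Keeping the questions in a data file rather than in code would let the conversation be extended without rebuilding the app.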
Evaluation of the interviews
Week 6
Ideas for the coming weeks to make the AI more intelligent:
- Calling different contact persons depending on the situation.
- Using fall detection system data: the probability that the detection was a real fall, and how hard the person fell.
- Performing a simple version of symptom analysis with an example set of only 5 diseases.
- Asking questions to detect the risk factor categories which influence falling:
- Biological
- Behavioural
- Environmental
- Socioeconomic
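The simple version of symptom analysis over a set of only five diseases could be sketched as follows. The conditions and symptom sets are simplified illustrations, not a medical knowledge base:

```python
# Toy symptom analysis over five example conditions. The symptom sets
# here are illustrative assumptions only.
CONDITIONS = {
    "stroke":       {"slurred speech", "one-sided weakness", "confusion"},
    "heart attack": {"chest pain", "shortness of breath"},
    "hip fracture": {"hip pain", "unable to stand"},
    "concussion":   {"headache", "dizziness", "confusion"},
    "dehydration":  {"dizziness", "thirst", "dry mouth"},
}

def rank_conditions(reported):
    """Rank conditions by the fraction of their symptoms that were reported."""
    reported = set(reported)
    scores = {name: len(symptoms & reported) / len(symptoms)
              for name, symptoms in CONDITIONS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_conditions({"hip pain", "unable to stand"})[0])
# -> ('hip fracture', 1.0)
```

Each symptom would be established through the yes/no conversation, and the ranking could then be passed along when notifying a contact person or emergency services.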
Week 6 finalizing
- Creating the symptom analysis in the application. (Pieter)
- Finding out how and why our design will be of benefit, by looking at what the device will do and what results from these applications. (Bram)
- Coming up with a list of symptoms to be used by the application, which will allow for a more accurate insight into the current situation. (Man-Hing, Lennard, Ken)
- Showing why our current design decreases the amount of false positives compared to existing fall detection systems. (Steef)
Week 7
Individual tasks
- Ken:
- Bram: Prepare the final presentation (with Lennard).
- Lennard: Prepare the final presentation (with Bram).
- Man-Hing:
- Steef:
- Pieter: Finish the app.
The final presentation
The final presentation is going to be held by Bram and Lennard.
Citations
- ↑ Supporting Implicit Human-to-Vehicle Interaction: Driver Identification from Sitting Postures https://pdfs.semanticscholar.org/a6ba/1de940b50603b3440365bcda8c82939f74f7.pdf