Revision as of 12:58, 30 May 2020
Emotion Recognition in companion Robots for elderly people
Group Members
Name | Student Number | Study | Email
---|---|---|---
Cahitcan Uzman | 1284304 | Computer Science | c.uzman@student.tue.nl |
Zakaria Ameziane | 1005559 | Computer Science | z.ameziane@student.tue.nl |
Lex van Heugten | 0973789 | Applied Physics | l.m.v.heugten@student.tue.nl |
Problem Statement
As the years pass, people get older and start to lose some of their abilities, due to the nature of the biological human body. Older people become vulnerable in terms of health: their body movements slow down, the communication rate between neurons in their brain decreases, which might cause mental problems, and so on. Thus, their dependency on other people increases if they are to maintain their lives properly. In other words, older people may need to be cared for by someone else. However, not all elderly people are lucky enough to find someone to support them. For those people, the good news is that technology and artificial intelligence are developing, and one of their great applications is care robots for elderly people. Although the care robot is a great idea and is likely to ease human life, the technology brings some concerns and ambiguities with it. Since communication with elderly people is essential for caring, the benefit of care robots increases as the communication between the user and the robot gets clearer and easier. Moreover, emotion is one of the most important tools for communication, which is why we aim to investigate the use of emotion recognition in care robots with the help of artificial intelligence. Our main goal is to make the communication between an elderly person and the robot more powerful by making the robot understand the current emotional state of the user and behave correspondingly.
Users and their needs
The healthcare service: The healthcare services have a big shortage of human caregivers, which leads to the use of robots to take care of elderly people. The healthcare services want to provide care robots that are as similar as possible to human caregivers.
Elderly people: Elderly people need someone to take care of them physically, but also someone to share their emotions with, so the care robot needs to understand their emotions. That way it can understand what the person needs and act on that.
Enterprises who develop the care robot: These enterprises aim to improve care robot technology and make it as effective as a human caregiver by adding features that allow the robot to interact emotionally with people.
USE analysis
Users: <br\> The population of old people keeps growing at a high rate. In Europe, citizens aged 65 and over comprised 20.3% of the population in 2019 (1), and this share is expected to keep increasing every year. <br\> As people get older, they not only need someone to take care of them physically, but they also need human interaction and someone to share their emotions with; otherwise they become vulnerable to loneliness and social isolation. The leader of Britain’s GPs said in 2017: “Being lonely can be as bad for someone’s health as having a long term illness such as diabetes or high blood pressure” (2). The fight against social isolation and loneliness is an essential reason for social companion robots.<br\> Introducing facial emotion recognition in social companion robots for elderly people will help the robot understand the emotions of the person and react based on them: for example, if the person is sad, the robot may use some techniques to cheer them up, and if the person is happy, the robot can ask them to share what made them happy. By sharing emotions, elderly people will feel less lonely, and by understanding the emotion of the person, the robot will know more about what the person really needs.
<br\>
Society:
This technology can have a big impact on society. In many countries, the healthcare system cannot provide enough caregivers to cover all the needs, which makes things very difficult for elderly people and especially for their families, particularly because the phenomenon of “living alone” at the end of life has grown enormously in many societies in recent decades, even in societies with traditionally strong family ties (1). <br\>
Society will benefit from this technology because it will increase the overall happiness by preventing the social isolation and loneliness of the elderly.
<br\>
Enterprise: <br\> Over the years, the companies that develop care robots have made huge strides in improving the efficiency of their robots, from a simple robot that can only accomplish some basic physical tasks, to a robot that can communicate with people in human language, to a robot that can entertain elderly people in different ways. These companies always aim to improve their robots to satisfy the needs of the users, and adding facial emotion recognition to the robot will be a big improvement that is expected to raise the desire of elderly people to own one of these care robots. The companies will definitely benefit from this technology, as the need for care robots is expected to rise in the coming years. Studies have shown that by the year 2050, people over the age of 65 in Europe will represent 28.5% of the population, so this investment is likely to bring huge profits to those companies.
Approach
The subject of face recognition has developed strongly in the past few years. One can find face recognition and face tracking in daily life in cameras that automatically focus on the faces in view, and of course in Snapchat filters. This technology will be a good starting point for this project. However, the consistency of face recognition is very low. Factors like skin color, lighting, and deformities of the face are certain to upset most face recognition systems every once in a while. The point about deformities is especially important for our target user. As mentioned above, old age comes with many different effects, and the decay of the skin is especially relevant for this project. Skin on the face begins to sag, wrinkle, and discolor at an older age, which might create issues for our face recognition software. Tuning this software to work with the elderly will be the first milestone.
The second milestone will, of course, be the recognition of emotion. Emotion recognition is complex: it needs a reasonably large database to work correctly. Here we will probably run into the same problems, namely working with the elderly as the target group. Our software should be fine-tuned to their face structure, and the database should be built from relevant data. Finally, the system should be consistent. It is designed to be part of a care robot, and care robots should relieve part of the care that is given by human professionals; they should therefore be consistent enough to actually take some of the work pressure off their hands.
Planning
Week 1:
- Brainstorming ideas
- Decide upon a subject
- Identify the users and their needs
- Study literature about the topic
Week 2:
- USE aspects analysis
- Gather RPCs
- Care robots that already exist
- Research the elderly's opinions and their wishes/concerns
Week 3:
- Analyze how facial expressions change for different emotions <br\>
- Analyze different aspects of this technology (for example, how the robot should react to different emotions)
- The use of convolutional Neural Network for facial emotion recognition
Week 4:
- Find a database of face pictures that is large enough for CNN training
- Start the implementation of the facial emotion recognition
- Companionship and Human-Robot relations
Week 5:
- Implementation of the facial emotion recognition
Week 6:
- Testing and improving
Week 7:
- Prepare presentation and finalize the wiki page
Week 8:
- Derive conclusions and possible future improvements
- Finalize presentation.
Deliverables:
- The wiki page: this will include all the steps we took to make this project, as well as all the analysis we’ve made and results we achieved.
- Software that will be able to recognize facial emotions
- A final presentation of our project
State of the art
Companion robots that already exist
SAM robotic concierge
Luvozo PBC, which focuses on developing solutions for improving the quality of life of elderly people, started testing its product SAM in a leading senior living community in Washington. SAM is a human-sized, smiling concierge robot whose task is to check on residents in long-term care settings. This robot made the patients more satisfied, as they felt there was someone checking on them all the time, and it reduced the cost of care.<br\>
ROBEAR
Developed by scientists from RIKEN and Sumitomo Riko Company, ROBEAR is a nursing-care robot that provides physical help for elderly patients, such as lifting patients from a bed into a wheelchair, or helping patients who are able to stand up but require assistance. It is called ROBEAR because it is shaped like a giant, gentle bear.
PARO
Elderly wishes and concerns
The interview was conducted with working professionals in elderly care. Below are the questions, with generalized answers from the caregivers who took part in this interview.
Interview Questions and Answers
- What are common measurements that elderly take against loneliness? <br\>
- The elderly are passive towards their loneliness, or are often physically or mentally not capable of changing it.
- Are the elderly motivated to battle their loneliness? If so, how? <br\>
- There are a lot of activities that are organized.<br\>
- Caregivers think about ways to spend the day and provide companionship as much as possible. <br\>
- Do lonely elderly have more need of companionship or more need of an interlocutor? For example, a dog is a companion but not someone to talk to, while frequently calling a family member provides an interlocutor but no companionship. <br\>
- This differs per person. <br\>
- It is hard to maintain a conversation because it is difficult to follow or hear. Most are not able to care for a pet. Many insecurities that come with age are reasons to remain at home. <br\>
- Are there, despite installed safety measures, a lot of unnoticed accidents? For example, falling or involuntary defecation. <br\>
- Alcohol addiction. <br\>
- Bad eating habits. <br\>
- If the caregivers could be better warned against one type of accident, which one would it be and why?
- Are there clear signals that point to loneliness or sadness? <br\>
- Physical neglect of their own person.<br\>
- Sadness and emotionally distant.<br\>
- What is your method to cheer sad and lonely elderly up? <br\>
- Giving them extra personal attention. <br\>
- Supplying options to battle sadness and help them accomplish this. <br\>
Results
Loneliness is one of the biggest challenges for caregivers of the elderly. Theoretically, the elderly do not have to be lonely: there are many options for finding companionship and activity, and these are provided or promoted by caregivers. However, old age comes with many disadvantages. Activities become too difficult or intense, the elderly may have difficulties keeping up a conversation in real life or via phone, and they are not able to care for companions like pets.
Common signs of loneliness are visible sadness, confusion, bad eating habits, and general self-neglect. Attention is the only real cure, according to caregivers. Because the elderly lose the ability to look for companionship themselves, this attention should come from others without input from the elderly.
Conclusions
Since the elderly lose the ability to find companionship and hold meaningful conversations, loneliness is a certainty, especially when the people close to them die or move out of their direct surroundings. A companion robot that provides some attention may be a useful supplement to the attention that caregivers and family provide.
RPCs
Requirements
Emotion recognition software requirements
- The software shall capture the face of the person
- The software shall recognize when the person is happy
- The software shall recognize when the person is sad
- The software shall recognize when the person is in pain
Companion robot requirements
- When the person is happy, the robot shall act correspondingly.
- When the person is sad, the robot shall cheer them up.
- When the person is in pain for 15 seconds, the robot shall ask the person whether it should call an emergency number for help.
- When the user asks the robot for help, the robot shall initiate contact with an emergency service within 5 seconds.
- When the robot perceives a pain situation and gets no response, it shall initiate contact with an emergency service within 5 seconds.
Preferences
- The software can recognize other emotions as well.
Constraints
- Facial expressions differ from person to person.
- It’s hard to achieve a very high accuracy.
- The software needs to be built within 6 weeks
How do facial expressions convey emotions?
Introduction
Emotions are one of the main aspects of non-verbal communication in human life. They are reflections of what a person thinks and/or feels about a specific experience they had in the past or are having at the present time. Therefore, they give clues about how to act towards a person, which eases and strengthens the communication between two people. One of the main ways to understand someone’s emotions is through their facial expressions, because humans, either consciously or unconsciously, express them through the movements of the muscles of their faces. For the goal of our project, we want to use emotions as a tool to strengthen human-robot interaction and make our user feel safe. Although it is obvious that interacting with a robot can never replace interacting with another person, we believe that the effect of this drawback can be minimized by the correct use of emotions in human-robot interaction. Although there are many different types of emotions, for the sake of simplicity and the focus of our care robot proposal, we will evaluate three main emotional situations, namely happiness, sadness, and pain, through the features of the human face.
Happiness
[[File:Happyy.png]] A smile is the greatest reflector of happiness on the human face. The following changes on the human face describe how a person smiles and shows happiness:
- Cheeks are raised
- Teeth are mostly exposed
- Corners of the lips are drawn back and up
- The muscles around the eyes are tightened
- “Crow’s feet” wrinkles occur near the outside of the eyes.
Sadness
[[File:Sad.png]] Sadness is the hardest facial expression to identify. The face takes on a frown-like look, and the following are clues for a sad face:
- Lower lip pops out
- Corners of the mouth are pulled down
- Jaw comes up
- Inner corners of the eyebrows are drawn in and up
- Mouth is most likely to be closed
Pain
[[File:Pain.png]] The facial expression of pain is not hard to identify, and the following describe the facial changes in a pain situation:
- Eyebrows are lowered
- Eyelids are tightened
- Cheeks are raised
- Mouth is either wide open or clenched so that only the teeth are exposed
- Eyes are likely to be closed
Our robot proposal
Our robot is a companion robot that will help elderly people live independently. It will have a humanoid shape and a screen attached to it that allows the robot to display pictures and videos to the elderly person. It will also be able to move from one place to another. <br\>
Our robot's shape
<br\> <br\> <br\> (make a drawing of how it can look) <br\> <br\> <br\> <br\> <br\> <br\> <br\>
The assumptions on robot's abilities
- Facial recognition: The robot has to recognize the face of the elderly person <br\>
- Speaking: The robot has to verbally communicate with the elderly. <br\>
- Movement: The robot has to be able to move. <br\>
- Calling: The robot has to be able to connect the elderly with other people, both by alarming and creating a direct connection between the client and a relevant person. <br\>
For calling, we assume the communication with caregivers or emergency contacts will be processed with “pre-prepared” texts, similar to speaking with a human. The communication functionality will be provided to our robot by systems embedded in it, and our robot will be trained to call a contact properly when needed.
Different scenarios for different emotions
Scenarios for pain
Scenario 1 - light pain<br\> Light pains may come in different forms. It could be a sudden pain, for example from bumping one's head, or a more chronic discomfort like a chronic pain in the hip. Chronic pains in particular are easy for caregivers to miss, but may have a big influence on the life of the client. Communicating with the client may be a way to find out what the pain is. However, a client might be dismissive towards the robot about the pain. Therefore the robot will also report these kinds of pain to the caregivers. Caregivers might be able to find the pain and a solution for the effect it has on daily life. <br\> -Robot detects the elderly person's face, analyzes it, and classifies it as low-degree pain. <br\> -Robot opens a conversation on the topic of the pain <br\> -Robot classifies the pain and reports it to the caregiver, or<br\> -Robot cannot classify the pain through conversation and reports an unknown pain to the caregivers <br\> <br\> Scenario 2 - heavy pain<br\> Pain can be sudden and intense. If a client is seriously wounded by falling or due to illness, pain is expressed differently; signs to consider are screams or a call for help. These kinds of pain require direct attention by a professional, and the caregivers will be alarmed. <br\> -Robot detects the elderly person's face, analyzes it, and classifies it as high-degree pain. <br\> -Robot tries to seek contact with the client. <br\> -Unless the client affirms not needing help, a caregiver will be alarmed. <br\> <br\>
Depending on the degree of pain, the robot can react in different ways. If the degree of expressed pain is a low one, the robot might ask the elderly person whether they want to look something up on the internet or whether it should make an appointment with a doctor. If the expressed pain is severe, the robot needs to ask the elderly person whether it should request help by calling someone or an emergency number. <br\> An example scenario of that: <br\> -Robot approaches the elderly person and asks if they want to make an appointment with the doctor. <br\> -Elderly person responds with a yes <br\> -Robot looks up available times and informs the elderly person both verbally and by displaying the date/time on the screen. <br\> -The elderly person chooses which time is most convenient and tells the robot <br\> -The robot confirms the appointment <br\> -The robot advises the elderly person to have some rest <br\> <br\>
Scenarios for sadness
Scenario 1 There are many things the robot can do. One thing it can do is start a conversation and listen. When life gets overwhelming, it helps to have someone willing to listen to your problems, so the robot needs to allow the elderly person to air out their problems. At this point, the robot needs to take the information given by the person and try to categorize the reasons behind the person's sadness (losing someone, missing someone, feeling lonely, …). This will help the robot know how to react, making sure it does not make the situation worse. Another thing it can do is give a hug: hugging someone relieves stress and can make another person feel a lot better, and it has been shown scientifically that oxytocin, a chemical that is a natural stress reliever, is released in the brain while hugging. Another option is calling someone who can make the elderly person feel better, which can be done if the robot does not succeed in cheering up the person. <br\> An example scenario of that:<br\> -Robot detects the elderly person's face, analyzes it, and classifies it as low-degree sadness. <br\> -Robot approaches the elderly person to start a conversation and asks what's wrong <br\> -Elderly person responds with the reason for their sadness: “missing someone” <br\> -The robot asks the elderly person if they want it to call that person. <br\> -Elderly person responds that the person died. <br\> -The robot responds “I'm sorry to hear that” and proposes a hug. <br\> <br\>
Scenario 2<br\> The world of the elderly can be very small, especially if dementia plays a role. People start to get lost in the world with nothing to hold on to. This results in an apathetic sadness. The robot should recognize this and try to bring the person back. With dementia this is often achieved by raising memories of the past. Showing pictures of their youth, former homes, and family members helps the client come back to the world and relive happy memories. While they are activated again, the robot can also start to play a game that keeps the brain fit; this is considered healthy and good exercise for people with dementia. For example: <br\> -The client seems distant and is not eating, drinking, or dressing. They seem sad and do not react to input.<br\> -The robot plays music from back in the day to activate the person.<br\> -When the robot knows it has the client's attention, it may show pictures of the old home they grew up in.<br\> -If the client becomes active enough, they play a memory game to train the brain and keep them from going back into the apathetic state of mind. <br\> <br\>
Scenarios for happiness
When the elderly person is happy, it is not necessary for the robot to take immediate action. Nonetheless, it should try to keep the elderly person happy as long as possible; in other words, it can maximize the happiness of the elderly person. A person is arguably more likely to maximize their happiness when they find someone to talk to about what made them happy (this claim still needs to be backed by official studies). So the robot should start a conversation and try to talk to the elderly person about it. Another thing it can do is act happy as well, by using some facial expression (e.g., a smiling face), or by using its capability of movement and asking the elderly person to dance with it. <br\> An example scenario of that: <br\> -Robot detects the elderly person's face, analyzes it, and classifies it as happy. <br\> -Robot approaches the elderly person with a smiling face <br\> -Robot starts a conversation and says “What a beautiful day, how are you doing?” <br\> -Elderly person responds “I'm so happy, (reason why they're happy)” <br\> -The robot responds “What good news, let's celebrate, come dance with me” <br\> -The robot puts on the elderly person's favorite playlist and starts dancing. <br\> <br\> The activity diagram below presents how the robot should react to different emotions expressed by the elderly person. (to be updated)<br\>
<br\>
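The reactions sketched in the three scenario sections above can be summarized as a dispatch on the recognized emotion. The following is a hypothetical sketch; the function name and the action strings are illustrative, not part of the actual robot software.

```python
# Hypothetical sketch of the robot's top-level reaction logic:
# map a recognized emotion (plus an optional pain level) to a high-level action.
def react(emotion, pain_level=None):
    """Return the robot's high-level reaction to a recognized emotion."""
    if emotion == "pain":
        if pain_level == "high":
            # Scenario 2 - heavy pain: alarm a caregiver unless help is declined
            return "seek contact; alarm caregiver unless help is declined"
        # Scenario 1 - light pain: talk about it and report it
        return "open conversation about the pain; report it to the caregiver"
    if emotion == "sad":
        # Sadness scenarios: conversation, a hug, or calling a contact
        return "start a conversation, offer a hug, or call a contact"
    if emotion == "happy":
        # Happiness scenario: join in and prolong the good mood
        return "join in: smile, talk about the good news, play music"
    return "no immediate action"  # e.g. a neutral expression

print(react("pain", pain_level="high"))
```

The mapping stays deliberately coarse: the conversational details of each scenario would live below this dispatch layer.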
Natural Language processing of the robot:
Artificial intelligence is developing day by day, and Natural Language Processing (NLP) is one of its greatest subfields. Natural Language Processing is a capability that lets devices understand human language, and thus provides communication between humans and devices that use artificial intelligence. Two of the best-known example technologies are Apple's Siri and Amazon's Alexa. Both are voice-activated helpers that can process input coming from a human in a human language and give a corresponding output. In addition to these technologies, there exist a few humanoid robots that are able to perform NLP and speak. Sophia is one example of that kind of robot. Sophia is able to speak, but her speech is not like a human's: she recognizes speech, classifies it according to her algorithms, and answers back with mostly scripted answers or information from the internet rather than reasoning. We want our robot to behave similarly to Sophia: we assume that it can recognize human speech and answer back by selecting its answer from a pool of our “pre-prepared” answers according to the different emotion recognition scenarios.
Ensure safety of the elderly
Hugging can provide great comfort to a person and builds a trusted relationship between two people. Therefore we want our robot to be able to give this physical affection. The elderly are often fragile: they bruise easily, their bones are brittle, and their joints hurt. Some will not be able to give hugs, but many will be able to give and receive some sort of physical affection. However, entrusting these fragile bodies to a robot raises immediate concerns about safety. Therefore we have looked at state-of-the-art robots that are built to give hugs. We look at two systems that provide a non-human hug. The first one is the HaptiHug, a kind of belt that simulates a hug and is used in online spaces like Second Life. It is not a robot per se, but its psychological effect is very promising and suggests that an artificial hug could go a long way. <br\> <br\> Next is HuggieBot, currently in development, an actual robot that gives hugs. Just as with the HaptiHug, HuggieBot's main goal is to mimic the psychological effect of a hug. It uses a soft chest and arms that are also heated to mimic humans. Since this robot uses actual robotic arms, it proves that a robot hug can be safe for an average human. As mentioned before, our target group is especially vulnerable to physical damage, but the existence of robots with a sufficiently sophisticated haptic feedback system gives us confidence that this is possible to apply in our robot.
Companionship and Human-Robot relations
Relationships are complicated social constructs. Present in some form in almost all forms of life, relationships come in many shapes and levels of complexity. We are aiming for a human-robot relation that reaches a level of complexity between an animal companion and a human friend. Even the most superficial form of companionship can battle loneliness, a common and serious problem among the elderly.
Relationships are complex, but they can be generalized into two pillars on which they are built: the first one is trust and the second is depth. While creating the behaviour of the robot, we can test our solutions by asking the questions: "Is this building or compromising trust?" and "Does this action deepen the relationship?". The answers to these questions will be a guideline towards a companion robot. First, let us talk about trust.
Trust
Trust can be generalized for human-robot relations and defined as follows: "If the robot reacts and behaves as a human expects it to react and behave, the robot is considered trustworthy". To abide by this, we as designers need to know two things: what is expected of the robot on first impression, and which behaviour is considered normal according to these expectations. To answer the first question we would need an extensive design cycle and user tests, something we do not have the ability to do, considering the main users and the times we live in now. Therefore we might need to assume that our robot has a chosen aesthetic and calls on a predetermined set of first impressions and expectations from the user. Doing that, we can create a set of expected behavior.
First impressions are important. The elderly are often uncomfortable with new technology; they should be eased into it with an easy-to-read, trustworthy robot. This discomfort poses another challenge: the elderly might not know what they can expect from a robot. Logical behaviour that we expect to be normal, and therefore trustworthy, might be unexpected and untrustworthy for an elderly person who does not know the abilities a modern robot has. To gain this knowledge we might again need a user test, which might be impossible during the current pandemic.
To conclude, we need to design a robot that reacts and behaves in the way that our user group, the lonely elderly, expects it to behave and react.
Depth
Relations are not binary, but rather are built slowly over time. In human relationships, people get to know one another: while building trust they also start to share interests and experiences. Of course, this is very limited in our robot. Our main goal is to battle loneliness, not to make a full-fledged friend. However, a robot can learn about its human, and emotion recognition will play a big part here. If the robot starts to learn when and how it should respond, based on the emotion before and after its reaction, it can create a more accurate image of the needs and wants of the human. It would "get to know" the human. This creates opportunities for more complex behavior and therefore an evolving relationship.
Facial emotion recognition software
The use of convolutional Neural Network:
In our project, we will create a convolutional neural network that will allow our software to distinguish between three main emotions (pain, happiness, and sadness, and maybe more) by analyzing the facial expression.
What is a Convolutional Neural Network? <br\>
Convolutional neural networks are neural networks that are sensitive to spatial information. They are capable of recognizing complex shapes and patterns in an image, and they are one of the main categories of neural networks for image recognition and image classification. <br\>
How does it work? <br\>
A CNN image classifier takes an input image, processes it, and classifies it into certain categories (e.g. sad, happy, pain). The computer sees the input image as an array of pixels; based on the image resolution, it sees h x w x d (height, width, dimension).
Technically, deep-learning CNN models need to be trained and tested. Training is done by providing the CNN with the data we want to analyze: each input image passes through a series of convolution layers with filters, pooling layers, and fully connected layers, and a softmax function is applied to classify the object, giving each class a percentage accuracy value. <br\> <br\>
<br\> <br\> Different layers and their purposes: <br\>
- Convolution layer: <br\>
The first layer, which extracts features from the input image. It preserves the relationship between pixels by learning image features over small squares of the input data. Mathematically, it is an operation that convolves the image matrix (h x w x d) with a filter (fh x fw x d) and outputs a volume of dimension (h – fh + 1) x (w – fw + 1) x 1 per filter <br\>
<br\>Convolving an image with different filters can blur and sharpen it, and perform other operations such as edge enhancement and embossing.
- Pooling layer: <br\>
Reduces the number of parameters when the images are too large. Max pooling takes the largest element from each window of the rectified feature map.
- Fully connected layer: <br\>
We flatten our matrix into a vector and feed it into a fully connected layer, as in a regular neural network. Finally, we apply an activation function to classify the outputs.
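As a concrete check of the layer arithmetic above, here is a small sketch. The 100x100 grayscale input, 3x3 filter, 2x2 pooling window, and 32 filters are illustrative values chosen to match the numbers used elsewhere on this page:

```python
# Shape bookkeeping for the three layer types described above.
h, w, d = 100, 100, 1          # input image: height x width x depth (grayscale)
fh, fw = 3, 3                  # an illustrative 3x3 convolution filter

# Convolution (no padding, stride 1): output is (h - fh + 1) x (w - fw + 1) per filter
conv_h, conv_w = h - fh + 1, w - fw + 1
print((conv_h, conv_w))        # (98, 98)

# 2x2 max pooling halves both spatial dimensions
pool_h, pool_w = conv_h // 2, conv_w // 2
print((pool_h, pool_w))        # (49, 49)

# Flattening for the fully connected layer: one long vector per image
n_filters = 32
flat = pool_h * pool_w * n_filters
print(flat)                    # 76832
```

This kind of quick calculation is useful for sizing the dense layers that follow the flattening step.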
About the software
We will build a CNN which is capable of classifying three different facial emotions (happiness, sadness, anger), and try to maximize its accuracy.
We’ll use Python 3.6 as the programming language.
The CNN for detecting facial emotion will be built using Keras.
To detect the faces in the images, we will use the LBP Cascade Classifier for frontal face detection; it is available in OpenCV.
Dataset: <br\>
We used the fer2013 dataset from Kaggle, extended with the CK+ dataset. The data from fer2013 consists of 48x48-pixel grayscale images of faces, while the data from CK+ consists of 100x100-pixel grayscale images of faces, so we merged the two, making sure all the pictures are 100x100 pixels. The faces have been automatically registered, so that each face is more or less centered and occupies about the same amount of space in each image. The task is to categorize each face, based on the emotion shown in the facial expression, into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). We will only use Angry, Happy, Sad, and Neutral.
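To illustrate the merge, here is a dependency-light sketch that upscales a 48x48 fer2013-style face to 100x100 before stacking it with a CK+-style face. Nearest-neighbour resizing stands in for whatever interpolation the real pipeline uses, and the random arrays stand in for actual images:

```python
import numpy as np


def resize_nn(img, size):
    """Nearest-neighbour resize of a 2-D grayscale image to (size, size).

    A dependency-free stand-in for cv2.resize, used here to bring the
    48x48 fer2013 faces up to the 100x100 of CK+ before merging.
    """
    h, w = img.shape
    rows = np.arange(size) * h // size  # map each output row to a source row
    cols = np.arange(size) * w // size  # map each output column to a source column
    return img[rows[:, None], cols]


# Stand-ins for one face from each dataset
fer_face = np.random.randint(0, 256, (48, 48), dtype=np.uint8)
ck_face = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

# Merge: everything ends up 100x100, so the arrays can be stacked
merged = np.stack([resize_nn(fer_face, 100), ck_face])
print(merged.shape)  # (2, 100, 100)
```

With all images at a uniform 100x100, the merged array can be fed directly to the network after normalization.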
Program Modules:
dataInterface.py | process the images from dataset |
faceDetectorInterface.py | detect faces on the images using LBP Cascade Classifier |
emotionRecNetwork.py | create and train the CNN |
cnnInterface | allow applications to use the network |
applicationInterface | emotion recognition using webcam or screen images |
masterControl | user interface |
The Neural Network:
The neural network consists of three convolutional layers: 32 filters in the first layer, 64 filters in the second, and 128 filters in the third, all with ReLU activation. To improve the speed of the network, max pooling was applied after every convolutional layer.
The neural network also contains two dense layers: 20 neurons in the first layer with ReLU activation, and 8 neurons in the second layer with softmax activation.
Batch normalization was also applied before every layer.
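A sketch of the network described above, written here with `tensorflow.keras` (layer sizes are taken from the text; the 3x3 kernel size, the 100x100x1 input shape, and the exact layer ordering are our assumptions):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(100, 100, 1)),             # 100x100 grayscale faces
    layers.BatchNormalization(),
    layers.Conv2D(32, (3, 3), activation="relu"),  # first conv block
    layers.MaxPooling2D((2, 2)),
    layers.BatchNormalization(),
    layers.Conv2D(64, (3, 3), activation="relu"),  # second conv block
    layers.MaxPooling2D((2, 2)),
    layers.BatchNormalization(),
    layers.Conv2D(128, (3, 3), activation="relu"), # third conv block
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.BatchNormalization(),
    layers.Dense(20, activation="relu"),
    layers.Dense(8, activation="softmax"),         # 8-way output, as described
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training then only needs `model.fit` on the preprocessed 100x100 face images with one-hot encoded labels.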
What's done: <br\>
week 5: <br\>
Built an emotion classification network using a VGG architecture; ready for training. <br/>
Trained on the dataset (training took around 8 hours); reached an accuracy of almost 60%. <br/>
Programmed test.py, which detects a facial expression via the webcam and displays the emotion expressed. It works well for neutral, happy, angry, and sad, but less accurately for pain (because we could not find a large training dataset for pain facial expressions). <br/>
week 6: <br\>
Made the dataset bigger by merging fer2013 and CK+ <br/>
Replaced pain recognition with anger recognition, because the lack of pain datasets led to very low accuracy for pain recognition.
Made some changes to the model and trained it on the new dataset. Reached an accuracy of 98.2%.
Changed the application interface so it can recognize facial expressions both via the webcam and from images that appear on the screen.
Testing: works well with both the webcam and some random pictures of elderly people from the internet.
Papers
- Blechar, Ł., & Zalewska, P. (2019). The role of robots in the improving work of nurses. Pielegniarstwo XXI wieku / Nursing in the 21st Century, 18(3), 174–182. https://doi.org/10.2478/pielxxiw-2019-0026
Nurses cannot be replaced by robots; their tasks are more complex than just the routine tasks they perform. However, the enormous shortage of nurses puts pressure on them, which also inhibits them from giving this human side of care. Therefore, robots cannot replace nurses, but they can relieve them of routine care and create more time for empathic, more human care.
- Shibata, T., & Wada, K. (2011). Robot Therapy: A New Approach for Mental Healthcare of the Elderly A Mini-Review. Gerontology, 57(4), 378–386. https://doi.org/10.1159/000319015
The elderly react positively to a robot companion (animal) when it reacts as they would expect. The more knowledge people have about the animal the robot is mimicking, the more critical they are of its performance.
- Shibata, T. (2004). An overview of human interactive robots for psychological enrichment. Proceedings of the IEEE, 92(11), 1749–1758. https://doi.org/10.1109/jproc.2004.835383
Humans learn about the behavior of the robot and this changes the relationship. If a robot can learn about the behavior of the human the relation may deepen even more since the relation is no longer one sided. Intelligence and learning capabilities are therefore important in a care robot.
- Xiao, W., Li, M., Chen, M., & Barnawi, A. (2020). Deep interaction: Wearable robot-assisted emotion communication for enhancing perception and expression ability of children with Autism Spectrum Disorders. Future Generation Computer Systems, 108, 709–716. https://doi.org/10.1016/j.future.2020.03.022
Inability to recognize emotion is a serious problem for autistic children. A system to recognize emotions was built, incorporating visual and audio cues. Emotion recognition can be improved if audio cues are also considered.
- Pepito, J. A., & Locsin, R. (2019). Can nurses remain relevant in a technologically advanced future? International Journal of Nursing Sciences, 6(1), 106–110. https://doi.org/10.1016/j.ijnss.2018.09.013
Nurses should be more involved in the development of care robots and other care technology. Nurses can oversee, use and apply the right technology for each specific patient. Seeing nurses as a main user should influence technology.
- Tarnowski, P., Kołodziej, M., Majkowski, A., & Rak, R. J. (2017). Emotion recognition using facial expressions. Procedia Computer Science, 108, 1175–1184. doi: 10.1016/j.procs.2017.05.025
Abstract: The article presents the results of recognizing seven emotional states (neutral, joy, sadness, surprise, anger, fear, disgust) based on facial expressions. Coefficients describing elements of facial expressions, registered for six subjects, were used as features.
- Jonathan, Lim, A. P., Paoline, Kusuma, G. P., & Zahra, A. (2018). Facial Emotion Recognition Using Computer Vision. 2018 Indonesian Association for Pattern Recognition International Conference (INAPR). doi: 10.1109/inapr.2018.8626999
Abstract: This paper examines how human emotion, which is often expressed by facial expression, can be recognized using computer vision.
- Singh, D. (2012). Human Emotion Recognition System. International Journal of Image, Graphics and Signal Processing, 4(8), 50–56. doi: 10.5815/ijigsp.2012.08.07
Abstract: This paper discusses the application of feature extraction of facial expressions with combination of neural network for the recognition of different facial emotions (happy, sad, angry, fear, surprised, neutral etc..)
- Meder, C., Iacono, L. L., & Guillen, S. S. (2018). Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines. World Academy of Science, Engineering and Technology International Journal of Mechanical and Mechatronics Engineering. doi: 10.1999/1307-6892/10009027
Abstract:In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, as well as how it performs in contrast to other state-of-the-art approaches with both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions
- Draper, H., & Sorell, T. (2016). Ethical values and social care robots for older people: an international qualitative study. Ethics and Information Technology, 19(1), 49–68. doi: 10.1007/s10676-016-9413-1
This article focuses on the values care robots for older people need to have. Many participants from different countries, including old people, informal carers, and formal carers for old people, were given different scenarios of how robots should be used. Based on that discussion, a set of values was derived and prioritized.
- Birmingham, E., Svärd, J., Kanan, C., & Fischer, H. (2018). Exploring emotional expression recognition in aging adults using the Moving Window Technique. Plos One, 13(10). doi: 10.1371/journal.pone.0205341
This article studies the facial expressions for different age groups. Moving Window Technique (MWT) was used to identify the different facial expression between old adults and young adults.
- Ko, B. (2018). A Brief Review of Facial Emotion Recognition Based on Visual Information. Sensors, 18(2), 401. doi: 10.3390/s18020401
In this paper, Conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames.
- Tan, L., Zhang, K., Wang, K., Zeng, X., Peng, X., & Qiao, Y. (2017). Group emotion recognition with individual facial emotion CNNs and global image based CNNs. Proceedings of the 19th ACM International Conference on Multimodal Interaction - ICMI 2017. doi: 10.1145/3136755.3143008
This paper focuses on classifying an image into one of the group emotion such as positive, neutral or negative. The approach used is based on Convolutional Neural Networks (CNNs)
- Levi, G., & Hassner, T. (2015). Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns. Proceedings of the 2015 ACM on International Conference on Multimodal Interaction - ICMI 15. doi: 10.1145/2818346.2830587
This paper presents a novel method for classifying emotions from static facial images. The approach leverages the success of Convolutional Neural Networks (CNNs) on face recognition problems.
- https://www.researchgate.net/publication/221011940_HaptiHug_A_Novel_Haptic_Display_for_Communication_of_Hug_over_a_Distance (NOT YET APA)
Weekly contribution
Week 1
Name | Tasks | Total hours |
---|---|---|
Cahitcan | Research(3h), Problem statement(1.5h), studying papers(2h),brainstorming ideas with the group(2h), help on planning (0.5h) | 9 hours |
Zakaria | Introduction lecture(2h) - brainstorming ideas with the group(2h) - research about the topic(2h)- planning(0.5h) - Users and their needs(0.5h)- study scientific papers (3h) | 10 hours |
Lex | Introduction and brainstorm (4h) - Research (3h) - Approach (1.5h) - planning (0.5h) | 9 hours |
Week 2
Name | Tasks | Total hours |
---|---|---|
Cahitcan | meeting (1.5h) - RPCs (1.5h) - state of the art (2h) - USE analysis (1h) | 6 hours |
Zakaria | meeting(1.5h)- USE analysis(3h)-RPCs(2.5h)- robots that already exist(2h) | 9 hours |
Lex | meeting (1.5h) - writing the interview (3h) - distributing the interview (1h) | 5.5 hours |
Week 3
Name | Tasks | Total hours |
---|---|---|
Cahitcan | meeting (1h) - research on how facial expressions convey emotions (4h) - finding&reading papers(3.5h) | 8.5 hours |
Zakaria | How the robot should react to different emotions (3h)- meeting (1h)- research about CNN(5h) | 9 hours |
Lex | Meeting (1h) - research on trust and companionship (7h) - processing interview (1h) | 9 hours |
Week 4
Name | Tasks | Total hours |
---|---|---|
Cahitcan | analysis of how facial expressions convey emotions (3.5h), meeting (30 min), reflection on groupmates' work (1.5h), research (1.5h) | 7h |
Zakaria | The use of convolutional Neural Network (4h), work on building up the software (download tools, make specifications, start programming, select a dataset)(4.5h) , Our Robot part (4h) , programming(faceDetectorInterface.py) (3h) , meeting (30min) | 16h |
Lex | Meeting (0.5h), Writing about relations and companionship(3h), reflecting current scenarios with prof (2h), editing, writing and checking scenarios with professionals (5h) | 10.5h |
Week 5
Name | Tasks | Total hours |
---|---|---|
Cahitcan | - | - |
Zakaria | meeting(30min) - programming(built emotion classification CNN)(5 hours) - trained the CNN (1h) - programmed test.py for facial emotion recognition via the webcam (3h) | 9.5 hours |
Lex | meeting (0,5h) - translated and generalized interview answers (2h) - Research robotic hugs (3h) - Research calling features (1.5h) | 7 hours |
Week 6
Name | Tasks | Total hours |
---|---|---|
Cahitcan | - | - |
Zakaria | - | - |
Lex | - | - |