<font size = '6'>Research on Idle Movements for Robots</font>


== Group Members ==


== '''Abstract''' ==
It is likely that robots will be part of society in the future. This study focuses on improving human-robot interaction by researching idle movements. First, the idle movements of humans are determined through a literature search and an experiment, covering three categories: ‘casual conversation’, ‘waiting / in rest’ and ‘serious conversation’. The idle movements found for these categories are then tested using a NAO robot. Participants filled in the Godspeed questionnaire before and after a conversation with a NAO robot that performed idle movements. In addition, recorded conversations with a NAO robot performing different kinds of idle movements were put online; the participants of this online test filled in the Godspeed questionnaire before and after watching a conversation.

Results showed that the different types of conversations come with different idle movements, resulting in different combinations for the conversation experiments. The two experiments with the NAO robot show two completely different outcomes. The experiment in which participants held a conversation with the NAO robot shows that the combinations of a ''casual conversation with serious movements'' and a ''serious conversation with casual movements'' scored best in most categories of the Godspeed questionnaire. The online experiment showed the expected result: ''casual conversation with casual movements'' peaked together with ''serious conversation with serious movements''. Overall, the use of movements positively stimulates the participants (and, thus, the speakers). The best set of movements seems to depend on whether the speaker is directly in front of the robot or sees the robot on a screen. When people see the robot in real life, the best combinations are the casual conversation with serious movements and the serious conversation with casual movements; when people see the robot on a screen, the best combinations are the casual conversation with casual movements and the serious conversation with serious movements.
== '''Introduction''' ==
 
=== '''Problem statement''' ===
The future will contain robots that interact with humans in a socially intelligent way. [[File:Nao_robot.png|right|thumb|300px|Figure 1: Nao Robot]] Robots will then demonstrate humanlike social intelligence, and non-experts will no longer be able to distinguish robots from human agents. To accomplish this, robots need to develop much further: not only their social intelligence but also their movement. Nowadays, robots do not move the way humans do. For instance, when moving an arm to grab something, humans tend to overshoot a bit<ref>Yamauchi, M., Imanaka, K., Nakayama, M., & Nishizawa, S. (2004). Lateral difference and interhemispheric transfer on arm-positioning movement between right and left-handers. Perceptual and motor skills, 98(3_suppl), 1199-1209.</ref>. A robot specifies the target and moves toward the object along the shortest possible path, whereas humans take the path of least resistance. This means they also use their surroundings to reach their target, for instance leaning on a table to counteract gravity. Furthermore, humans use their joints more than robots do.
 
This lack of natural movement creates a big problem for a robot's idle movements. An idle movement is a movement made during rest or waiting (e.g. touching your face). For humans, and every other living creature, it is physically impossible to stand precisely still. Robots, however, stand completely lifeless when not in action, creating a static (and therefore unnatural) environment during the interaction between the robot and the corresponding person. It is then unclear whether the robot is turned on and can respond, and it can also create an uncomfortable scenario for the human<ref>Homes, S., Nugent, B. C., & Augusto, J. C. (2006). Human-robot user studies in eldercare: Lessons learned. Smart homes and beyond: Icost, 2006, 4th.</ref>. The latter paper suggests that social ability (which also includes movement) in a robotic interface contributes to feeling comfortable talking to it. In addition, humans are usually doing or holding something while having a conversation.
To improve the interaction between humans and robots, the questions that have to be answered are: "Which idle movements are considered the most appropriate in human-human interaction?" and "Which idle movements are considered the most appropriate in human-robot interaction?". In this research, these questions are addressed by observing human idle movements, testing robot idle movements, and researching which movements participants prefer.
 
=== '''Objectives''' ===
It is still very hard to make a robot appear natural and humanlike, as robots tend to look static (whether they move or not). As a more natural human-robot interaction is wanted, the behavior of social robots needs to be improved on different levels. In this project, the main focus is on the movements that make a robot appear more natural and lifelike during idling, called 'idle movements'. The objective of the research is to find out which set of idle movements makes a robot appear more natural. The research builds on previous work on this subject (explained in the chapter 'State of the art'). More information is gathered by observing people in videos; these videos differ per category, as explained in the chapter 'Approach'. These observations lead to a conclusion on what the best idle movements are for certain types of conversations.
 
Next to this, experiments will be performed with the NAO robot, which can be seen in figure 1. This robot has many human characteristics, such as human proportions and facial properties; the only big difference is its height of just 57 centimeters. The NAO robot will perform different idle movements, and future users will give their responses to these movements. The acquired responses are expected to lead to a conclusion on what the best idle movements are, which will hopefully make the robot appear more lifelike. Since humanoid robots will keep improving, expectations on the most important idle movements for future robots will also be given. Altogether, it is hoped that this information gives greater insight into the use of idle movements on humanoid robots, to be used in future research and projects on this subject.
 
== '''USE''' ==
 
When taking a look at the stakeholders of social robots, many groups can be taken into account, because it is uncertain to what extent robots will be used in the future. Focusing on all possible users in this project would be impossible, and 'designing for everyone is designing for no-one'. A selection has been made of users who will likely be the first to benefit from, or have to deal with, social robots. This selection is based on research collected from other papers<ref name=Torta>Torta, E. (2014). Approaching independent living with robots. https://pure.tue.nl/ws/portalfiles/portal/3924729/766648.pdf</ref>, in which these groups were highlighted and their opinions on robot behavior were researched.
Research papers on social robots mainly focus on future applications in elderly care, making elderly people the main subject. Another group of users who will reap the benefits of social robots are physically or mentally disabled people.
 
=== '''Users''' ===
The primary users are:
*'''elderly'''
 
It is possible that in the future socially assistive robots will be implemented in care homes, to provide care and companionship to elderly people. The elderly are the primary users, as they will have the most contact with the robots. For those in need of care, it is essential that these social robots are as humanlike as possible, as this helps them accept the presence of a social robot in their private environment. One key element of making a robot appear lifelike is the presence of the idle motions studied here.
 
The nurses and doctors who care for these people are secondary users of socially assistive robots. They may feel threatened by social robots, as the robots might take over their jobs, but they can also be relieved that the robot will assist them<ref>Rantanen, T., Lehto, P., Vuorinen, P., & Coco, K. (2018). The adoption of care robots in home care—A survey on the attitudes of Finnish home care personnel. Journal of clinical nursing, 27(9-10), 1846-1859.</ref>.
 
*'''physically disabled persons'''
Another field of application for socially assistive robots is helping physically disabled persons. Post-stroke care is a possible application, since stroke is a major cause of severe disability in the growing aging society. 'Motivation is recognized as the most significant challenge in physical rehabilitation and training'<ref name=Tapus>Tapus, A. (2013). The Grand Challenges in Socially Assistive Robotics. https://hal.archives-ouvertes.fr/hal-00770113/document</ref>. The challenge is 'aiming toward establishing a common ground between the human user and the robot that permits a natural, nuanced, and engaging interaction', in which idle motion could play a big role.
 
*'''children'''
Humanoid robots are used in various contexts in child-robot interaction, for example autism therapy. In order to be accepted, the robot should be perceived as socially intelligent. To improve its humanlike abilities and social expression, idle movements should be implemented.
 
 
 
A feedback loop towards the users has been performed; this will be addressed at the end of this research (chapter 'Feedback loop on the users').
 
=== '''Society''' ===
 
Society's main interest is the implementation of socially assistive robots in elderly care. The world population is aging: it is estimated that in the year 2050 there will be three times more people over the age of 85 than there are today, while a shortage of space and nursing staff is already a problem<ref name=Tapus />. The implementation of socially assistive robots could help to overcome this problem, as these robots may assist the caregivers during their workday, reducing their stress and work pressure.
 
=== '''Enterprise''' ===
Companies and manufacturers of socially assistive robots will benefit from this research by implementing it in their products. They want to offer their customers the best social robot they can produce, which in turn has to be as humanlike as possible. It is therefore key to include idle movements in their designs, as this would increase sales, resulting in more profit over time. Some companies already sell social robots, for example the PARO robot. PARO is an advanced interactive robot made by AIST, a Japanese industrial pioneer. It is described as animal therapy administered to patients in elderly care, and results show that it reduces patient and caregiver stress and improves the socialization of the patients.
 
== '''Approach, Milestones and Deliverables''' ==
=== Approach ===
To gain knowledge about idle movements, observations have to be done on humans. However, the idle movements will differ between categories (see experiment A), which requires research beforehand. This research is done through a state-of-the-art literature review. Papers by, among others, R. Cuijpers, E. Torta, and Song, Kim, Jeong, Suk and Kwon contain a lot of information on which idle movements are considered important, so it is important to read them carefully. The state of the art is explained in the chapter 'State of the art', which is positioned at the bottom of this document due to its size. After this research, observations can be done. The ideal observation method depends on the category. Examples of such methods are observing people walking (or standing) around public spaces, such as the campus of the university or the train. For this research, however, videos or live streams on, e.g., YouTube are watched. The noticed idle movements are listed in a table and tallied. The most tallied idle movement is considered the best for that specific category. However, that does not mean it will work for a robot, as the responses of users might be negatively influenced. Experiment B will clarify this.
 
The best idle movements per category are used in experiment B. This experiment is done using the NAO robot and a large number of participants (which would optimally consist of the primary users, see chapter 'Users'). The NAO robot starts a conversation with the participant for a given amount of time; this is done twice. At first, the plan was to let the robot use different idle movements per conversation: ''the NAO robot will talk without using any idle movements and, then, using different idle movements for different types of conversations. The used idle movements will be based on the research as listed above and are in the same order for every participant.'' However, as explained in the chapter 'Experiment B, Setting up experiment B', the setup has changed: the NAO robot has three different motion sets (casual, serious, and none) and performs two conversations with each participant while using one of the motion sets (the combinations of conversations and motion sets are fixed).
 
The participants fill in a Godspeed questionnaire<ref name=godspeed>Bartneck, C., Kulić, D., Croft, E. and Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1(1), 71–81.</ref> before the conversation, after the first conversation, and after the last conversation. This makes it possible to examine how people's perception changes with the robot's behavior. The Godspeed questionnaire covers categories such as animacy and anthropomorphism, answered on a scale from 1 to 5. Using the data of this experiment, a diagram can be made of the responses of the participants to the various idle movements, from which the best idle movement can be determined for each category. The best result may also turn out to be a combination of various idle movements.
 
=== Milestones and Deliverables ===
The milestones and deliverables can be found in the planning.
 
== '''Experiment A''' ==
The first experiment is a human-only experiment. To better understand robot idle movements, humans have to be observed first. People are observed in multiple settings, as mentioned in the Approach, since different settings come with different movements. Listed below is which type of video has been used per category (e.g. YouTube videos or live streams). The videos differ because the idle movements most likely differ between categories. For instance, when people are having a conversation with a friend, possible idle movements are shaking a leg or foot or biting their nails, whereas during a job interview people tend to be a lot more serious and try to focus on acting 'normal'. The different categories for the idle movements, with multiple examples to implement on the robot, are listed below. These examples are based on the eleven motions suggested by Song et al.<ref name=japanpaper>Song, H., Kim, M. J., Jeong, S.-H., Suk, H.-J., & Kwon, D.-S. (2009). Design of idle motions for service robot via video ethnography. In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), pp. 195–199. https://ieeexplore.ieee.org/abstract/document/5326062</ref>: eye stretching, mouth movement, coughing, touching the face, touching hair, looking around, neck movement, arm movement, hand movement, leaning on something, and body stretching. In this research, nothing is done with the speech part of that study; it is good to see, however, that the motions differ per action (listening or speaking) and that the eleven motions can be applied to both.
 
{| class="wikitable" style="border-style: solid; border-width: 1px;" cellpadding="3"
!style="text-align:left;"| Category
!style="text-align:left"| Examples of idle movements
|-
| Casual conversation  || nodding, briefly look away, scratch head, put hands on the hips, scratch shoulder, change head position.
|-
| Waiting / in rest || lightly shake leg/feet up and down or to the side, put fingers against head and scratch, blink eyes (LED on/off), breathe.
|-
| Serious conversation || nodding, folding arms, hand gestures, lift eyebrows slightly, touch the face, touch/hold arm.
|}
The listed eleven motions are used for tallying the idle movements, so that a full comparison can be made between the categories. The type of video used thus depends on the category; for each category, it will be explained which type of video has been used and why. Furthermore, the analysis of each video has been done twice, as this was one of the main problems raised during the presentation.
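To illustrate the tallying step, the following is a minimal sketch in Python (not the actual analysis tooling). The motion vocabulary is the eleven motions listed above; the session notes are hypothetical examples, not real observation data.
<syntaxhighlight lang="python">
from collections import Counter

# The eleven motions from Song et al., used as the tallying vocabulary.
MOTIONS = [
    "eye stretching", "mouth movement", "coughing", "touching the face",
    "touching hair", "looking around", "neck movement", "arm movement",
    "hand movement", "leaning on something", "body stretching",
]

def tally(observed_events):
    """Count how often each of the eleven motions occurs in one video."""
    counts = Counter({motion: 0 for motion in MOTIONS})
    counts.update(event for event in observed_events if event in MOTIONS)
    return counts

# Hypothetical notes from one viewing of a casual-conversation video.
session_notes = ["neck movement", "mouth movement", "neck movement",
                 "arm movement", "touching the face", "neck movement"]
print(tally(session_notes).most_common(3))
</syntaxhighlight>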
 
=== Casual conversation ===
Casual conversation is talk about non-serious topics, which can occur between two strangers but also between two relatives. The topics of those conversations are usually not very serious, resulting in laughter or just calm listening. As in a serious conversation, attention should be paid to the speaker. During conversations, the listeners show such attention towards the speaker, generally by making eye contact. Furthermore, lifelike behavior is important, as it results in a feeling of trust towards the listener. Therefore, it is important for a robot to be able to establish such trust via idle movements.
 
Eye contact for the NAO robot has already been researched, and it is established that it is important<ref>Andrist, S., Tan, X. Z., Gleicher, M., & Mutlu, B. (2014, March). Conversational gaze aversion for humanlike robots. In 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 25-32). IEEE.</ref>. The use of idle movements to gain trust is also considered important. Examples of such idle movements are nodding, briefly looking away, scratching the head, putting the hands on the hips (and changing their position), scratching the shoulder, and changing the head position. These idle movements correspond with what has been found in the idle movement research<ref name=japanpaper />. In casual conversations, nodding can confirm the presence of the listener and show that attention is being paid. Briefly looking away can mimic thought, as people tend to look away and think while talking. Scratching the head mimics the same idea, as it is associated with thinking; moreover, it might give the speaker a feeling of confidence due to the mimicked liveliness. Putting the hands on the hips is generally a gesture of confidence and relaxation; varying this pose and the position of the legs gives the speaker the idea that the robot relaxes, which gives the speaker a feeling of confidence. Scratching the shoulder is a general mimic that makes the robot look alive and confident. The change of head position has the same purpose: it is a movement that generally occurs when thinking and when questioning what has been said. Both reasons give the speaker the idea that the robot is more lifelike. All these examples trace back to the research discussed above, resulting in the stated eleven motions (see the first paragraph of 'Experiment A').
 
After knowing which idle movements to look for, it was important to find a way to tally the motions. Since there are many different ways to casually talk to someone, three different videos were used (all from YouTube): a podcast between four women<ref>Magical and Millennial Episode on Friends Like Us Podcast, Marina Franklin, https://www.youtube.com/watch?v=Vw9M4bVZCSk</ref>, a conversation between a Dutch politician and a Dutch presenter<ref>Een gesprek tussen Johnny en Jesse, GroenLinks, https://www.youtube.com/watch?v=v6dk6eI1qFQ</ref>, and an American talk show<ref>Joaquin Phoenix and Jimmy Fallon Trade Places, The Tonight Show Starring Jimmy Fallon, https://www.youtube.com/watch?v=_plgHxLyCt4</ref>. The three videos were watched in full, and the number of occurrences of each motion was noted. The tallying was done for the listener and for the speaker, so that the differences between the two can also be spotted. The data of both can be seen in the bar graphs in Figures 2 and 3.
 
 
 
[[File:image007.png|left|thumb|550px|Figure 2: Bar chart of the idle motions for listening during a casual conversation]][[File:image005.png|center|thumb|550px|Figure 3: Bar chart of the idle motions for speaking during a casual conversation]]
 
 
In figure 2 it can be seen that neck movements are the most frequent motion, followed by mouth movements, arm movements, and touching the face (respectively). This can be explained by the fact that people tend to nod a lot to show acceptance of and agreement with what has been said; head shaking also counts as such a neck movement. The mouth movement consists of wetting the lips or just saying "yes", "okay" or "mhm", again with the underlying meaning of acceptance and agreement. Both motions are also a way of showing that attention is being paid. The arm movement can be explained by the fact that people scratch or simply feel bored; nonetheless, this movement shows lifelike characteristics. Touching the face follows the same principle as the arm movement, as it can happen due to boredom, an itch, or something else. Next to these four, no motions stood out.
 
In figure 3, the arm movement, eye stretching, hand movement, neck movement, and looking around stood out. The arm and hand movements can be explained by the fact that people tend to explain things via gestures, which are made with the arms and hands. Eye stretching is used to express emotion (e.g. being shocked or surprised). The neck movement is a result of approval or disapproval of what is being told. Looking around is a factor of thinking: when people look away, they are recalling a scenario or considering the best way to tell something. Beyond these, nothing really stood out.
 
To check whether the results are reliable, a chi-squared analysis was performed for both situations using the statistics program SPSS. Since each video was analyzed twice, the test compares the tallies of the two analysis runs: following the interpretation used in this report, a significance higher than 0.05 means the runs do not differ significantly, so the outcomes are not accidental. (A sketch of an equivalent check in code is given below.)
*The significance for the motions of the listener in a casual conversation is 0.218, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.
*The significance for the motions of the speaker in a casual conversation is 0.339, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.
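The tests themselves were run in SPSS; as an illustration only, the sketch below performs an equivalent chi-squared check in Python with SciPy. The counts in the table are hypothetical placeholders, not the real tallies.
<syntaxhighlight lang="python">
from scipy.stats import chi2_contingency

# Hypothetical tally table: rows are the two independent analysis runs of the
# same videos, columns are motions (e.g. neck, mouth, arm, touching face),
# counted per 5 minutes.
observed = [
    [24, 17, 12, 9],   # first analysis run
    [21, 19, 10, 11],  # second analysis run
]

chi2, p, dof, expected = chi2_contingency(observed)
print("chi2 = %.3f, p = %.3f" % (chi2, p))
# Interpretation used in this report: p > 0.05 means the runs do not differ
# significantly, so the tally is considered consistent rather than accidental.
</syntaxhighlight>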
 
=== Waiting / in rest ===
 
People often find themselves waiting or in a rest position, and normally think that they are doing nothing at all in those situations. However, everybody makes small movements when doing nothing; this is one of the things that makes a person or animal look alive. Many different things can be described as idle movements, but this research only looks into a couple of them. When you take a close look at people who are doing nothing, you will realize that they make these movements more often than you think. In this part of the experiment, three different videos were analyzed in which people are doing nothing or are in rest, and the motions listed above were tallied<ref>Doing nothing for 8 hours straight, Rens Cools, https://www.youtube.com/watch?v=sqD4f5MlPTc&t=3061s</ref><ref>Sitting in a Chair for One Hour Doing Nothing, Robin Sealark, https://www.youtube.com/watch?v=jEjBz9FwUpE&t=1221sk</ref><ref>Sitting in a Chair for One Hour Doing Nothing, Povilas Ramanauskas, https://www.youtube.com/watch?v=90lI-mU2o-I</ref>. All results were then put together in a bar chart, showing which idle movements are performed most when waiting or in rest. Figure 4 shows the bar chart of the results.
 
[[File:image003.png|center|thumb|550px|Figure 4: Bar chart for the idle motions during waiting/in rest]]
 
Looking at figure 4, it can be concluded that neck movements are performed the most. This can be explained by the fact that people move their heads while looking at their mobile phones or nod to people walking by. The second most frequent idle movements are looking around and mouth movement. Looking around is normal while waiting, as people want to know what is happening around them, and this movement makes them seem more alive and involved in the environment. The mouth movement is mainly people licking their lips or smiling at people walking by: lip-licking is a preparation for a conversation that might happen in the near future, and smiling shows other people that you are present. The other movement that stood out is the hand movement, which can be explained by people being nervous or anxious when alone.
The movements that are performed most often should probably be implemented in a robot when it is in rest/waiting, to give the robot a more humanlike appearance.
 
To check whether these results are reliable, the same chi-squared analysis was performed in SPSS for this situation.
*The significance for the motions of a person waiting or in rest is 0.166, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.
 
=== Serious conversation  ===
The human idle movements in a serious or formal environment have been studied by observing conversations such as job interviews<ref>Job Interview Good Example copy (2016), Katherine Johnson, https://www.youtube.com/watch?v=OVAMb6Kui6A</ref>, TED talks<ref>How motivation can fix public systems | Abhishek Gopalka (2020), TED, https://www.youtube.com/watch?v=IGJt7QmtUOk</ref> and other relatively formal interactions such as a debate<ref>Climate Change Debate | Kriti Joshi | Opposition, OxfordUnion, https://www.youtube.com/watch?v=Lq0iua0r0KQ</ref> or a parent-teacher conference. Compared with the other categories, such as the casual conversation, there is a big difference to be seen. In a serious conversation, people try to come across as smart or serious, whereas in a casual conversation people do not think about their idle movements as much. The best example of this behavior is the tendency of humans to touch or scratch their face, nose, ear, or any other part of the head: these idle movements are more common in casual conversation, because in a formal interaction people try to avoid coming across as childish or impatient. Idle movements particular to this category are moves like hand gestures that follow the rhythm of speech. When talking, another common idle movement is raising the eyebrows when trying to come across as convincing. When listening, however, a common idle movement is folding the hands, which comes across as smart or expresses an understanding of the subject. The last type of common idle movements, moves like touching the arm or blinking, is also used in casual conversations. Previous research<ref name=japanpaper /> studied idle movements for conversations in general; the same is now done for these categories by counting all idle motions in the videos mentioned above, including the earlier-mentioned motions that stood out without counting. In two of the three videos, only one person is relevant for counting the idle motions: in the video of the job interview, only the interviewer is considered, because this corresponds better with the other videos and because it makes more sense for the robot to be the employer rather than the applicant. However, the interviewer is listening most of the time, while in the other two videos the main characters are speaking, so for a serious conversation a distinction has to be made between speaking and listening. The two bar charts look as follows:
 
 
[[File:image009.png|left|thumb|550px|Figure 5: Bar chart of the idle motions for listening during a serious conversation]][[File:image100.png|center|thumb|550px|Figure 6: Bar chart of the idle motions for speaking during a serious conversation]]
 
 
What can be seen in figure 5 is that neck movements really stand out. This is caused by the fact that people tend to nod a lot during a serious conversation; nodding makes the person come across as understanding or smart. Furthermore, looking around and leaning on something stand out. Looking around can be a sign of confidence and calmness during a serious conversation, showing that the person is comfortable being there. The leaning on something is mostly on a table in the job interview, and makes the listener seem present and fully concentrated on the conversation (which is really helpful in, for example, a job interview). Hand movement is also an important factor: as already said, the hands are folded on the table to show presence as well.
 
Figure 6 shows a high frequency of hand movement. This is caused by the fact that people want to come across as smart or interesting, which is achieved by using gestures while talking. That also explains the peak for arm movements, as they are also part of gestures. Looking around is caused by thinking: while thinking during a serious conversation, people look away to appear concentrated, and looking at someone while thinking might make them uncomfortable. Lastly, body stretching has a relatively high frequency. Body stretching generally shows that someone is comfortable in a conversation, and it also shows that the person is present (if not done too abundantly).
 
To check whether these results are reliable, the same chi-squared analysis was performed in SPSS for both situations.
*The significance for the motions of the listener in a serious conversation is 0.712, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.
*The significance for the motions of the speaker in a serious conversation is 0.577, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.
 
== '''Experiment B1''' ==
 
 
Experiment B1 is divided into different parts. These are:
*The NAO robot
*Determining (NAO-compatible) idle movements from literature and experiment A
*The participants
*Setting up experiment B1
*The script
*The questionnaire
*The results
*Limitations
 
=== The NAO robot ===
The NAO robot (see figure 1) is an autonomous, humanoid robot developed by SoftBank Robotics. It is widely used in education, research, and healthcare. A great example is the work of Elena Torta<ref name=Torta />, from which the idea of this project originates and in which the NAO robot is used as well. In one of her studies, the NAO robot is used with the elderly, which is a great example that includes our user group.
 
The NAO robot is small compared to a human being: at approximately 57 centimeters, it is about one third of a human's height. The robot has 25 degrees of freedom, eleven of which are positioned in the lower part (legs and pelvis) and fourteen in the upper part (trunk, arms, and head)<ref>N.A. How many degrees of freedom does my robot has? SoftBank Robotics. Retrieved from: https://community.ald.softbankrobotics.com/en/resources/faq/using-robot/how-many-degrees-freedom-does-my-robot-has</ref>. This high number of degrees of freedom makes it possible for the robot to look lifelike despite its height. The robot is packed with sensors, for example cameras that can detect a face, position sensors, velocity sensors, acceleration sensors, motor torque sensors, and range sensors<ref>NAO - Technical overview. http://doc.aldebaran.com/2-1/family/robots/index_robots.html</ref>. It can speak multiple languages, including Dutch. As discussed later, our participants were Dutch; the experiment is therefore done in Dutch to get the best results, without a language barrier influencing the quality of the conversation.
 
=== Determining (NAO compatible) idle movements from literature and Experiment A ===
 
Unfortunately, the NAO robot is not able to perform every motion a human can, due to its limited number of links and joints, as already mentioned. The list of human idle movements from experiment A and the literature is:
 
*Eye stretching
*Mouth movement
*Coughing
*Touching face
*Touching hair
*Looking around
*Neck movement
*Arm movement
*Hand movement
*Leaning on something
*Body stretching
 
 
 
All the human idle movements relevant for the research were made in Choregraphe 2.1.4<ref>Choregraphe software. https://community.ald.softbankrobotics.com/en/resources/software/language/en-gb</ref>, to make the robot perform them during the experiment. The idle movements were chosen based on experiment A: the movements with a high frequency in the corresponding graph were selected. Some movements were impossible, as the robot is not able to hold on to objects or lean on something. The following table includes the motions and their usage per conversation type (casual or serious) and action (listening or speaking). 'n.p.' stands for 'not possible' and means that the robot was not able to perform the idle movement.
[[File:Graph_idle.jpg|center|400px|]]
Many human idle movements cannot be tested due to the limitations of the NAO robot, or were not relevant enough for the conversation with the NAO robot. Note that the casual and serious sets differ in one idle movement. This is because the seriousness of the conversation differs: during a serious conversation, people think by looking away, whereas touching the face is a form of comforting for the speaker and shows anthropomorphism.
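The movements themselves were built as Choregraphe behaviors, but as a hedged sketch of what the playback amounts to, the snippet below uses the NAOqi Python SDK directly. The robot address and the animation paths are assumptions (available paths depend on the animation packages installed on the robot), and the random selection mirrors the randomized ordering used in the experiment files.
<syntaxhighlight lang="python">
import random
import time

from naoqi import ALProxy  # NAOqi Python SDK (Python 2.7)

ROBOT_IP, PORT = "192.168.1.10", 9559  # assumption: the robot's address

# Hypothetical motion set: standard-library animation paths; the exact
# paths depend on what is installed on the robot.
CASUAL_SET = [
    "animations/Stand/Gestures/Hey_1",
    "animations/Stand/Waiting/ScratchHead_1",
]

animation_player = ALProxy("ALAnimationPlayer", ROBOT_IP, PORT)

def idle_loop(motion_set, duration_s=120):
    """Play randomly chosen idle animations with random pauses in between."""
    end = time.time() + duration_s
    while time.time() < end:
        animation_player.run(random.choice(motion_set))  # blocks until done
        time.sleep(random.uniform(2.0, 6.0))             # idle pause

idle_loop(CASUAL_SET)  # roughly the length of one two-minute conversation
</syntaxhighlight>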
 
=== The participants ===
As explained in the chapter 'Users', many user groups can be taken into account. Due to limitations, it was decided to take one group of participants for experiment B. 18 participants, ranging from 18 to 22 years old, took part in the experiment. To make sure that no problems would occur, all participants filled in a consent form, giving their consent to use their data for this experiment and for this experiment only. As all participants were volunteers, no compensation was given. Furthermore, it was noted that all their information would be deleted if they mentioned (within 24 hours) that they no longer wanted to be a part of the experiment.
 
=== Setting up experiment B ===
The experiment is done in the robotics lab at the TU/e in week 6. It was initially planned as follows:
*1. The participant fills in a Godspeed questionnaire.
*2. The NAO robot starts a conversation, which includes the robot asking open questions. The participant has to give an elaborate answer to these questions.
*3. During the answer, an idle movement is performed by the NAO robot.
*4. After the answer is given, the NAO robot responds and asks another question.
*5. During the answer, the same idle movement is performed by the NAO robot.
*6. After the conversation, the participant fills in another Godspeed questionnaire.
 
At first, the plan was to pick the three most performed idle movements per category and test them all separately in the different conversations. However, having the same movement over and over again during a conversation seemed odd and unnatural: humans do not do this either, and with a robot it looks even weirder. To make the conversations more natural, the experiment was changed as follows. The four most used idle movements per category (serious and casual) were picked for speaking and listening; some idle movements were not an option due to the overall limitations (see 'Determining (NAO-compatible) idle movements from literature and Experiment A'), leaving six movements per category (three speaking and three listening). A Choregraphe file was made for the serious movements and one for the casual movements, in which these six movements were placed randomly to simulate human behavior. Another file was made with no movements at all, so in total there are three movement files (serious movements, casual movements, and no movements).
 
The goal of this research is to see whether idle movements make a difference in a conversation, and whether different idle movements in different types of conversation make a difference. For this, two different conversations were made for the new setup. Each conversation lasts about two minutes, with the robot asking around eight questions, while one of the three idle movement programs runs. Before and after each conversation, a questionnaire is filled in by the participant. The order is the same for everyone, and as every participant performs two conversations, there are three different versions that a participant can receive:
 
*'''Casual''' conversation with '''casual''' movements and '''serious''' conversation with '''serious''' movements
*'''Casual''' conversation with '''no''' movements and '''serious''' conversation with '''casual''' movements
*'''Casual''' conversation with '''serious''' movements and '''serious''' conversation with '''no''' movements
 
Each participant has two conversations with the NAO robot: first the casual conversation, in our case a chat about the weather and plans for the summer, and then the serious conversation, in our case a job interview. The three idle movement files from Choregraphe are linked to the conversations, giving six different conversations in total (as can be seen above): the serious conversation with the serious idle movements, with the casual movements, and with no movements at all, and the casual conversation with the same three options. To get as much data as possible for every conversation, a total of 18 participants were used.
 
The setup also changed, as the setup listed above does not include the various movements and versions:
*1. The participant fills in a Godspeed questionnaire.
*2. The NAO robot starts a casual conversation, which includes the robot asking open questions. The participant has to give an elaborate answer to these questions. While listening, the robot performs a type of idle movement or none (depending on the participant's version).
*3. After the conversation, the participant fills in another Godspeed questionnaire.
*4. The participant performs another conversation with the NAO robot, this time a serious conversation. The robot performs a set of idle movements or none (depending on the participant's version).
*5. After the conversation, another Godspeed questionnaire is filled in.
 
With 18 participants, every type of conversation with each set of movements was done six times, and 54 questionnaires were filled in.
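As an illustration of this design, the sketch below encodes the three versions from the list above and assigns 18 hypothetical participant IDs round-robin, which yields six participants per version and hence six observations per conversation-movement combination.
<syntaxhighlight lang="python">
# The three fixed versions: pairs of (conversation type, movement set).
VERSIONS = {
    1: [("casual", "casual"), ("serious", "serious")],
    2: [("casual", "no"), ("serious", "casual")],
    3: [("casual", "serious"), ("serious", "no")],
}

participants = range(1, 19)                          # 18 hypothetical IDs
assignment = {p: (p - 1) % 3 + 1 for p in participants}  # round-robin

for version, pairs in VERSIONS.items():
    n = sum(1 for v in assignment.values() if v == version)
    # 6 participants per version -> each combination is observed 6 times.
    print("version %d: %d participants, combinations %s" % (version, n, pairs))
</syntaxhighlight>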
 
=== The script (Dutch and English) ===
Below, the two scripts for the casual and serious conversations can be seen. The experiments were performed in Dutch; English translations are given in italics for non-Dutch readers. The participant's lines are marked '''u''' (Dutch for 'you').
 
 
'''Casual conversation'''
 
'''Robot''':
Hoi goed om je weer te zien. Hoe gaat het met je?
''Hi, good to see you again. How are you?''
 
'''u''':
Het gaat lekker.
''It is going well.''
 
'''Robot''':
Dat is goed om te horen. Alleen het weer is wat minder vandaag helaas.
''That's good to hear. Only the weather is bad today, unfortunately.''
 
'''u''':
Ja klopt. ik heb zin in de zomer.
''Yes. I'm looking forward to the summer.''
 
'''Robot''':
Inderdaad. Ik kan niet wachten om aan het strand te liggen. en jij?
''Indeed. I can't wait to lie on the beach. and you?''
 
'''u''':
Ik ga liever barbecuen/terassen
''I prefer to barbecue / go to a terrace''
 
'''Robot''':
Dat lijkt me inderdaad ook erg leuk om te doen.
''I think it would be fun to do.''
 
'''u''':
Met vrienden samen iets doen is altijd gezellig.
''Doing something with friends is always fun.''
 
'''Robot''':
Jazeker, ik ga met mijn maatjes een rondreis maken tijdens de vakantie, heb jij nog plannen?
''Yes, I'm going on a tour with my buddies during the holidays, do you have any plans?''
 
'''u''':
Ik ga denk ik vaak zwemmen.
''I think I will go swimming often.''
 
'''Robot''':
Wat leuk. dan heb je iets om naar uit te kijken.
''How nice. then you have something to look forward to.''
 
'''u''':
Inderdaad heb er veel zin in.
''Indeed I am looking forward to it.''
 
'''Robot''':
Ik moet er weer vandoor. Leuk je weer gesproken te hebben.
''I have to run again. Nice to have talked to you again.''
 
'''u''':
Oke, tot de volgende keer!
''Okay, until next time!''
 
'''Robot''':
Doei!
''Bye!''
 
 
'''Job interview'''
 
'''Robot''':
Hallo, welkom bij dit sollicitatiegesprek. Wat zijn uw beste vaardigheden?
''Hello, welcome to this interview. What are your best skills?''
 
'''u''':
Ik ben goed in samenwerken.
''I am good at collaborating''
 
'''Robot''':
Dat is goed om te horen. Welke opleiding heeft u gedaan?
''That's good to hear. What study did you do?''
 
'''u''': 
(kies je eigen opleiding)
''(choose your own education)''
 
'''Robot''':
Dat komt goed uit. Zulke mensen kunnen we goed gebruiken in ons bedrijf. Bent u op zoek naar een volledige werkweek of een halve werkweek?
''That is convenient. We can put such people to good use in our company. Are you looking for a full working week or a half working week?''
 
'''u''': 
(kies uit een volledige werkweek of een halve werkweek)
''(choose from a full working week or a half working week)''
 
'''Robot''':
Oke, daarvoor hebben we nog een plek beschikbaar. Binnenkort kunt u een mail verwachten met het contractvoorstel.
''Okay, we still have a place available for that. You can expect an email with the contract proposal soon.''
 
'''u''':
Daar kijk ik naar uit bedankt.
''I am looking forward to that, thank you.''
 
'''Robot''':
Tot ziens!
''Bye!''
 
=== The questionnaire ===
To determine the user's perception of the NAO robot, the Godspeed questionnaire<ref name=godspeed/> is used. Godspeed uses a 5-point scale and five categories: Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety. As noted in the questionnaire itself, there is an overlap between Anthropomorphism and Animacy. The actual questionnaire can be found here: https://www.overleaf.com/7569336777bwmwdmhvmmbc. The different categories are used to represent the importance of idle movements. Ideally, all categories would score better with idle movements; this is further discussed in the chapter 'The results' below. Regardless of its size, the questionnaire has its flaws and limitations, which are discussed in the chapter 'Limitations'.
 
=== The results ===
 
The data acquired from experiment B1 was analyzed using SPSS. Because the Godspeed questionnaire was used, the data was acquired as scores on a five-point Likert scale. The results were analyzed by taking the mean score of each of the five categories per conversation type. In total, seven different conditions were tested (the six conversation combinations plus the baseline measurement), which are all explained in previous chapters. The following bar chart shows the mean score of every category per condition.
 
[[File:image77.png|center|thumb|550px|Figure 7: Bar chart for the mean scores of the godspeed questionnaire]]
 
To explain the graph: the y-axis shows the mean scores on the five-point Likert scale, and the x-axis shows the five categories from the Godspeed questionnaire. Each category is subdivided into bars that each represent a condition (a code sketch of this aggregation is given after the list):
*Before experiment: Before the experiment started, the participants filled in the Godspeed questionnaire to set a baseline to compare the results to.
*Casual Casual: The casual conversation script mentioned previously, supplemented with the casual idle movements.
*Serious Serious: The serious conversation script mentioned previously, supplemented with the serious idle movements.
*Casual No: The casual conversation script mentioned previously, using no movements at all.
*Serious Casual: The serious conversation script mentioned previously, supplemented with the casual idle movements.
*Casual Serious: The casual conversation script mentioned previously, supplemented with the serious idle movements.
*Serious No: The serious conversation script mentioned previously, using no movements at all.
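To illustrate the aggregation step behind Figure 7, the sketch below computes the same per-category, per-condition means with pandas. The rows shown are hypothetical placeholders, not the actual questionnaire data.
<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical placeholder rows: one row per participant x category x condition.
rows = [
    ("before",         "Anthropomorphism", 2.8),
    ("casual_casual",  "Anthropomorphism", 3.1),
    ("serious_casual", "Animacy",          3.6),
    # ... the real analysis uses all 54 filled-in questionnaires
]
df = pd.DataFrame(rows, columns=["condition", "category", "score"])

# Mean Likert score per Godspeed category per condition, as plotted in Figure 7.
means = df.groupby(["category", "condition"])["score"].mean().unstack()
print(means)
</syntaxhighlight>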
 
===='''Chart analysis'''====
 
'''Anthropomorphism'''
 
Contrary to expectation, the ''serious conversation with casual movements'' and the ''casual conversation with serious movements'' peak, with the serious conversation being the highest, followed by the ''serious conversation with serious movements'' and the ''casual conversation with casual movements'' (not taking the no-movement combinations into account). The only big difference is that the ''casual conversation with casual movements'' has a lower mean than the mean before the experiment. This would suggest that the movements for the casual and serious conversations should be swapped for a higher mean value for anthropomorphism. What is remarkable about this result is that the no-movement combinations are lower than every other bar except the ''casual conversation with casual movements''. This contradicts the expectation, as the matching combinations were expected to be among the largest. However, even though the mean values of the combinations differ, the differences are not significant.
 
'''Animacy''' 
 
The bar chart for animacy has one big peak compared to the 'before experiment' mean: the ''serious conversation with casual movements'', with an increase of about 0.5 points on the 1-5 scale. It is followed by the ''serious conversation with serious movements'' and the ''casual conversation with casual movements''; all three increased compared to the 'before experiment' mean, though the increase is small for ''serious serious'' and ''casual casual''. The opposite happened for the ''casual conversation with serious movements'', the ''casual conversation with no movements'' and the ''serious conversation with no movements'', which decreased compared to the baseline measurement. The only really big increase and decrease are those of the ''serious conversation with casual movements'' and the ''serious conversation with no movements'', respectively; the other four combinations do not change significantly.
 
'''Likeability'''
 
Here, only one combination increased relative to the baseline mean: the ''serious conversation with serious movements'' performs (just slightly) better than the baseline. The other bars score lower than the premeasured mean, with the ''casual conversation with casual movements'' scoring second best, followed by the ''casual conversation with serious movements'', the ''casual conversation with no movements'' and the ''serious conversation with casual movements''. Again, the ''serious conversation with no movements'' scores lowest. This corresponds with the expectation, as the two 'matching' combinations occupy the top two bars. However, as before, the increases and decreases are so small that it is uncertain whether a definite conclusion can be drawn from them.
 
'''Perceived Intelligence'''
 
The two bars that peak above the premeasured mean are both ''serious conversations'', with the casual movements peaking higher than the serious movements. The others decreased relative to the premeasured value: the ''casual conversation with casual movements'' is just a little lower, followed by the ''casual conversation with serious movements''; the two no-movement combinations are again the smallest. Except for the ''serious conversation with casual movements'' and the no-movement conversations, the increases and decreases are relatively small and no direct conclusion can be drawn from them. On top of that, the result contradicts the expectation.
 
'''Perceived Safety'''
 
The only increase relative to the premeasured mean is the ''serious conversation with serious movements''. The ''casual conversation with casual movements'' shows a small decrease in the mean; below that comes the ''serious conversation with casual movements'', followed by the two no-movement combinations and the ''casual conversation with serious movements'', which score almost equally. This result corresponds with the expectation, and the differences are rather big, so this result can be used to draw conclusions.
 
===Conclusion ===
Overall, it can be said that no movements give worse results in all five categories than the two conversations with movements. This is most likely because the robot looks really static and fake when it does not move while talking, as can be seen from the fact that the no-movement bars lie below the mean before the experiment. Especially the ''serious conversation without movements'' scores low compared to the others. It can be concluded that serious conversations are impacted by the use of movements much more than casual conversations. As discussed later in this paragraph, the serious conversations score best overall in these bar charts.
 
Of all the conversations with movements, one combination peaks in three of the five graphs: the ''serious conversation with casual movements'' (orange bar), which peaks in anthropomorphism, animacy, and perceived intelligence. On top of that, the combination that peaks in every bar chart consists of a serious conversation (with either serious or casual movements), suggesting that serious conversations are generally perceived as better than casual conversations; perhaps the robot is destined for serious conversations only. In addition, the runner-up is usually the same type of conversation but with the other movement type: for example, if the best result is given by the ''serious conversation with casual movements'', the second best is the ''serious conversation with serious movements'' (see animacy and perceived intelligence). The opposite combination occurs for likeability and perceived safety. Anthropomorphism's runner-up is the ''casual conversation with serious movements'', but after that comes the ''serious conversation with serious movements'', suggesting almost the same.
 
However, as the differences between the bars are not immense, this cannot be said with certainty. What can be said, as a result of this experiment, is that performing no movements while listening is a no-go (the robot should always perform a movement while listening) and that serious conversations seem to be the better type of conversation for the NAO robot to perform.
 
=== Limitations ===
''' The participants '''
 
Ideally, more participants would be used, so that the user groups would be better represented and the data could show differences between user groups. Unfortunately, this is not possible within the resources, limitations, and duration of the project. On top of that, since the robot does not know what it is saying, the participants might still find the robot static.
 
 
 
''' The Godspeed questionnaire '''
 
The interpretations of the results of the Godspeed questionnaire<ref name=godspeed/> do have limitations, as explained above. These are:
*It is extremely difficult to determine the ground truth: as all answers are relative, it is hard to establish what people really think.
*Many factors influence the measurements (e.g. cultural background, prior experience with robots), which can create outliers; too many outliers would make the data unusable.
 
Because of these limitations, the results of the measurements cannot be used as absolute values, but rather to see which option of idling is better.
 
== '''Experiment B2''' ==
Experiment B2 is divided into different parts. These are:
*Setting up experiment B2
*The participants
*The questionnaire
*Results
*Chart analysis
*Limitations
 
=== Setting up experiment B2 ===
During experiment B1, conversations with a team member and the NAO robot are filmed (these can be seen in the questionnaires mentioned down below). Two different conversations, serious and casual, were filmed with different idle movements:
*Serious conversation with serious idle movements   
*Serious conversation with casual idle movements   
*Serious conversation with no idle movements       
 
 
*Casual conversation with serious idle movements
*Casual conversation with casual idle movements
*Casual conversation with no idle movements
 
First, each participant filled in a questionnaire after seeing a picture of a NAO robot, and then watched two conversations, each followed by a questionnaire. The six different conversations were divided between the participants so that the online survey would not take too much time. This resulted in three questionnaires, which can be found below:
 
1: https://forms.gle/zx7HH38f8Tjqqwee6
 
2: https://forms.gle/9zhfTbCHRCNMgR1b9
 
3: https://forms.gle/ZjX54tzKW89LntgH9
 
The first questionnaire has a casual conversation with the corresponding idle movements as well as a serious conversation with the corresponding idle movements.
 
The second questionnaire has a casual conversation without idle movements and a serious conversation with the idle movements intended for the casual conversation.
 
The third questionnaire has a casual conversation with the idle movements intended for a serious conversation and a serious conversation without idle movements.
 
=== The participants ===
A total of 24 participants took part in this online experiment, ranging in age from 17 to 60 years.
 
=== The questionnaire ===
The same questionnaire as in B1, the Godspeed questionnaire, was used.
 
=== The results ===
 
The data acquired from experiment B.2 was analyzed using SPSS. Because the Godspeed questionnaire was used, the data consists of scores on a five-point Likert scale. The results were analyzed by taking the mean score of each of the five categories per conversation type. In total, seven conditions were tested, all explained in the previous chapters. The following bar chart shows the mean score of every category per conversation type.
 
[[File:imageJohnny.png|center|thumb|550px|Figure 8: Bar chart for the mean scores of the godspeed questionnaire]]
 
To give a small explanation of the graph: on the y-axis are the mean scores on the five-point Likert scale. On the x-axis are the five categories of the Godspeed questionnaire. Each category is subdivided into different bars, each representing a conversation type.
*Before experiment: Before the experiment started, the participants filled in the Godspeed questionnaire to set a baseline to compare the results to.
*Casual Casual: The casual conversation script mentioned previously, supplemented with the casual idle movements.
*Serious Serious: The serious conversation script mentioned previously, supplemented with the serious idle movements.
*Casual No: The casual conversation script mentioned previously, using no movements at all.
*Serious Casual: The serious conversation script mentioned previously, supplemented with the casual idle movements.
*Casual Serious: The casual conversation script mentioned previously, supplemented with the serious idle movements.
*Serious No: The serious conversation script mentioned previously, using no movements at all.
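As an illustration of the analysis step described above, the same per-category means could also be computed outside SPSS. Below is a minimal sketch, assuming the responses were exported to a CSV file with one row per completed questionnaire; the file name and column names are hypothetical:

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical layout: one row per completed Godspeed questionnaire,
# with the condition label and one 1-5 Likert score per category.
df = pd.read_csv("godspeed_b2.csv")  # columns: condition, anthropomorphism,
                                     # animacy, likeability, intelligence, safety

categories = ["anthropomorphism", "animacy", "likeability",
              "intelligence", "safety"]

# Mean Likert score of each category per conversation type,
# mirroring the bars in figure 8.
means = df.groupby("condition")[categories].mean().round(2)
print(means)
</syntaxhighlight>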
 
===='''Chart analysis B2'''====
 
'''Anthropomorphism'''
 
In line with expectations, the ''serious conversation with serious movements'' and the ''casual conversation with casual movements'' peak, with the serious conversation being the highest, followed by the ''serious conversation with casual movements''. What is remarkable about this result is that the no-movement combinations score lower than every other combination except the ''casual conversation with serious movements'', which contradicts what was expected. Furthermore, the means of all conditions lie higher than before the experiment.
 
'''Animacy'''
 
The bar chart for animacy has three peaks which lie quite close together: ''casual conversation with casual movements'', ''serious conversation with serious movements'' and ''serious conversation with casual movements''. After that comes ''casual conversation with no movements'', followed by ''casual conversation with serious movements'' and ''serious conversation with no movements''. The means of all conditions are higher than the mean before the experiment.
 
'''Likeability'''
 
The peak is at the ''casual conversation with casual movements''. The means of ''casual conversation with serious movements'' and ''serious conversation with no movements'' lie lower than the mean before the experiment. The means of ''serious conversation with serious movements'', ''casual conversation with no movements'' and ''serious conversation with casual movements'' are a bit higher than before the experiment. What stands out is that ''casual conversation with no movements'' has a high mean score, which was not expected.
 
'''Perceived Intelligence'''
 
The mean of ''serious conversation with serious movements'' is the highest, followed by ''casual conversation with casual movements'' and then ''serious conversation with casual movements''. What stands out is that the means of all conditions are higher than the mean before the experiment, meaning that seeing the robot positively influenced its perceived intelligence.
 
'''Perceived Safety'''
 
The condition standing out is the ''casual conversation with serious movements'': its mean is lower than before the experiment, a result that was not expected. The means of the other conditions are a bit higher than before the experiment, with ''casual conversation with casual movements'' having the highest mean. What also stands out is that the ''casual conversation with no movements'' is the second-highest. A possible explanation is that the absence of movement gives the viewer the impression that the robot is harmless, as it does not appear capable of precise movements anyway.
 
=== Conclusion ===
Overall, it can be said that the matching combinations (''casual conversation with casual movements'' and ''serious conversation with serious movements'') give the expected result: they are received better overall. The ''casual conversation with casual movements'' peaks in two of the five categories and the ''serious conversation with serious movements'' in the remaining three. It is remarkable that the spread of the means is overall smaller for the ''casual conversation with casual movements'' than for the ''serious conversation with serious movements'', meaning that the results of the latter are less reliable than those of the ''casual conversation with casual movements''.
 
Furthermore, it is remarkable that in three categories the means of all combinations increased relative to the premeasured mean; in a fourth category all but one combination scored higher, and in the last category two bars scored lower than the premeasured mean. This means that, regardless of the combination of conversation and movement, the view of the NAO robot is positively influenced by seeing the robot on a video, although for likeability and perceived safety this is less certain.
 
Contrary to expectations, the ''casual conversation with no movements'' has a high mean with an average spread, while the ''serious conversation with no movements'' is perceived as the overall worst. On top of that, the ''casual conversation with no movements'' scores better in all categories than the ''casual conversation with serious movements''. For the opposite combination, the ''serious conversation with no movements'' scores worse than the ''serious conversation with casual movements'', suggesting that the wrong type of movement mainly harms the casual conversation.
 
Last but not least, there does not seem to be a large difference between the two types of conversation (serious and casual), meaning that over the internet one conversation type is not perceived as better than the other.
Since the differences between the types of movements are significant, it can be said that using the right movements for each conversation contributes to how the robot is perceived on video.
 
=== Limitations ===
 
Conducting an online questionnaire always brings limitations to an experiment, especially when the participants are asked what they think about a social robot. Although the participants did not interact with the robot, nor were they physically present to see it in person, we still wanted to conduct the online questionnaire, as it gives a better representation of the user base, which was one of the limitations of experiment B1.
 
== '''Feedback loop on the users''' ==
In the first two experiments, the primary focus was to find out whether the robot's appearance would be more 'natural' with the right idle motions. This was determined without telling the participants what type of movements accompanied their conversation, which was a good way to probe the human perception of robots. Another way to involve the users more directly was simply to ask them: 'Do you think these idle motions make the robot better?'
=== '''Primary users''' ===
First, the primary users were considered. In elderly care a casual conversation is the most relevant: the elderly want to be comfortable and have fun with the robot. Several elderly people were shown the videos of the casual conversations, and the differences between the movements were explained. Their reaction was very positive. Something that stood out compared with the other user groups is that the elderly were simply very impressed by the robot; they tend to anthropomorphize a bit more. They remarked how smart the robot was and liked all its functions. They nevertheless agreed with implementing casual motions rather than no or serious movements, because in the future the elderly will have more experience with robots and this initial amazement will fade.
 
=== '''Secondary users''' ===
The secondary users were asked the same questions as the primary users. The secondary users available for questioning in this short time, and under the corona restrictions, consisted of a general practitioner, a nurse, and a salesman. They reacted more soberly to the videos, rather than with 'wow, this is cool', and they better understood the reason for researching different idle motions. Most of the secondary users fall into the 'serious' category, because in a business setting or when going to the doctor you do not know the other person very well. These users advised using serious movements when social robots are deployed in their professions in the future; it could make a difference.
 
=== '''Final takeaway''' ===
As read before, the reactions of the users were relatively positive, especially in the first group. The explanation of the project made sense to all users. However, when seeing the video of the NAO robot with its limited degrees of freedom, some users were slightly less positive. They explained that with current robots the movements can look odd, and that the distinction between serious and casual movements would hardly be noticed; only the odd movements the robot made would stand out. So in the future, with more human-like robots, they would use these idle motions when robots work in their industry, but for now the robot's motions are still too limited.
 
== '''Conclusion and Discussion'''==
=== '''Conclusion'''===
'''Which idle movements are considered the most appropriate in human-human interaction?'''
 
By using two different types of conversation (casual and serious) and a neutral condition (waiting/in rest), it was determined which idle movements are most appropriate in human-human interaction. The first indication came from a Japanese paper suggesting eleven movements that humans perform often<ref name=japanpaper/>. By tallying the number of repetitions of each movement per category, it was found that (for listening) certain movements clearly peaked above the others. For the casual conversation, neck movement peaked highest, followed by arm movement, mouth movement, touching the face and hand movement. For the serious conversation, hand movement peaked highest, followed by looking around, arm movement, body stretching, touching the face and neck movement. The big difference between the two is that the frequency differences between the peaks were much larger for the serious conversation. This means the difference in idle movements between the two conversation types is clearly visible, as the serious conversation shows less variation over the eleven movement types.
For waiting, the peaks were neck movement, looking around, and mouth movement. The waiting movements were not surprising and were not used further in the research.
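As an illustration of this tallying step, below is a minimal sketch, assuming the observations are recorded as a simple list of (category, movement) labels; the data shown is made up and merely illustrates the bookkeeping:

<syntaxhighlight lang="python">
from collections import Counter

# Hypothetical annotations: one (category, movement) pair per idle
# movement observed in the watched videos (labels are illustrative).
annotations = [
    ("casual", "neck movement"), ("casual", "arm movement"),
    ("casual", "neck movement"), ("serious", "hand movement"),
    ("serious", "looking around"), ("serious", "hand movement"),
]

# Tally the movements per conversation category and list the peaks.
tallies = {}
for category, movement in annotations:
    tallies.setdefault(category, Counter())[movement] += 1

for category, counter in tallies.items():
    print(category, counter.most_common(3))
</syntaxhighlight>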
 
 
'''Which idle movements are considered the most appropriate in human-robot interaction?'''
 
In the following experiment, combination sets were created pairing the different types of conversation with different sets of movements. As stated above, the two conversation types yielded different movements. The best four movements per type were chosen for the robot, with the restriction that the robot had to be able to perform them. As a result, some movements could not be used, such as mouth movement or body stretching (the robot either performed them too slowly or collapsed when performing them faster). This resulted (see table 1) in the neck, arm and hand movements being chosen for both conversations, supplemented by touching the face for the casual conversation and looking around for the serious conversation.
 
This setup was followed by the execution of experiment B.1, which consisted of the aforementioned movements and a conversation. The conversation type differed and, thus, also needed a different topic: the casual conversation was about upcoming plans for the summer, and the serious conversation was a job interview. A movement was performed after each question or sentence of the NAO robot. Participants (18 in total) talked to the robot and had to fill in Godspeed questionnaires (three per participant), covering anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety.
The latter experiment was done in real life, but another experiment (B.2) was done via the internet. Participants (24 in total) were shown a video and filled in the same Godspeed questionnaires. The video was a recorded conversation between the robot and one of our own team members and consisted of exactly the same combinations as experiment B.1.
 
This resulted in two completely different outcomes.
Experiment B.1 showed the opposite of what was expected: the combination of a casual conversation with serious movements and vice versa was concluded to be the best in most of the categories of the Godspeed questionnaire. This suggests that either experiment A is wrong or that movements are perceived differently in human-robot interaction than in human-human interaction.
On top of that, it was concluded that the serious conversation was perceived as the best in all categories, regardless of the type of movements performed, suggesting that in real life serious conversations are perceived better by humans in terms of the five categories. The usage of movements improved the result overall. On the other hand, the large spread of the means and the small differences between them suggest that the data cannot be fully trusted.
 
Experiment B.2 showed the expected result: the casual conversation with casual movements peaked together with its serious counterpart, in nearly all of the categories. On top of that, the spread of the means is quite small for the casual conversation with casual movements, suggesting that this type is perceived as clearly better. The serious conversation with serious movements has a large spread but is often the highest peak. Moreover, the peaks of these two combinations are considerably higher than the others, suggesting that conclusions can be drawn from this data. On the downside, there is a peak for the casual conversation with no movements, which seems off; this is addressed in the discussion.
 
Overall, the usage of movements positively stimulates the participants (and, thus, speakers). The best set of movements seems to depend on whether the speaker is directly in front of the robot or sees the robot on a screen. When people see the robot in real life, the best combinations are the casual conversation with serious movements and the serious conversation with casual movements. When people see the robot on a screen, the best combinations are the casual conversation with casual movements and the serious conversation with serious movements. However, since the differences between the means were small in experiment B.1, it would hardly matter if casual-casual and serious-serious were used in both settings, which would make the software easier to program.
 
=== '''Discussion'''===
 
'''Experiment A'''
 
The most important result from experiment A is that people tend to use idle movements to show acceptance of, or agreement with, what has been said. This matters for our goal of making robots more human-like: robots should use idle movements in conversations to show their understanding of what has been said. It can also be noted that people always move, from scratching their faces to moving their arms around. These results were implemented in experiment B.
Unfortunately, the NAO robot has a limited range of motion. Eight specific idle movements were chosen with the help of literature research and eventually confirmed by experiment A, but the NAO robot was only able to perform six of the eight idle movements.
In experiment A, it was difficult to determine into which category an idle movement would fall. All of the videos used in experiment A were taken from YouTube. This could have affected the results, as people might unconsciously behave differently when they know they are being recorded, i.e. they might use fewer idle movements, or use their hands more to show social affection. To counter this, the observations were done by two people. On top of that, significance tests were used to check whether the results were coincidental (they were not).
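As an illustration of such a check, below is a minimal sketch of a significance test on tallied movement frequencies, assuming the counts per movement type are available as a plain list; the numbers are made up and merely illustrate the call:

<syntaxhighlight lang="python">
from scipy.stats import chisquare

# Hypothetical tallies of the eleven movement types in one category
# (numbers are illustrative only).
observed = [14, 3, 2, 9, 4, 6, 21, 12, 11, 2, 5]

# Null hypothesis: every movement type occurs equally often. A small
# p-value indicates the observed peaks are unlikely to be coincidence.
result = chisquare(observed)
print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.4f}")
</syntaxhighlight>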
 
'''Experiment B1'''
 
Conversations without movements clearly score lower than conversations with movements, whether the movements were intended for the casual or the serious conversation. This is likely because the robot is then completely static, which makes it look lifeless, as expected. This tells us that idle movements are key to making a robot look human-like in a conversation. However, because not every movement was tested individually, the difference between the casual movements and the serious movements cannot be seen clearly in the results.
Ideally, the idle movements would be tested individually to get a clear result on which idle movements work best, especially with robots that have a limited range of motion, like the NAO robot. Testing this thoroughly, however, would require many more participants.
Experiment B1 would have benefited from more participants. The user groups would be better represented, especially if those participants worked in healthcare, and the data could show differences between user groups, which would benefit the feedback loop for the primary users. Unfortunately, with the resources, limitations, and duration of the project, this was not possible.
 
'''Experiment B2'''
 
In contradiction to experiment B1, the right combination of movements did show the expected results. This shows that the view of the NAO robot is positively influenced by seeing it on a video. However, the results of experiment B2 cannot be compared to a real-life conversation with a robot.
Experiment B2 would also have benefited from more participants. Because this experiment was conducted online, participants who work in healthcare were asked to participate, in order to counteract the limitation of experiment B1. However, not enough primary users were reached to draw conclusions from this data. Given the setup used in experiment B1, it was decided not to change the set-up of experiment B2.
Although experiment B2 is an extension of experiment B1, its data cannot be used on its own, because the social interaction with the NAO robot is replaced by an online video.
 
== ''' Follow up study '''==
For studies that want to dig in deeper into this subject, the following advice is given:
*More research can be conducted into the idle movements of humans. As explained before, the idle movements that were found were those of people who knew they were being filmed.
 
*Perform experiments B1 and B2 with a robot that is able to perform all the idle movements that were found in experiment A and literature.
 
*Do experiments B1 and B2 with a higher number of participants so the human-robot interaction of different users can be researched.
 
== '''State of the Art''' ==
<I>The papers are assigned to different categories, these categories indicate the contribution the papers can have towards our project.</I>
 
=== Foundation ===
<I>These papers lay the foundation of our project, our project will build on the work that has been done in these papers.</I>
 
Research work presented in this paper<ref name=Torta /> addressed the introduction of a small humanoid robot into elderly people's homes, providing novel insights in the areas of robotic navigation, non-verbal cues in human-robot interaction, and the design and evaluation of socially assistive robots in smart environments. The results reported throughout the thesis lie in one or multiple of these areas, indicating the multidisciplinary nature of research in robotics and human-robot interaction. Topics like robotic navigation in the presence of a person, adding a particle filter to existing navigation algorithms, attracting a person's attention with a robot, and the design of a smart home environment are discussed in the paper. Our research builds on this work.
 
This paper<ref>Kanda, T., Ishiguro, H., Imai, M., and Ono, T. (2003). Body Movement Analysis of Human-Robot Interaction http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.3393&rep=rep1&type=pdf</ref> presents a method for analyzing human-robot interaction by body movements. Future intelligent robots will communicate with humans and perform physical and communicative tasks to participate in daily life. A human-like body will provide an abundance of non-verbal information and enable us to smoothly communicate with the robot. To achieve this, we have developed a humanoid robot that autonomously interacts with humans by speaking and making gestures. It is used as a testbed for studying embodied communication. Our strategy is to analyze human-robot interaction in terms of body movements using a motion capturing system, which allows us to measure the body movements in detail. We have performed experiments to compare the body movements with subjective impressions of the robot. The results reveal the importance of well-coordinated behaviors and suggest a new analytical approach to human-robot interaction. This paper lays the foundation for our project, so it is key to read it carefully!
 
The behavior of autonomous characters in virtual environments is usually described via a complex deterministic state machine or a behavior tree driven by the current state of the system. This is very useful when a high level of control over a character is required, but it arguably has a negative effect on the illusion of realism in the decision-making process of the character. This is particularly prominent in cases where the character only exhibits idle behavior, e.g. a student sitting in a classroom. In this article<ref>Streck, A., Wolbers, T.: (2018). Using Discrete Time Markov Chains for Control of Idle Character Animation 17(8). https://ieeexplore.ieee.org/document/8490450</ref>, we propose the use of discrete-time Markov chains as the model for defining realistic non-interactive behavior and describe how to compute decision probabilities normalized by the length of individual actions. Lastly, we argue that these allow for more precise calibration and adjustment of the idle behavior model than the models currently employed in practice.
 
=== Foundation / Experiment ===
This paper<ref name=japanpaper /> describes the design of idle motions for service robots in their standby state. The robots developed so far show proper reactions only when there is a certain input from humans or the environment. In order to find behavioral patterns in the standby state, the authors observed clerks at an information center via video ethnography and extracted idle motions from the video data. They analyzed these idle motions and applied the extracted characteristics to their robot. Moreover, using a robot that can show this series of expressions and actions, the research seeks to show that people can feel the robot is alive.
 
=== Experiment ===
<I>These papers will be useful when preparing or setting up our experiments.</I>
 
The design of complex dynamic motions for humanoid robots is achievable only through the use of robot kinematics. In this paper<ref>Kofinas, N., Orfanoudakis, E., & Lagoudakis, M. G. (2015). Complete analytical forward and inverse kinematics for the NAO humanoid robot. Journal of Intelligent & Robotic Systems, 77(2), 251-264.</ref>, we study the problems of forward and inverse kinematics for the Aldebaran NAO humanoid robot and present a complete, exact, analytical solution to both problems, including a software library implementation for real-time onboard execution. The forward kinematics allow NAO developers to map any configuration of the robot from its own joint space to the three-dimensional physical space, whereas the inverse kinematics provide closed-form solutions to finding joint configurations that drive the end effectors of the robot to desired target positions in the three-dimensional physical space. The proposed solution was made feasible through decomposition into five independent problems (head, two arms, two legs), the use of the Denavit-Hartenberg method, the analytical solution of a non-linear system of equations, and the exploitation of body and joint symmetries. The main advantage of the proposed inverse kinematics solution compared to existing approaches is its accuracy, its efficiency, and the elimination of singularities. In addition, we suggest a generic guideline for solving the inverse kinematics problem for other humanoid robots. The implemented, freely-available, NAO kinematics library, which additionally offers center-of-mass calculations and Jacobian inverse kinematics, is demonstrated in three motion design tasks: basic center-of-mass balancing, pointing to a moving ball, and human-guided balancing on two legs.
 
The ability to realize human-motion imitation using robots is closely related to developments in the field of artificial intelligence. However, it is not easy to imitate human motions entirely owing to the physical differences between the human body and robots. In this paper<ref>Work chain-based inverse kinematics of robot to imitate human motion (2018). ETRI Journal. https://onlinelibrary.wiley.com/doi/epdf/10.4218/etrij.2018-0057</ref>, we propose a work chain-based inverse kinematics to enable a robot to imitate the human motion of upper limbs in real-time. Two work chains are built on each arm to ensure that there is motion similarity, such as the end effector trajectory and the joint-angle configuration. In addition, a two-phase filter is used to remove the interference and noise, together with a self-collision avoidance scheme to maintain the stability of the robot during the imitation. Experimental results verify the effectiveness of our solution on the humanoid robot Nao-H25 in terms of accuracy and real-time performance.
 
Content‐based human motion retrieval is important for animators with the development of motion editing and synthesis, which need to search similar motions in large databases. Obtaining text‐based representation from the quantization of mocap data turned out to be efficient. It becomes a fundamental step of many pieces of research in human motion analysis. Geometric features are one of these techniques, which involve much prior knowledge and reduce data redundancy of numerical data. This paper<ref>Zhu, M., Sun, H., Lan, R., Li, B.: (2011). Human motion retrieval using topic model, 4(10), pp. 469-476. https://onlinelibrary.wiley.com/doi/abs/10.1002/cav.432</ref> describes geometric features as a basic unit to define human motions (also called mo‐words) and view a human motion as a generative process. Therefore, we obtain topic motions, which possess more semantic information using latent Dirichlet allocation by learning from massive training examples in order to understand motions better. We combine the probabilistic model with human motion retrieval and come up with a new representation of human motions and a new retrieval framework. Our experiments demonstrate its advantages, both for understanding motions and retrieval.
 
Human motion analysis is receiving increasing attention from computer vision researchers. This interest is motivated by a wide spectrum of applications, such as athletic performance analysis, surveillance, man-machine interfaces, content-based image storage and retrieval, and video conferencing. This paper<ref>Aggarwal, J., Cai, Q.: (1999). Human Motion Analysis: A Review, 1(3), pp. 428-440. https://www.sciencedirect.com/science/article/pii/S1077314298907445</ref> gives an overview of the various tasks involved in motion analysis of the human body. We focus on three major areas related to interpreting human motion: motion analysis involving human body parts, tracking a moving human from a single view or multiple camera perspectives, and recognizing human activities from image sequences. Motion analysis of human body parts involves the low-level segmentation of the human body into segments connected by joints and recovers the 3D structure of the human body using its 2D projections over a sequence of images. Tracking human motion from a single view or multiple perspectives focuses on higher-level processing, in which moving humans are tracked without identifying their body parts. After successfully matching the moving human image from one frame to another in an image sequence, understanding the human movements or activities comes naturally, which leads to our discussion of recognizing human activities.
 
=== Experiment / Extend ===
The motion of a humanoid robot is one of the most intuitive communication channels for human-robot interaction. Previous studies have presented related knowledge to generate speech-based motions of virtual agents on screens. However, physical humanoid robots share time and space with people, and thus, numerous speechless situations occur where the robot cannot be hidden from users. Therefore, we must understand the appropriate roles of motion design for a humanoid robot in many different situations. We achieved the target knowledge as motion-design guidelines based on the iterative findings of design case studies and a literature review. The guidelines are largely separated into two main roles for speech-based and speechless situations, and the latter can be further subdivided into idling, observing, listening, expecting, and mood-setting, all of which are well-distributed by different levels of intention. A series of experiments proved that our guidelines help create preferable motion designs of a humanoid robot. This study<ref> Jung, J., Kanda, T., & Kim, M. S. (2013). Guidelines for contextual motion design of a humanoid robot. International Journal of Social Robotics, 5(2), 153-169. https://link.springer.com/article/10.1007/s12369-012-0175-6</ref> provides researchers with a balanced perspective between speech-based and speechless situations, and thus they can design the motions of a humanoid robot to satisfy users in more acceptable and pleasurable ways.
 
=== Extend ===
<I>These papers include work that is related to our project but does not overlap.</I>
 
The main objective of the research project<ref>Karwowski, W. (2007). Worker selection of safe speed and idle condition in simulated monitoring of two industrial robots https://www.tandfonline.com/doi/abs/10.1080/00140139108967335</ref> was to study human perceptual aspects of hazardous robotics workstations. Two laboratory experiments were designed to investigate workers' perceptions of two industrial robots with different physical configurations and performance capabilities. The second experiment is the most useful for our research: it investigated the minimum robot idle time (inactivity) perceived by industrial workers as a system malfunction and as an indication of the 'safe-to-approach' condition. It was found that idle times of 41 s and 28 s or less for the small and large robots, respectively, were perceived by workers to be the result of a system malfunction. About 20% of the workers waited only 10 s or less before deciding that the robot had stopped because of a system malfunction. The idle times were affected by the subjects' prior exposure to a simulated robot accident. Further interpretations of the results and suggestions for the operational limitations of robot systems are discussed. This research can help to further convince people that idle motions not only make people feel comfortable near robots, but also serve as a safety measure.
 
A framework and methodology to realize robot-to-human behavioral expression is proposed in the paper<ref>Nakata, T., Sato, T. and Mori, T. (1998). Expression of Emotion and Intention by Robot Body Movement https://pdfs.semanticscholar.org/9921/b7f11e200ecac35e4f59540b8cf678059fcc.pdf</ref>. Human-robot symbiosis requires enhanced nonverbal communication between humans and robots. The proposed methodology is based on movement analysis theories from dance psychology research. Two experiments on robot-to-human behavioral expression are also presented to support the methodology: one produces familiarity through a robot-to-human tactile reaction, the other expresses a robot's emotions through its dances. This methodology will be key to realizing robots that work closely and cooperatively with humans, and it could also couple idle movements to emotions, for example when a robot wants to express that it is nervous.
 
Research has been done on the capability of a humanoid robot to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving it in various ways.<ref>Cooney, M., Kanda, T., Alissandrakis, A., & Ishiguro, H. (2014). Designing enjoyable motion-based play interactions with a small humanoid robot. International Journal of Social Robotics, 6(2), 173-193. https://link.springer.com/article/10.1007/s12369-013-0212-0</ref> Inertial sensors inside a robot can capture how its body is moved when people perform such “full-body gestures”. A conclusion of this research was that the recognition accuracy for full-body gestures was rather high (77%) and that a progressive reward strategy for responses was much more successful: it increases perceived variety, while persisting suggestions increase understanding, perceived variety, and enjoyment, and users find enjoyment in playing with a robot in various ways. This led to the conclusion that motions should be meaningful, responses rewarding, suggestions inspiring and instructions fulfilling, which in turn increases understanding, perceived variety, and enjoyment. This suggests that a robot's movements should be meaningful to humans, which is what we are trying to achieve with idle movements.
 
Another paper presents a study that investigates human-robot interpersonal distances and the influence of posture, either sitting or standing, on these distances.<ref>Obaid, M., Sandoval, E. B., Złotowski, J., Moltchanova, E., Basedow, C. A., & Bartneck, C. (2016, August). Stop! That is close enough. How body postures influence human-robot proximity. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 354-361). IEEE. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7745155</ref> The study is based on a human approaching a robot and a robot approaching a human, in which the human/robot maintains either a sitting or standing posture while being approached. The results revealed that humans allow shorter interpersonal distances when a robot is sitting or in a more passive position, and leave more space when being approached while standing. The paper suggests that future work should investigate more non-verbal behaviors between robots and humans and their effect, combined with the robot's posture, on interpersonal distances. Idle movements might positively influence feelings of safety, which could result in shorter interpersonal distances. Moreover, this research suggests that idle movements might have different effects on people in different postures, which has to be taken into account when testing the hypothesis suggested in this research.
 
This paper<ref>Dautenhahn, K., Walters, M., Woods, S., Koay, K. L., Nehaniv, C. L., Sisbot, A., Alami, R. and Siméon, T. (2006). How may I serve you?: a robot companion approaching a seated person in a helping context. https://hal.laas.fr/hal-01979221/document</ref> presents the combined results of two studies that investigated how a robot should best approach and place itself relative to a seated human subject. Two live Human Robot Interaction (HRI) trials were performed involving a robot fetching an object that the human had requested, using different approach directions. Results of the trials indicated that most subjects disliked a frontal approach, except for a small minority of females, and most subjects preferred to be approached from either the left or right side, with a small overall preference for a right approach by the robot. Handedness and occupation were not related to these preferences. We discuss the results of the user studies in the context of developing a path planning system for a mobile robot.
 
The paper<ref>Broadbent, E., Stafford, R. and MacDonald, B. (2009). Acceptance of healthcare robots for the older population: review and future directions.https://link.springer.com/article/10.1007/s12369-009-0030-6</ref> reviews the literature about human responses to healthcare robots and summarises the variables that have been found to influence responses. It may be possible to increase the acceptance of healthcare robots by properly assessing the needs of the human user and then matching the robot’s role, appearance and behavior to these needs. Because robots have limitations in their abilities, another way to increase acceptance may be to modify the expectations of users to better match robots’ abilities. However, it is stated that more research needs to be done in this field as the potential users’ needs and expectations in specific situations need to be found out. Furthermore, research has to be put into the question of how to increase the acceptance match between robots and humans.
 
=== Comparable ===
<I>Work in these papers is comparable to our project.</I>
 
In this paper<ref>Asselborn, T., Johal, W. and Dillenbourg, P. (2017). Keep on moving! Exploring anthropomorphic effects of motion during idle moments https://www.researchgate.net/publication/321813854_Keep_on_moving_Exploring_anthropomorphic_effects_of_motion_during_idle_moments</ref>, we explored the effect of a robot's subconscious gestures made during idle moments on the anthropomorphic perceptions of five-year-old children. We developed and sorted a set of adaptor motions based on their intensity. We designed an experiment involving 20 children, in which they played a memory game with two robots. During moments of idleness, the first robot showed adaptor movements, while the second robot moved its head following basic face tracking. Results showed that the children perceived the robot displaying adaptor movements as more human and friendly, and these traits were found to be proportional to the intensity of the adaptor movements. For the range of intensities tested, it was also found that adaptor movements were not disruptive to the task. These findings corroborate the fact that adaptor movements improve the affective aspect of child-robot interactions and do not interfere with the child's performance in the task, making them suitable for CRI in educational contexts. This research focuses only on children, but the concept of the research is the same as ours.
 
Research has already been done into the subtle movements of the head while waiting for a reaction from the environment.<ref>Kocoń, M., & Emirsajłow, Z. (2012). Modeling the idle movements of the human head in three-dimensional virtual environments. Pomiary Automatyka Kontrola, 58(12), 1121-1123. https://www.infona.pl/resource/bwmeta1.element.baztech-article-BSW4-0125-0022</ref> The main task of this research was to describe the subtle head movements a virtual person makes while waiting for a reaction from the environment. This technique can increase the level of realism in human-computer interactions. Even though it was tested on a virtual avatar, the head movements should be the same for a humanoid robot such as the NAO robot. These head movements might be of importance for the idle movements of the NAO robot, therefore it is important to use this paper, with both its complications and its accomplishments.
 
Furthermore, research has been done into the emotions of robots.<ref>Beck, A., Hiolle, A., & Canamero, L. (2013). Using Perlin noise to generate emotional expressions in a robot. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 35, No. 35). https://escholarship.org/content/qt4qv84958/qt4qv84958.pdf</ref> In that research, poses were used to convey emotions: five poses were created for the emotions sadness, anger, neutral, pride and happiness. The recognition rates differed between the poses, and the velocity of a movement also influences the interpretation of a pose. This has a lot in common with idle movements: idle movements have to positively influence the emotions of the user, giving the robot a more life-like appearance and, thus, a safe feeling for the user. Keeping this research in mind, the velocity of a movement has to be set right, as it might influence the interpretation of the movement (aggressive or calm) and, if set wrong, can give the user an insecure feeling.
 
In the field of starting a conversation, research has also been done.<ref>Satake, S., Kanda, T., Glas, D. F., Imai, M., Ishiguro, H., & Hagita, N. (2009, March). How to approach humans? Strategies for social robots to initiate interaction. In Proceedings of the 4th ACM/IEEE international conference on Human-robot interaction (pp. 109-116). https://dl.acm.org/doi/pdf/10.1145/1514095.1514117</ref> This research looks into a model for approaching people while they are walking. It concluded that the proposed model (which made use of efficient and polite approach behavior in three phases: finding an interaction target, interacting at a public distance, and initiating the conversation at a social distance) was much more successful than a simplistic approach (proposed: 33 out of 59 successes; simplistic: 20 out of 57 successes). Moreover, during the field trial it was observed that people enjoyed receiving information from the robot, suggesting the usefulness of a proactive approach in initiating services from a robot. Such positive feedback is also what we want from the use of idle movements, even though idle movements might not be as obvious as the conversation approach in this research. It is important to note that in this research an efficient and polite approach was both more successful and more efficient.
 
This article<ref>Srinivasan, V., Murphy, R. R., & Bethel, C. L. (2015). A reference architecture for social head gaze generation in social robotics. International Journal of Social Robotics, 7(5), 601-616. https://link.springer.com/article/10.1007/s12369-015-0315-x</ref> outlines a reference architecture for social head gaze generation in social robots. The architecture discussed here is grounded in human communication, based on behavioral robotics theory, and captures the commonalities, essence, and experience of 32 previous social robotics implementations of social head gaze. No such architecture currently exists, but such an architecture is needed to serve as a template for creating or re-engineering systems, provide analyses and understanding of different systems, and provide a common lexicon and taxonomy that facilitates communication across various communities. The resulting reference architecture guides the implementation of a social head gaze in a rescue robot for the purpose of victim management in urban search and rescue. Using the proposed reference architecture will benefit social robotics because it will simplify the principled implementations of head gaze generation and allow for comparisons between such implementations. Gazing can be seen as a part of idle movements.
 
This paper<ref>Straub, I. (2016). ‘It looks like a human!’The interrelation of social presence, interaction and agency ascription: a case study about the effects of an android robot on social agency ascription. AI & society, 31(4), 553-571. https://link.springer.com/article/10.1007/s00146-015-0632-5</ref> will focus on the establishment of a social event that refers to our case study to a social android robot simulating interaction with human users. Within an open-field study setting, we observed a shifting of interactive behavior, presence setting and agency ascription by the human users toward the robot, operating in parallel with the robots’ different modulations of activity. The activity modes of the robot range from a simple idling state, to a reactive facetrack condition and finally an interactive teleoperated condition. By relating the three types of activity modes with three types of (increasing social) presence settings—ranging from co-location to co-presence and finally, to social presence—we observed modulations in the human users’ interactive behavior as well as modulations of agency ascription toward the robot. The observations of the behavioral shift in the human users toward the robot lead us to the assumption that a fortification of agency ascription toward the robot goes hand in hand with the increase in its social activity modes as well as its display of social presence features.
 
This research<ref>Brooks, A. G. and Arkin, R. C. (2006). Behavioral overlays for non-verbal communication expression on a humanoid robot. https://smartech.gatech.edu/bitstream/handle/1853/20540/BrooksArkinAURO2006.pdf?sequence=1&isAllowed=y</ref> details the application of non-verbal communication display behaviors to an autonomous humanoid robot, including the use of proxemics, which to date has been seldom explored in the field of human-robot interaction. In order to allow the robot to communicate information non-verbally while simultaneously fulfilling its existing instrumental behavior, a “behavioral overlay” model that encodes this data onto the robot's pre-existing motor expression is developed and presented. The state of the robot's system of internal emotions and motivational drives is used as the principal data source for non-verbal expression, but in order for the robot to display this information in a natural and nuanced fashion, an additional para-emotional framework has been developed to support the individuality of the robot's interpersonal relationships with humans and of the robot itself. An implementation on the Sony QRIO is described, which overlays QRIO's existing EGO architecture and situated schema-based behaviors with a mechanism for communicating this framework through modalities that encompass posture, gesture and the management of interpersonal distance.
 
=== Support ===
<I>These papers can verify or support our work.</I>
 
In this study<ref>Cuijpers, R. H., Knops, M. A. H. (2015). Motions of Robots Matter! The Social Effects of Idle and Meaningful Motions https://www.researchgate.net/publication/281841000_Motions_of_Robots_Matter_The_Social_Effects_of_Idle_and_Meaningful_Motions</ref>, a simple joint task was used to expose our participants to different levels of social verification. Low social verification was portrayed using idle motions and high social verification was portrayed using meaningful motions. Our results indicate that social responses increase with the level of social verification in line with the threshold model of social influence. This paper verifies why our research on idle motions is needed in the first place and states that in order to have a high social human-robot interaction, idle motions are a necessity.
 
Some robots lack the possibility to produce humanlike nonverbal behavior (HNB). Using robot-specific nonverbal behavior (RNB) such as different eye colors to convey emotional meaning might be a fruitful mechanism to enhance HRI experiences, but it is unclear whether RNB is as effective as HNB. This paper<ref name=Rosenthal>Rosenthal-von der Pütten, A. M., Krämer, N. C., & Herrmann, J. (2018). The effects of humanlike and robot-specific affective nonverbal behavior on perception, emotion, and behavior. International Journal of Social Robotics, 10(5), 569-582.</ref> presents a review on affective nonverbal behaviors in robots and an experimental study. We experimentally tested the influence of HNB and RNB (colored LEDs) on users' perception of the robot (e.g. likeability, animacy), their emotional experience, and self-disclosure. In a between-subjects design, users interacted with either a robot displaying no nonverbal behavior, a robot displaying affective RNB, a robot displaying affective HNB or a robot displaying affective HNB and RNB. Results show that HNB, but not RNB, has a significant effect on the perceived animacy of the robot, participants' emotional state, and self-disclosure. However, RNB still slightly influenced participants' perception, emotion, and behavior: planned contrasts revealed that having any type of nonverbal behavior significantly increased perceived animacy, positive affect, and self-disclosure. Moreover, observed linear trends indicate that the effects increased with the addition of nonverbal behaviors (control < RNB < HNB). In combination, our results suggest that HNB is more effective in transporting the robot's communicative message than RNB.
 
== '''Presentation and Choregraphe file for the idle movements''' ==
Via the link to google docs: https://drive.google.com/open?id=1tl8hjBziPE5PN1l10R9BYrB9WI-FaVUk
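Outside Choregraphe, such idle movements come down to timed joint trajectories. Below is a minimal sketch of how a nod-like idle movement could be scripted, assuming the NAOqi Python SDK and a reachable robot; the IP address, angles and timings are placeholder values:

<syntaxhighlight lang="python">
from naoqi import ALProxy

# Placeholder connection details for a reachable NAO robot.
ROBOT_IP, ROBOT_PORT = "192.168.1.10", 9559

motion = ALProxy("ALMotion", ROBOT_IP, ROBOT_PORT)
motion.setStiffnesses("Head", 1.0)

# A small nod as an idle movement: pitch the head down, then back up.
# Angles are in radians, times in seconds (illustrative values).
names = ["HeadPitch"]
angle_lists = [[0.25, 0.0]]  # nod down, then return to neutral
time_lists = [[0.6, 1.2]]    # reach each angle at these times
motion.angleInterpolation(names, angle_lists, time_lists, True)
</syntaxhighlight>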
 
== '''Peer review''' ==
We think the project went well and every member put the same amount of effort into the project/research. We do not want any differences in grades for this project between the group members.
 
== '''Problem statement''' ==
In an ideal world and in the future robots will interact with humans in a very socially intelligent way. Robots demonstrate humanlike social intelligence and non-experts will not be able to distinguish robots and other human agents anymore. To accomplish this, robots need to develop a lot further. The social intelligence of robots needs to be increased a lot, but also the movement of the robots. Nowadays, robots don't move the way humans do. For instance when moving your arm to grab something. Humans tend to overshoot a bit[Source?]. A robot specifies the target and moves the shortest way to the object. Humans try to take the least resistance path. So this means they also use their surroundings to reach for their target. For instance, lean on a table to cancel out the gravity force. Humans use their joints more than robots do. Another big problem for a robot's motion to look human is idle movement. For humans and every other living creature in the world, it is physically impossible to stand precisely still. Robots, however, when not in action stand completely lifeless. Creating a problem for the interaction between the robot and the corresponding person. It is unsure if the robot is turned on and can respond to the human, and it also feels unnatural[source?]. Another thing is that humans are always doing something or holding something while having a conversation. To improve the interaction with humans and robots there has to be looked at human idle movements, which idle movements are most beneficial for the human-robot interaction, and which idle movements are most realistic and manageable for a robot to perform without looking too weird. In this research, we will look at all these things by observing human idle movements, test robot idle movement, and research into the most preferable movements according to contestants.
 
== '''Objectives''' ==
It is still very hard to get a robot to appear in a humanistic natural way as robots tend to be static (whether they move or not). As a more natural human-robot interaction is wanted, the behavior of social robots needs to be improved on different levels. In this project, the main focus will be on the movements that make a robot appear more natural/lifelike, which are the idle movements. The objectives of the research will be to find out which idle movements make a robot appear in a more natural way. In this research, the information will be based on previous research that has been done on this subject. More information will be gathered by applying interviews and using surveys to get the opinion of possible future users (these users will be mentioned in the chapter 'Users'). Next to this, the NAO robot will be experimented with. The NAO robot will be performing different idle movements and future users give their responses to these movements. With these acquired responses we will retrieve data that can be used to find out which idle movements make a robot appear more life-like. Due to the fact that in the future humanoid robots will also improve, possible expectations on the most important idle movements will also be given. All together we hope that this information will give greater insight into the use of idle movements on humanoid robots to be used in future research and projects on this subject.
 
== '''Users''' ==
When taking a look at the users of social robots, a lot of groups can be taken into account. This is because we don’t know what the future holds and where social robots will be used. Because focusing on all the possible users for this project would be impossible, a selection has been made of users who will likely be the first to benefit from or have to deal with social robots. This selection is based on collected research from other papers<ref>https://pure.tue.nl/ws/portalfiles/portal/3924729/766648.pdf</ref>, where these groups where highlighted and researched on their opinion on robot behavior.
 
The main focus of research papers on social robots is for future applications in elderly care, and therefore elderly people are the main subject. Another group of users who will reap the benefits of social robots are physically or mentally disabled people. The hospitals, care centers, and nurses who care for these people nowadays will also be users of social robots.
 
For the people in need of care, it is essential that these social robots are as human-like as possible. This will help them better accept the presence of a social robot in their private environment. One key element of making a robot appear life-like is the presence of idle motions.
 
Companies and manufacturers of social robots will benefit from this research, implementing it in their product and will therefore also be users. They want to offer their customers the best social robot they are able to produce, which in turn has to be as human-like as possible, and therefore it is key to include idle movements into their design.
 
== '''Approach, Milestones and Deliverables''' ==
=== Approach ===
To get knowledge about idle movements, observations have to be done on humans. Though, the idle movements will be different for various categories (see experiment A), which requires research beforehand. This research will be done via state of the art. Papers, such as papers from R. Cuijpers and/or E. Torta and Hyunsoo Song, Min Joong Kim, Sang-Hoon Jeong, Hyeon-Jeong Suk, and Dong-Soo Kwon, contain a lot of information concerning the idle movements that are considered important. Therefore, it is important to read the papers of the state of the art carefully. The state of the art will be explained briefly in the chapter ‘State of the art’. After the research, observations can be done. The perfect observation method is dependent on the category. Examples of such methods are observing people walking (or standing) around public spaces, such as on the campus of the university or on the train. However, for this research, videos or live streams will be watched on (e.g) Youtube. The tapes must film the people 'secretly' as people might behave on camera differently than off-camera. The noticed idle movements can be listed in a table and can be tallied. The most tallied idle movement will be considered the best for that specific category. However, that does not mean it will work for a robot as the responses of users might be negatively influenced. Experiment B will clarify this.
 
The best idle movement per category will be used in experiment B. This experiment will be done by using the NAO robot. The experiment makes use of a large number of participants (which will be the users, see ‘Users’). The NAO robot will start a conversation with the participant for x amount of time. This is done multiple times (depending on the number of idle movements used), once with the NAO robot not using any idle movements and, then, using different idle movements for every conversation. The used idle movements will be based on the research as listed above and are in the same order for every participant. After each conversation, the participant has to fill in Godspeed questionnaire<ref>Bartneck, C., Kulić, D., Croft, E. and Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1(1), 71–81.</ref>. The Godspeed questionnaire also includes a question about animacy. This question should also be answered with a scale between 1-5. By using the data of this experiment, a diagram can be made to the responses of the participants to the various idle movements. Via this, the best idle movement can be decided for each category. The result can also occur in a combination of various idle movements as being the best.
 
=== Milestones and Deliverables ===
See Planning V1.
 
== '''Experiment A''' ==
The first experiment will be a human-only experiment. To better understand robot idle movements, humans have to be observed. People will be observed in multiple settings, such as those mentioned in the Approach, which involve different movements. For each category it is stated below which type of video has been used (e.g., YouTube videos or live streams). The videos differ because the idle movements most likely differ between categories. For instance, when people are having a conversation with a friend, a possible idle movement is shaking a leg or foot, or biting their nails. During a job interview, however, people tend to be a lot more serious and try to focus on acting 'normal'. The different categories for the idle movements, with multiple examples to implement on the robot, are listed below. Furthermore, these examples are based on the eleven motions suggested by Song et al. (see the State of the Art): eye stretching, mouth movement, coughing, touching the face, touching hair, looking around, neck movement, arm movement, hand movement, leaning on something and body stretching.
 
{| class="wikitable" style="border-style: solid; border-width: 1px;" cellpadding="3"
!style="text-align:left;"| Category
!style="text-align:left"| Examples of idle movements
|-
| Casual conversation  || nodding, briefly looking away, scratching the head, putting the arms at the side, scratching the shoulder, changing the head position.
|-
| Waiting / in rest || lightly shaking a leg/foot up and down or to the side, putting fingers against the head and scratching, blinking the eyes (LED on/off), breathing.
|-
| Serious conversation || nodding, folding the arms, making hand gestures, lifting the eyebrows slightly, touching the face, touching/holding the arm.
|}
The listed eleven motions are used for tallying the idle movements, so that a full comparison can be made between the categories. The type of video used thus depends on the category. For each category, it will be explained which type of video has been used and why.
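To illustrate the tallying step, a minimal sketch in Python is given below. The coding scheme is the eleven motions listed above; the annotation list (one entry per motion observed while watching a video) is a hypothetical placeholder.

<syntaxhighlight lang="python">
from collections import Counter

# Coding scheme: the eleven motions used for tallying
MOTIONS = [
    "eye stretching", "mouth movement", "coughing", "touching the face",
    "touching hair", "looking around", "neck movement", "arm movement",
    "hand movement", "lean on something", "body stretching",
]

# Hypothetical annotations: one entry per motion observed in a video
observed = [
    "neck movement", "mouth movement", "neck movement",
    "arm movement", "touching the face", "neck movement",
]

tally = Counter({motion: 0 for motion in MOTIONS})  # start every motion at 0
tally.update(observed)

# Print the motions from most to least observed
for motion, count in tally.most_common():
    print("%-18s %d" % (motion, count))
</syntaxhighlight>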
 
After this research, a follow-up study has to be done. This will supplement the first experiment and serve as a setup for the second experiment, in which the actual socially intelligent robot will be tested. Using the same categories as before, people will be asked to come up with idle movements, or to choose from a list of idle movements those that they think are most natural for robots to perform. As mentioned in the problem statement, not all human idle movements will be good idle movements for robots. After this survey, a selection of idle movements in every category will be chosen and will make it to testing with the social robot in experiment B.
 
=== Casual conversation ===
Casual conversation is talking about non-serious topics, which can occur between two strangers but also between two relatives. The topics of these conversations are usually light, resulting in laughter or just calm listening. Just as in a serious conversation, attention should be paid to the speaker. During conversations, listeners show such attention towards the speaker, generally by making eye contact. Furthermore, lifelike behavior is important as it results in a feeling of trust towards the listener. Therefore, it is important for a robot to have the ability to establish such trust via idle movements.
 
Eye contact for the NAO robot has already been researched, and it has been established that it is important [ref]. The use of idle movements to gain trust is also considered important. Examples of such idle movements are nodding, briefly looking away, scratching the head, putting the arms at the side (and changing their position), scratching the shoulder and changing the head position. These idle movements correspond with what has been found in the idle movement research [ref]. In casual conversations, nodding can confirm the presence of the listener and show that attention is being paid. Briefly looking away can mimic thought, as people tend to look away and think while talking. Scratching the head mimics the same idea, as it is associated with thinking; moreover, it may give the speaker a feeling of confidence because it mimics liveliness. Putting the arms at the side is generally a gesture of confidence and relaxation. Varying which arm is at the side and the position of the legs gives the speaker the impression that the robot relaxes, which in turn gives the speaker a feeling of confidence. Scratching the shoulder is a general mimic that makes the robot look alive and confident. Changing the head position has the same purpose: it is a movement that generally occurs when thinking about, or questioning, what has been said. Both give the speaker the impression that the robot is more lifelike. All of these examples come back to the research discussed above, resulting in the stated eleven motions (see the first paragraph of 'Experiment A').
 
After knowing what idle movements to look for, it was important to find a way to tally the motions. Since there are many different ways to talk casually with someone, three different videos were used (all from YouTube). One of them is a podcast between four women <ref> Magical and Millennial Episode on Friends Like Us Podcast, Marina Franklin, https://www.youtube.com/watch?v=Vw9M4bVZCSk</ref>, another is a conversation between a Dutch politician and a Dutch presenter <ref>Een gesprek tussen Johnny en Jesse, GroenLinks, https://www.youtube.com/watch?v=v6dk6eI1qFQ</ref> and the last one is an American talk show<ref>Joaquin Phoenix and Jimmy Fallon Trade Places, The Tonight Show Starring Jimmy Fallon, https://www.youtube.com/watch?v=_plgHxLyCt4</ref>. The three videos were watched fully and each occurrence of a motion was noted. The tallying was done both for the listener and for the speaker, so that the differences between the two could also be spotted. The data of both can be seen in the bar graphs in figures 2 and 3.
 
[[File:Casual_conversation_lis.png|left|thumb|550px|Figure 2: Bar graph of the idle motions during listening]][[File:Casual_conversation_sp.png|center|thumb|550px|Figure 3: Bar graph of the idle motions during speaking]]
 
 
What can be seen in figure 2 is that neck movements are the most frequent motion, followed by mouth movements, arm movements and touching the face (respectively). This can be explained by the fact that people tend to nod a lot to show acceptance of and agreement with what has been said; head shaking also counts as a neck movement. Mouth movement consists of wetting the lips or just saying "yes", "okay" or "mhm", again with acceptance and agreement as the underlying meaning. Both motions are also a way of showing that attention is being paid. The arm movements can be explained by people scratching or simply feeling bored; nonetheless, this movement shows lifelike characteristics. Touching the face follows the same principle as the arm movement, as it can happen due to boredom, an itch or something else. Next to these four, no motions stood out.
 
From figure 3, the arm movement, eye stretching, hand movement, neck movement, and looking around stood out. The arm and hand movements can be explained by the fact that people tend to explain things via gestures, which are made with the arms and hands. Eye stretching is used to express emotion (e.g., being in shock or being surprised). The neck movement is a result of approval or disapproval of what is being told. Looking around is a factor in thinking: if people look away, they are thinking about what they remember of a scenario or about the best way to tell something. No other motions stood out.
 
In this research, nothing further will be done with the speaking data. However, it is good to see that the motions differ per action (listening or speaking) and that the eleven motions can be applied to both.
 
=== Waiting / in rest ===
 
People often find themselves waiting or in a rest position, and normally think that they are doing nothing in those situations. However, everybody makes small movements when doing nothing; this is one of the things that makes a person or object look alive. These movements are called idle movements. Many different things can be described as idle movements, but only the following will be researched here: eye stretching, mouth movement, coughing, touching the face, touching hair, looking around, neck movement, arm movement, hand movement, leaning on something and body stretching. When taking a close look at people who are doing nothing, one realizes that they make these movements more often than expected.

In this part of the experiment, different videos were watched in which people are doing nothing or are in rest. Every occurrence of one of the movements above was checked off. All results were then put together in a bar chart, showing which idle movements are performed most when waiting or in rest. The results in the bar graph below (figure 4) show that most people make movements with their neck, look around and move their mouth; arm and hand movements are also frequent. The movements that are performed most often should probably be implemented in a robot when it is in rest/waiting, to give the robot a more human-like appearance.
 
[[File:bar_chart.jpg|center|thumb|550px|Figure 4: Bar graph of the idle motions when waiting/in rest]]
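A bar chart like the one above can be generated directly from such a tally. The sketch below shows one way to do this; the counts in it are hypothetical placeholders, not the measured values shown in the figure.

<syntaxhighlight lang="python">
import matplotlib.pyplot as plt

# Hypothetical tally from the waiting/in-rest videos
counts = {
    "neck movement": 23, "looking around": 19, "mouth movement": 17,
    "arm movement": 12, "hand movement": 11, "touching the face": 7,
    "lean on something": 5, "touching hair": 4, "body stretching": 3,
    "eye stretching": 2, "coughing": 1,
}

# Sort the motions from most to least observed and plot them
motions = sorted(counts, key=counts.get, reverse=True)
plt.bar(range(len(motions)), [counts[m] for m in motions])
plt.xticks(range(len(motions)), motions, rotation=45, ha="right")
plt.ylabel("Observed occurrences")
plt.title("Idle motions when waiting / in rest")
plt.tight_layout()
plt.show()
</syntaxhighlight>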
 
=== Serious conversation  ===
A lot of time was spent researching human idle movements in a serious or formal environment, by studying the idle movements in conversations such as a job interview <ref> Job Interview Good Example copy (2016) https://www.youtube.com/watch?v=OVAMb6Kui6A</ref>, a TED talk <ref>How motivation can fix public systems | Abhishek Gopalka (2020) https://www.youtube.com/watch?v=IGJt7QmtUOk</ref> and other relatively formal interactions such as a debate <ref> Climate Change Debate | Kriti Joshi | Opposition https://www.youtube.com/watch?v=Lq0iua0r0KQ</ref> or a parent-teacher conference. Comparing these idle movements with those of other categories, such as the casual conversation, a big difference can be seen. In a serious conversation, people try to come across as smart or serious, whereas in a casual conversation people do not think about their idle movements as much. The best example of this behavior is the tendency of humans to touch or scratch their face, nose, ear or any other part of the head: these idle movements are more common in casual conversation, because in a formal interaction people try to avoid coming across as childish or impatient. Idle movements that are specific to this category are hand gestures that follow the rhythm of speech. Another common idle movement when talking is raising the eyebrows when trying to come across as convincing. When listening, a common idle movement is folding the hands, which comes across as smart or expresses an understanding of the subject. Finally, very common movements such as touching the arm or blinking are also used in casual conversations.

In previous research (a Japanese article), idle movements have been studied for any type of conversation; here the same is done per category, by counting all the idle motions in the videos mentioned above, including the idle motions mentioned earlier that already stood out without counting. The eleven idle motions that are counted are eye stretching, mouth movement, coughing, touching the face, touching hair, looking around, neck movement (including nodding), arm movement, hand movement, leaning on something (including holding something) and body stretching. Two bar charts have been made from the counts in the videos mentioned at the top of this chapter. In two of the three videos, however, only one person is relevant for counting the idle motions. In the video of the job interview, only the interviewer is considered, because this corresponds better with the other videos and it makes more sense for the robot to take the role of the interviewer rather than the applicant. The interviewer is listening most of the time, whereas in the other two videos the main characters are speaking. In a serious conversation, therefore, a distinction has to be made between speaking and listening. The two bar charts look as follows:
[[File:listening.jpg|left|thumb|550px|Figure 5: Bar graph of the idle motions during listening]][[File:speaking.jpg|center|thumb|550px|Figure 6: Bar graph of the idle motions during speaking]]
 
== '''Experiment B''' ==
Experiment B is divided into different parts. These are:
*The NAO robot
*Determining (NAO-compatible) idle movements from literature and Experiment A
*The participants
*Setting up experiment B
*The script
*The questionnaire
*Processing the data
*Hypothesis
*Limitations
 
=== The NAO robot ===
The NAO robot is an autonomous, humanoid robot developed by Aldebaran Robotics. It is widely used in education and research. The NAO robot is also used in the paper by Torta[ref], from which the idea for this project originated. The robot is packed with sensors, for example ... [ref]. It can speak multiple languages, but not Dutch; therefore the experiment will be done in English.
=== Determining (NAO compatible) Idle movements from literature and Experiment A ===
 
Unfortunately, the NAO robot is not able to perform every motion a human can, due to its limited number of links and joints. The list of human idle movements from Experiment A and the literature is:
 
*Human idle movement 1
*Human idle movement 2
*Human idle movement 3
*...
 
The human idle movements 1 through X were either found in the database[ref] or made in Choregraphe 2.1.4[ref], resulting in the following NAO idle movements:
*Idle movements of the NAO robot 1
*Idle movements of the NAO robot 2
*Idle movements of the NAO robot 3
*...
The other human idle movements cannot be tested due to the limitations of the NAO robot.
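As a sketch of how a selected idle movement can be played back on the robot, the snippet below uses the NAOqi Python SDK to run a behavior installed from Choregraphe. The robot's IP address and the behavior name are hypothetical placeholders.

<syntaxhighlight lang="python">
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # hypothetical address of the NAO

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)
behavior = ALProxy("ALBehaviorManager", ROBOT_IP, PORT)

motion.wakeUp()                        # stiffen the joints
posture.goToPosture("Stand", 0.5)      # go to a neutral standing posture

IDLE = "idle_movements/scratch_head"   # hypothetical behavior made in Choregraphe
if behavior.isBehaviorInstalled(IDLE):
    behavior.runBehavior(IDLE)         # blocks until the movement has finished
</syntaxhighlight>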
 
=== The participants ===
As explained in [ref users], there are many users that could be taken into account. Due to limitations, it was decided to recruit random participants for experiment B. Twenty participants will take part in this experiment.
 
=== Setting up experiment B ===
A room will be booked close to the social robotics lab. The experiment will be done in week 6. The experiment will be performed as follows:
*The NAO robot will start a conversation.
*The NAO robot will ask an open question, to which the participant needs to give an elaborate answer.
*During the answer, an idle movement is performed by the NAO robot.
*After the answer is given, the NAO robot asks another question.
*During the answer, the same idle movement is performed by the NAO robot.

Each conversation will have three questions, during which the same idle movement is performed. After the conversation is done, a questionnaire is filled in by the participant.
The participant will undergo X conversations with the NAO robot (X depends on the number of idle movements); a sketch of one such conversation loop is given below.
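A minimal sketch of this procedure, assuming the NAOqi Python SDK, a hypothetical installed idle behavior and a fixed answering time per question, could look as follows:

<syntaxhighlight lang="python">
import time
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # hypothetical address of the NAO

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
behavior = ALProxy("ALBehaviorManager", ROBOT_IP, PORT)

QUESTIONS = [  # hypothetical open questions; the real ones come from the script
    "What did you do last weekend?",
    "What is your favourite holiday destination, and why?",
    "What kind of music do you like?",
]
IDLE = "idle_movements/nodding"        # hypothetical behavior name
ANSWER_TIME = 30.0                     # seconds the participant gets per answer

for question in QUESTIONS:
    tts.say(question)
    behavior.post.runBehavior(IDLE)    # play the idle movement without blocking
    time.sleep(ANSWER_TIME)            # wait while the participant answers
    behavior.stopBehavior(IDLE)
</syntaxhighlight>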
 
=== The script ===
 
 
=== The questionnaire ===
To determine the users' perception of the NAO robot, the Godspeed questionnaire[2] will be used. The Godspeed questionnaire uses a 5-point scale and covers five categories: Anthropomorphism, Animacy, Likeability, Perceived Intelligence and Perceived Safety. As noted in [2], there is an overlap between Anthropomorphism and Animacy. The actual questionnaire can be found here: https://www.overleaf.com/7569336777bwmwdmhvmmbc
 
This questionnaire does have limitations; these are discussed in the section 'Limitations'.
 
=== Processing the data ===
 
By using the data of this experiment, a diagram can be made of the responses of the participants to the various idle movements. From this, the best idle movement can be decided for each category. The best result may also turn out to be a combination of various idle movements.
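As an illustration of such a diagram, the sketch below computes the mean rating per Godspeed category for each idle movement and plots them side by side. All scores in it are hypothetical placeholders, not measured data.

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

CATEGORIES = ["Anthropomorphism", "Animacy", "Likeability",
              "Perceived Intelligence", "Perceived Safety"]

# Hypothetical data: responses[movement][category] = list of 1-5 ratings
responses = {
    "no idle movement": {c: [2, 3, 2, 3] for c in CATEGORIES},
    "nodding":          {c: [4, 3, 4, 4] for c in CATEGORIES},
    "hand gestures":    {c: [3, 4, 4, 3] for c in CATEGORIES},
}

width = 0.25
x = np.arange(len(CATEGORIES))
for i, movement in enumerate(sorted(responses)):
    means = [np.mean(responses[movement][c]) for c in CATEGORIES]
    plt.bar(x + i * width, means, width, label=movement)

plt.xticks(x + width, CATEGORIES, rotation=20)
plt.ylabel("Mean Godspeed rating (1-5)")
plt.legend()
plt.tight_layout()
plt.show()
</syntaxhighlight>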
 
 
 
=== Hypothesis ===
 
=== Limitations ===
''' The participants '''
 
In the ideal case, more participants would be used, so that the user groups would be better represented. The data could then show differences between the different user groups. Unfortunately, with the resources, limitations and duration of the project, this is not possible.
 
''' The Godspeed questionnaire '''
 
The interpretation of the results of the Godspeed questionnaire does have limitations, as explained in [2]. These are:
*It is extremely difficult to determine the ground truth.
*Many factors influence the measurements (e.g., cultural background, prior experience with robots, etc.).

These limitations mean that the measurement results cannot be used as absolute values, but only to compare which idling option is better.
 
== '''State of the Art''' ==
<I>The bold text at the beginning of each paper indicates the category the paper is assigned to. The papers are given different categories to indicate the contribution they can have towards our project. The different categories are: Foundation, Verify, Extend, Experiment.</I>
 
<B>Foundation:</B> Research work presented in this paper<ref>Torta, E. (2014). Approaching independent living with robots. Eindhoven: Technische Universiteit Eindhoven https://pure.tue.nl/ws/portalfiles/portal/3924729/766648.pdf</ref> addressed the introduction of a small humanoid robot in elderly people’s homes, providing novel insights in the areas of robotic navigation, non-verbal cues in human-robot interaction, and design and evaluation of socially-assistive robots in smart environments. The results reported throughout the thesis lie in one or multiple of these areas, giving an indication of the multidisciplinary nature of research in robotics and human-robot interaction. Topics like robotic navigation in the presence of a person, adding a particle filter to already existing navigational algorithms, attracting a person's attention by a robot, and the design of a smart home environment are discussed in the paper. Our research will build on this paper.
 
<B>Extend:</B> The main objective of the research project<ref>Waldemar Karwowski (2007). Worker selection of safe speed and idle condition in simulated monitoring of two industrial robots https://www.tandfonline.com/doi/abs/10.1080/00140139108967335</ref> was to study human perceptual aspects of hazardous robotics workstations. Two laboratory experiments were designed to investigate workers' perceptions of two industrial robots with different physical configurations and performance capabilities. The second experiment can be useful for our research: it investigated the minimum value of robot idle time (inactivity) perceived by industrial workers as system malfunction, and as an indication of the ‘safe-to-approach’ condition. It was found that idle times of 41 s and 28 s or less for the small and large robots, respectively, were perceived by workers to be a result of system malfunction. About 20% of the workers waited for only 10 s or less before deciding that the robot had stopped because of system malfunction. The idle times were affected by the subjects' prior exposure to a simulated robot accident. Further interpretations of the results and suggestions for the operational limitations of robot systems are discussed. This research can be useful to further convince people that idle motions are not only there to make people feel comfortable near robots, but also serve as a safety measure.
 
<B>Verify:</B> In this study<ref>Raymond H. Cuijpers, Marco A. M. H. Knops (2015). Motions of Robots Matter! The Social Effects of Idle and Meaningful Motions https://www.researchgate.net/publication/281841000_Motions_of_Robots_Matter_The_Social_Effects_of_Idle_and_Meaningful_Motions</ref>, a simple joint task was used to expose our participants to different levels of social verification. Low social verification was portrayed using idle motions and high social verification was portrayed using meaningful motions. Our results indicate that social responses increase with the level of social verification in line with the threshold model of social influence. This paper verifies why our research on idle motions is needed in the first place and states that in order to have a high social human-robot interaction, idle motions are a necessity.
 
<B>Extend:</B> A framework and methodology to realize robot-to-human behavioral expression is proposed in the paper<ref>Toru Nakata, Tomomasa Sato and Taketoshi Mori (1998). Expression of Emotion and Intention by Robot Body Movement https://pdfs.semanticscholar.org/9921/b7f11e200ecac35e4f59540b8cf678059fcc.pdf</ref>. Human-robot symbiosis requires enhanced nonverbal communication between humans and robots. The proposed methodology is based on movement analysis theories of dance psychology researchers. Two experiments on robot-to-human behavioral expression are also presented to support the methodology. One is an experiment to produce familiarity with a robot-to-human tactile reaction. The other is an experiment to express a robot's emotions by its dances. This methodology will be key to realizing robots that work close to humans cooperatively, and will thus also couple idle movements to emotions, for example when a robot wants to express that it is nervous.
 
<B>Foundation:</B> This paper<ref>Takayuki Kanda, Hiroshi Ishiguro, Michita Imai, and Tetsuo Ono (2003). Body Movement Analysis of Human-Robot Interaction http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.3393&rep=rep1&type=pdf</ref> presents a method for analyzing human-robot interaction by body movements. Future intelligent robots will communicate with humans and perform physical and communicative tasks to participate in daily life. A human-like body will provide an abundance of non-verbal information and enable us to smoothly communicate with the robot. To achieve this, we have developed a humanoid robot that autonomously interacts with humans by speaking and making gestures. It is used as a testbed for studying embodied communication. Our strategy is to analyze human-robot interaction in terms of body movements using a motion capturing system, which allows us to measure the body movements in detail. We have performed experiments to compare the body movements with subjective impressions of the robot. The results reveal the importance of well-coordinated behaviors and suggest a new analytical approach to human-robot interaction. This paper lays the foundation for our project, so it is key to read it carefully!
 
<B>Comparable:</B> In this paper<ref>Thibault Asselborn, Wafa Johal and Pierre Dillenbourg (2017). Keep on moving! Exploring anthropomorphic effects of motion during idle moments https://www.researchgate.net/publication/321813854_Keep_on_moving_Exploring_anthropomorphic_effects_of_motion_during_idle_moments</ref>, we explored the effect of a robot’s subconscious gestures made during moments when idle on anthropomorphic perceptions of five-year-old children. We developed and sorted a set of adaptor motions based on their intensity. We designed an experiment involving 20 children, in which they played a memory game with two robots. During moments of idleness, the first robot showed adaptor movements, while the second robot moved its head following basic face tracking. Results showed that the children perceived the robot displaying adaptor movements to be more human and friendly. Moreover, these traits were found to be proportional to the intensity of the adaptor movements. For the range of intensities tested, it was also found that adaptor movements were not disruptive towards the task. These findings corroborate the fact that adaptor movements improve the affective aspect of child-robot interactions and do not interfere with the child’s performance in the task, making them suitable for CRI in educational contexts. This research focuses only on children, but the concept of the research is the same as ours.
 
<B>Extend:</B> Research has been done towards the capability of a humanoid robot to provide enjoyment to people, for example to a person who picks up the robot and plays with it by hugging, shaking and moving it in various ways.<ref>Cooney, M., Kanda, T., Alissandrakis, A., & Ishiguro, H. (2014). Designing enjoyable motion-based play interactions with a small humanoid robot. International Journal of Social Robotics, 6(2), 173-193. https://link.springer.com/article/10.1007/s12369-013-0212-0</ref> Inertial sensors inside a robot can capture how its body is moved when people perform such “full-body gestures”. A conclusion of this research was that the accuracy of the recognition of full-body gestures was rather high (77%) and that a progressive reward strategy for responses was much more successful. The progressive strategy for responses increases perceived variety; persisting suggestions increase understanding, perceived variety, and enjoyment; and users find enjoyment in playing with a robot in various ways. This resulted in the conclusion that motions should be meaningful, responses rewarding, suggestions inspiring and instructions fulfilling. This also results in increased understanding, perceived variety, and enjoyment. This conclusion suggests that the movements of a robot should be meaningful (to humans), which is what we are trying to achieve with idle movements.
 
<B>Comparable:</B> Research has already been done by looking into the subtle movements of the head while waiting for a reaction from the environment.<ref>Kocoń, M., & Emirsajłow, Z. (2012). Modeling the idle movements of the human head in three-dimensional virtual environments. Pomiary Automatyka Kontrola, 58(12), 1121-1123. https://www.infona.pl/resource/bwmeta1.element.baztech-article-BSW4-0125-0022</ref> The main task of this research was to describe the subtle head movements when a virtual person is waiting for a reaction from the environment. This technique can increase the level of realism in human-computer interactions. Even though it was tested on a virtual avatar, the head movements will be the same for a humanoid robot such as the NAO robot. These head movements might be of importance for the idle movements of the NAO robot; therefore it is important to use this paper, both for its accomplishments and its complications.
 
<B>Comparable:</B> Furthermore, research has been done on the emotions of robots.<ref>Beck, A., Hiolle, A., & Canamero, L. (2013). Using Perlin noise to generate emotional expressions in a robot. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 35, No. 35). https://escholarship.org/content/qt4qv84958/qt4qv84958.pdf</ref> In this research, poses were used to create emotions: five poses were created for the emotions sadness, anger, neutral, pride and happiness. The recognition rate was not the same for all poses, and the velocity of a movement also influences the interpretation of the pose. This has a lot in common with idle movements. Idle movements have to positively influence the emotions of the user (giving the robot a more life-like appearance and, thus, the user a safe feeling). Keeping this research in mind, the velocity of a movement has to be set correctly, as it might influence the interpretation of the movement (aggressive or calm) and, if set wrongly, can give the user an insecure feeling.
 
<B>Comparable:</B> In the field of starting a conversation, research has also been done.<ref>Satake, S., Kanda, T., Glas, D. F., Imai, M., Ishiguro, H., & Hagita, N. (2009, March). How to approach humans? Strategies for social robots to initiate interaction. In Proceedings of the 4th ACM/IEEE international conference on Human-robot interaction (pp. 109-116). https://dl.acm.org/doi/pdf/10.1145/1514095.1514117</ref> This research looks into a model of approaching people while they are walking. It concluded that the proposed model (which used efficient and polite approach behavior with three phases: finding an interaction target, interaction at public distance, and initiating conversation at social distance) was much more successful than a simplistic approach (proposed: 33 out of 59 successes; simplistic: 20 out of 57 successes). Moreover, during the field trial it was observed that people enjoyed receiving information from the robot, suggesting the usefulness of a proactive approach in initiating services from a robot. Such positive feedback is also wanted for the use of idle movements, even though idle movements might not be as obvious as the conversation approach in this research. It is important to note that in this research an efficient and polite approach was more successful.
 
<B>Extend:</B> Another paper presents a study that investigates human-robot interpersonal distances and the influence of posture, either sitting or standing, on these distances.<ref>Obaid, M., Sandoval, E. B., Złotowski, J., Moltchanova, E., Basedow, C. A., & Bartneck, C. (2016, August). Stop! That is close enough. How body postures influence human-robot proximity. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 354-361). IEEE. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7745155</ref> The study is based on a human approaching a robot and a robot approaching a human, in which the human/robot maintains either a sitting or standing posture while being approached. The results revealed that humans allow shorter interpersonal distances when a robot is sitting or in a more passive position, and leave more space when being approached while standing. The paper suggests that future work should investigate more non-verbal behaviors between robots and humans and their effect, combined with the robot’s posture, on interpersonal distances. Idle movements might positively influence feelings of safety, which could result in shorter interpersonal distances. Moreover, this research suggests that idle movements might have different effects on people in different postures, which has to be taken into account when testing the theory suggested in this research.
 
<B>Support:</B> Some robots lack the possibility to produce humanlike nonverbal behavior (HNB). Using robot-specific nonverbal behavior (RNB) such as different eye colors to convey emotional meaning might be a fruitful mechanism to enhance HRI experiences, but it is unclear whether RNB is as effective as HNB. This paper<ref> Rosenthal-von der Pütten, A. M., Krämer, N. C., & Herrmann, J. (2018). The effects of humanlike and robot-specific affective nonverbal behavior on perception, emotion, and behavior. International Journal of Social Robotics, 10(5), 569-582. https://link.springer.com/article/10.1007/s12369-018-0466-7</ref> presents a review on affective nonverbal behaviors in robots and an experimental study. We experimentally tested the influence of HNB and RNB (colored LEDs) on users’ perception of the robot (e.g. likeability, animacy), their emotional experience, and self-disclosure. In a between-subjects design, users interacted with either a robot displaying no nonverbal behavior, a robot displaying affective RNB, a robot displaying affective HNB or a robot displaying affective HNB and RNB. Results show that HNB, but not RNB, has a significant effect on the perceived animacy of the robot, participants’ emotional state, and self-disclosure. However, RNB still slightly influenced participants’ perception, emotion, and behavior: Planned contrasts revealed having any type of nonverbal behavior significantly increased perceived animacy, positive affect, and self-disclosure. Moreover, observed linear trends indicate that the effects increased with the addition of nonverbal behaviors (control< RNB< HNB). In combination, our results suggest that HNB is more effective in transporting the robot’s communicative message than RNB.
 
<B>Experiment:</B> The design of complex dynamic motions for humanoid robots is achievable only through the use of robot kinematics. In this paper<ref>Kofinas, N., Orfanoudakis, E., Lagoudakis, M. (2014). Complete Analytical Forward and Inverse Kinematics for the NAO Humanoid Robot, 31(1), pp. 251-264. https://link.springer.com/article/10.1007/s10846-013-0015-4</ref>, we study the problems of forward and inverse kinematics for the Aldebaran NAO humanoid robot and present a complete, exact, analytical solution to both problems, including a software library implementation for real-time onboard execution. The forward kinematics allow NAO developers to map any configuration of the robot from its own joint space to the three-dimensional physical space, whereas the inverse kinematics provide closed-form solutions to finding joint configurations that drive the end effectors of the robot to desired target positions in the three-dimensional physical space. The proposed solution was made feasible through decomposition into five independent problems (head, two arms, two legs), the use of the Denavit-Hartenberg method, the analytical solution of a non-linear system of equations, and the exploitation of body and joint symmetries. The main advantage of the proposed inverse kinematics solution compared to existing approaches is its accuracy, its efficiency, and the elimination of singularities. In addition, we suggest a generic guideline for solving the inverse kinematics problem for other humanoid robots. The implemented, freely-available, NAO kinematics library, which additionally offers center-of-mass calculations and Jacobian inverse kinematics, is demonstrated in three motion design tasks: basic center-of-mass balancing, pointing to a moving ball, and human-guided balancing on two legs.
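To make the Denavit-Hartenberg method mentioned above concrete, the sketch below composes one homogeneous transform per joint into a forward-kinematics chain. The DH parameters used here are placeholders, not NAO's actual values.

<syntaxhighlight lang="python">
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform of one link (standard Denavit-Hartenberg convention)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params, joint_angles):
    """Pose of the end effector in the base frame: the product of all link transforms."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_params, joint_angles):
        T = T.dot(dh_transform(a, alpha, d, theta))
    return T

# Placeholder two-joint planar chain: (a, alpha, d) per link, angles in radians
params = [(0.10, 0.0, 0.0), (0.08, 0.0, 0.0)]
print(forward_kinematics(params, [0.3, -0.2]))
</syntaxhighlight>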
 
<B>Comparable:</B> This article<ref>Srinivasan, V., Murphy, R. R., & Bethel, C. L. (2015). A reference architecture for social head gaze generation in social robotics. International Journal of Social Robotics, 7(5), 601-616. https://link.springer.com/article/10.1007/s12369-015-0315-x</ref> outlines a reference architecture for social head gaze generation in social robots. The architecture discussed here is grounded in human communication, based on behavioral robotics theory, and captures the commonalities, essence, and experience of 32 previous social robotics implementations of social head gaze. No such architecture currently exists, but such an architecture is needed to serve as a template for creating or re-engineering systems, provide analyses and understanding of different systems, and provide a common lexicon and taxonomy that facilitates communication across various communities. The resulting reference architecture guides the implementation of social head gaze in a rescue robot for the purpose of victim management in urban search and rescue. Using the proposed reference architecture will benefit social robotics because it will simplify the principled implementations of head gaze generation and allow for comparisons between such implementations. Gazing can be seen as a part of idle movements.
 
<B>Experiment/Extend:</B> The motion of a humanoid robot is one of the most intuitive communication channels for human-robot interaction. Previous studies have presented related knowledge to generate speech-based motions of virtual agents on screens. However, physical humanoid robots share time and space with people, and thus, numerous speechless situations occur where the robot cannot be hidden from users. Therefore, we must understand the appropriate roles of motion design for a humanoid robot in many different situations. We achieved the target knowledge as motion-design guidelines based on the iterative findings of design case studies and a literature review. The guidelines are largely separated into two main roles for speech-based and speechless situations, and the latter can be further subdivided into idling, observing, listening, expecting, and mood-setting, all of which are well-distributed by different levels of intension. A series of experiments proved that our guidelines help create preferable motion designs of a humanoid robot. This study<ref> Jung, J., Kanda, T., & Kim, M. S. (2013). Guidelines for contextual motion design of a humanoid robot. International Journal of Social Robotics, 5(2), 153-169. https://link.springer.com/article/10.1007/s12369-012-0175-6</ref> provides researchers with a balanced perspective between speech-based and speechless situations, and thus they can design the motions of a humanoid robot to satisfy users in more acceptable and pleasurable ways.
 
<B>Comparable:</B> This paper<ref>Straub, I. (2016). ‘It looks like a human!’The interrelation of social presence, interaction and agency ascription: a case study about the effects of an android robot on social agency ascription. AI & society, 31(4), 553-571. https://link.springer.com/article/10.1007/s00146-015-0632-5</ref> will focus on the establishment of a social event that refers in our case study to a social android robot simulating interaction with human users. Within an open-field study setting, we observed a shifting of interactive behavior, presence setting and agency ascription by the human users toward the robot, operating in parallel with the robots’ different modulations of activity. The activity modes of the robot range from a simple idling state, to a reactive facetrack condition and finally an interactive teleoperated condition. By relating the three types of activity modes with three types of (increasing social) presence settings—ranging from co-location to co-presence and finally, to social presence—we observed modulations in the human users’ interactive behavior as well as modulations of agency ascription toward the robot. The observations of the behavioral shift in the human users toward the robot lead us to the assumption that a fortification of agency ascription toward the robot goes hand in hand with the increase in its social activity modes as well as its display of social presence features.
 
<B>Foundation/Experiment:</B> This paper<ref>Song, H., Min Joong, K., Jeong, S.-H., Hyen-Jeong, S., Dong-Soo K.: (2009). Design of Idle motions for service robot via video ethnography. In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), pp. 195–99 https://ieeexplore.ieee.org/abstract/document/5326062</ref> describes the design of the idle motions with which service robots act in their standby state. The robots developed so far show proper reactions only when there is a certain input from humans or the environment. In order to find the behavioral patterns in the standby state, we observed clerks at an information center via video ethnography, and idle motions were extracted from the video data. We analyzed the idle motions and applied the extracted characteristics to our robot. Moreover, using a robot that can show this series of expressions and actions, it can be investigated whether people feel that the robot is alive.
 
<B>Foundation</B> The behavior of autonomous characters in virtual environments is usually described via a complex deterministic state machine or a behavior tree driven by the current state of the system. This is very useful when a high level of control over a character is required, but it arguably has a negative effect on the illusion of realism in the decision-making process of the character. This is particularly prominent in cases where the character only exhibits idle behavior, e.g. a student sitting in a classroom. In this article<ref>Streck, A., Wolbers, T.: (2018). Using Discrete Time Markov Chains for Control of Idle Character Animation 17(8). https://ieeexplore.ieee.org/document/8490450</ref>, we propose the use of discrete-time Markov chains as the model for defining realistic non-interactive behavior and describe how to compute decision probabilities to normalize by the length of individual actions. Lastly, we argue that these allow for more precise calibration and adjustment of the idle behavior model than the models currently employed in practice.
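A minimal sketch of such a discrete-time Markov chain for idle behavior is given below; the idle actions and transition probabilities are hypothetical, with each row of the transition table summing to one.

<syntaxhighlight lang="python">
import random

ACTIONS = ["breathe", "shift weight", "look around", "scratch head"]

# Hypothetical transition probabilities: P[current] lists the probability
# of moving to each action in ACTIONS (every row sums to 1.0)
P = {
    "breathe":      [0.60, 0.20, 0.15, 0.05],
    "shift weight": [0.50, 0.20, 0.20, 0.10],
    "look around":  [0.55, 0.15, 0.20, 0.10],
    "scratch head": [0.70, 0.10, 0.15, 0.05],
}

def next_action(current):
    """Sample the next idle action from the transition distribution."""
    r, cumulative = random.random(), 0.0
    for action, p in zip(ACTIONS, P[current]):
        cumulative += p
        if r < cumulative:
            return action
    return ACTIONS[-1]

# Generate a short sequence of idle actions
state = "breathe"
for _ in range(10):
    state = next_action(state)
    print(state)
</syntaxhighlight>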
 
 
<B>Experiment:</B> The ability to realize human-motion imitation using robots is closely related to developments in the field of artificial intelligence. However, it is not easy to imitate human motions entirely owing to the physical differences between the human body and robots. In this paper<ref>Zhang, M., Chen, J., Wei, X., Zhang, D.: (2018). Work chain-based inverse kinematics of robot to imitate human motion with Kinect, 7(8). https://onlinelibrary.wiley.com/doi/epdf/10.4218/etrij.2018-0057</ref>, we propose a work chain-based inverse kinematics to enable a robot to imitate the human motion of upper limbs in real-time. Two work chains are built on each arm to ensure that there is motion similarity, such as the end effector trajectory and the joint-angle configuration. In addition, a two-phase filter is used to remove the interference and noise, together with a self-collision avoidance scheme to maintain the stability of the robot during the imitation. Experimental results verify the effectiveness of our solution on the humanoid robot Nao-H25 in terms of accuracy and real-time performance.
 
<B>Experiment:</B> Content‐based human motion retrieval is important for animators with the development of motion editing and synthesis, which need to search similar motions in large databases. Obtaining text‐based representation from the quantization of mocap data turned out to be efficient. It becomes a fundamental step of many researches in human motion analysis. Geometric features are one of these techniques, which involve much prior knowledge and reduce data redundancy of numerical data. This paper<ref>Zhu, M., Sun, H., Lan, R., Li, B.: (2011). Human motion retrieval using topic model, 4(10), pp. 469-476. https://onlinelibrary.wiley.com/doi/abs/10.1002/cav.432</ref> describes geometric features as a basic unit to define human motions (also called mo‐words) and view a human motion as a generative process. Therefore, we obtain topic motions, which possess more semantic information using latent Dirichlet allocation by learning from massive training examples in order to understand motions better. We combine the probabilistic model with human motion retrieval and come up with a new representation of human motions and a new retrieval framework. Our experiments demonstrate its advantages, both for understanding motions and retrieval.
 
<B>Experiment:</B> Human motion analysis is receiving increasing attention from computer vision researchers. This interest is motivated by a wide spectrum of applications, such as athletic performance analysis, surveillance, man-machine interfaces, content-based image storage and retrieval, and video conferencing. This paper<ref>Aggarwal, J., Cai, Q.: (1999). Human Motion Analysis: A Review, 1(3), pp. 428-440. https://www.sciencedirect.com/science/article/pii/S1077314298907445</ref> gives an overview of the various tasks involved in motion analysis of the human body. We focus on three major areas related to interpreting human motion: motion analysis involving human body parts, tracking a moving human from a single view or multiple camera perspectives, and recognizing human activities from image sequences. Motion analysis of human body parts involves the low-level segmentation of the human body into segments connected by joints and recovers the 3D structure of the human body using its 2D projections over a sequence of images. Tracking human motion from a single view or multiple perspectives focuses on higher-level processing, in which moving humans are tracked without identifying their body parts. After successfully matching the moving human image from one frame to another in an image sequence, understanding the human movements or activities comes naturally, which leads to our discussion of recognizing human activities.
 
<B>Comparable:</B> This research<ref>Brooks, A. G. and Arkin, R. C. (2006). Behavioral overlays for non-verbal communication expression on a humanoid robot. https://smartech.gatech.edu/bitstream/handle/1853/20540/BrooksArkinAURO2006.pdf?sequence=1&isAllowed=y</ref> details the application of non-verbal communication display behaviors to an autonomous humanoid robot, including the use of proxemics, which to date has seldom been explored in the field of human-robot interaction. In order to allow the robot to communicate information non-verbally while simultaneously fulfilling its existing instrumental behavior, a “behavioral overlay” model that encodes this data onto the robot’s pre-existing motor expression is developed and presented. The state of the robot’s system of internal emotions and motivational drives is used as the principal data source for non-verbal expression, but in order for the robot to display this information in a natural and nuanced fashion, an additional para-emotional framework has been developed to support the individuality of the robot’s interpersonal relationships with humans and of the robot itself. An implementation on the Sony QRIO is described, which overlays QRIO’s existing EGO architecture and situated schema-based behaviors with a mechanism for communicating this framework through modalities that encompass posture, gesture and the management of interpersonal distance.
 
<B>Extend:</B> This paper<ref>Dautenhahn, K., Walters, M., Woods, S., Koay, K. L., Nehaniv, C. L., Sisbot, A., Alami, R. and Siméon, T. (2006). How may I serve you?: a robot companion approaching a seated person in a helping context. https://hal.laas.fr/hal-01979221/document</ref> presents the combined results of two studies that investigated how a robot should best approach and place itself relative to a seated human subject. Two live Human Robot Interaction (HRI) trials were performed involving a robot fetching an object that the human had requested, using different approach directions. Results of the trials indicated that most subjects disliked a frontal approach, except for a small minority of females, and most subjects preferred to be approached from either the left or right side, with a small overall preference for a right approach by the robot. Handedness and occupation were not related to these preferences. We discuss the results of the user studies in the context of developing a path planning system for a mobile robot.
 
== Time spent ==
Below the link to an overleaf file which shows the time everyone spent per week: https://www.overleaf.com/2323989585tvychvjqthwr


== '''References''' ==
<references />


This lack of natural movement creates a big problem for a robot's idle motion. An idle movement is a movement made during rest or waiting (e.g. touching your face). For humans and every other living creature, it is physically impossible to stand precisely still. Robots, however, stand completely lifeless when not in action, creating a static (and therefore unnatural) atmosphere during the interaction between the robot and the corresponding person. It is unclear whether the robot is turned on and can respond to the human, and it can also create an uncomfortable scenario for the human[2]. The latter paper suggests that social ability (which also covers movements) in a robotic interface contributes to feeling comfortable talking to it. Another point is that humans are always doing or holding something while having a conversation. To improve the interaction between humans and robots, the questions that have to be answered are: "Which idle movements are considered the most appropriate in human-human interaction?" and "Which idle movements are considered the most appropriate in human-robot interaction?". In this research, we will look into these questions by observing human idle movements, testing robot idle movements, and researching the most preferable movements according to participants.

Objectives

It is still very hard to make a robot appear in a humanlike, natural way, as robots tend to be static (whether they move or not). As a more natural human-robot interaction is wanted, the behavior of social robots needs to be improved on different levels. In this project, the main focus will be on the movements that make a robot appear more natural/lifelike during idling, called 'idle movements'. The objective of the research is to find out which set of idle movements makes a robot appear more natural. The research builds on previous work on this subject (which is explained in the chapter 'State of the art'). More information will be gathered by observing people in videos; these videos will differ in category, as will be explained later in the chapter 'Approach'. These videos will lead to a conclusion on what the best idle movements are for certain conversations.

Next to this, the NAO robot will be experimented with. The NAO robot can be seen in figure 1. This robot has many human characteristics, such as human proportions and facial properties. The only big difference is the height, as the NAO robot is just 57 centimeters tall. The NAO robot will perform different idle movements, and future users will give their responses to these movements. The acquired responses are supposed to lead to a conclusion on what the best idle movements are and will hopefully make the robot appear more lifelike. Since humanoid robots will also improve in the future, possible expectations on the most important idle movements will be given as well. Altogether, it is hoped that this information will give greater insight into the use of idle movements on humanoid robots, to be used in future research and projects on this subject.

USE

When taking a look at the stakeholders of social robots, many groups can be taken into account. This is caused by the fact that it is uncertain to what extent robots will be used in the future. Focusing on all possible users for this project would be impossible, and 'designing for everyone is designing for no-one'. A selection has been made of users who will likely be the first to benefit from, or have to deal with, social robots. This selection is based on collected research from other papers[3], where these groups were highlighted and researched on their opinion of robot behavior. The main focus of research papers on social robots is on future applications in elderly care, and therefore elderly people are the main subject. Another group of users who will reap the benefits of social robots are physically or mentally disabled people.

Users

The primary users are:

  • elderly

It is possible that in the future socially assistive robots will be implemented into care homes, to provide care and companionship to elderly people. The elderly are the primary users, as they will have the most contact with the robots. For those people in need of care, it is essential that these social robots are as human-like as possible. This will help them better accept the presence of a social robot in their private environment. One key element of making a robot appear lifelike is the presence of the idle motions studied here.

The nurses and doctors who care for these people are secondary users of socially assistive robots. They can feel threatened by social robots as the robots might take over their jobs, but can also be relieved that the robot will assist them[4].

  • physically disabled persons

Another field of application of socially active robots is helping physically disabled persons. Post-stroke rehabilitation is a possible application of the socially active robot, since stroke is a major cause of severe disability in the growing aging society. 'Motivation is recognized as the most significant challenge in physical rehabilitation and training'[5]. The challenge is 'aiming toward establishing a common ground between the human user and the robot that permits a natural, nuanced, and engaging interaction', in which idle motion could play a big role.

  • children

Humanoid robots are used in various contexts in child-robot interactions, for example, autism therapy. In order to be accepted, the robot should be perceived as socially intelligent. In order to improve human-like abilities and social expression, idle movements should be implemented.


A feedback loop towards the users has been performed; this will be addressed at the end of this research (chapter 'Feedback loop on the users').

Society

The main interest of society is the implementation of socially assistive robots in elderly care. The population of the world is getting older. It is estimated that in the year 2050, there will be three times more people over the age of 85 than there are today, while a shortage of space and nursing staff is already a problem today[5]. The implementation of socially assistive robots could help to overcome this problem, as these robots may assist the caregivers during their day at work, reducing their stress and work pressure.

Enterprise

Companies and manufacturers of socially assistive robots will benefit from this research by implementing it in their products. They want to offer their customers the best social robot they are able to produce, which in turn has to be as human-like as possible. Thus, it is key to include idle movements in their design, as this would increase their sales, resulting in more profit over time. There are already some companies selling social robots, for example, the PARO robot. PARO is an advanced interactive robot made by AIST, a Japanese industrial pioneer. It is described as animal therapy administered to patients in elderly care, and results show that it reduces patient stress and caregiver stress and improves the socialization of the patients.

Approach, Milestones and Deliverables

Approach

To gain knowledge about idle movements, observations have to be done on humans. However, the idle movements will differ per category (see experiment A), which requires research beforehand. This research will be done via the state of the art. Papers, such as those from R. Cuijpers and/or E. Torta and from Hyunsoo Song, Min Joong Kim, Sang-Hoon Jeong, Hyeon-Jeong Suk, and Dong-Soo Kwon, contain a lot of information concerning the idle movements that are considered important. Therefore, it is important to read the papers of the state of the art carefully. The state of the art will be explained in the chapter 'State of the art', which is positioned at the bottom of this document due to its size. After the research, observations can be done. The perfect observation method depends on the category. Examples of such methods are observing people walking (or standing) around public spaces, such as the campus of the university or the train. However, for this research, videos or live streams will be watched on (e.g.) YouTube. The observed idle movements can be listed in a table and tallied (a minimal sketch of this step is shown below). The most tallied idle movement will be considered the best for that specific category. However, that does not mean it will work for a robot, as the responses of users might be negatively influenced. Experiment B will clarify this.
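To illustrate the tallying step, a minimal sketch in Python (the motion labels are the motion types used later in this document; the observation sequence itself is a hypothetical placeholder, not real data):

<syntaxhighlight lang="python">
from collections import Counter

# Hypothetical sequence of motions noted while watching one video.
observed = ["neck movement", "mouth movement", "neck movement",
            "hand movement", "neck movement", "looking around",
            "mouth movement", "arm movement"]

tally = Counter(observed)
print(tally)                    # full tally table for this category
print(tally.most_common(1)[0])  # most frequent idle movement
</syntaxhighlight>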

The best idle movement per category will be used in experiment B. This experiment will be done using the NAO robot. The experiment makes use of a large number of participants (which optimally would consist of the primary users, see chapter 'Users'). The NAO robot will start a conversation with the participant for a given amount of time; this is done twice. At first, the plan was to let the robot use different idle movements per conversation: the NAO robot would talk without using any idle movements and then using different idle movements for different types of conversations. The used idle movements would be based on the research listed above and be in the same order for every participant. However, as will be explained in the chapter 'Experiment B1, Setting up experiment B1', the setup has changed: the NAO robot will have three different motion sets: casual, serious, and none. The NAO robot will perform two conversations with the participants while using one of the motion sets (the combinations of the conversations and motion sets are fixed).

The participants will have to fill in a Godspeed questionnaire[6] before the conversation, after the first conversation, and after the last conversation. This makes it possible to look into the way people's impressions change relative to the robot's behavior. The Godspeed questionnaire includes questions in different categories, such as animacy and anthropomorphism, answered on a scale from 1 to 5. Using the data of this experiment, a diagram can be made of the responses of the participants to the various idle movements. From this, the best idle movement can be decided for each category. The best result may also turn out to be a combination of various idle movements.

Milestones and Deliverables

The milestones and deliverables can be found in the planning.

Experiment A

The first experiment will be a human-only experiment. To better understand robot idle movements, humans have to be observed first. People will be observed in multiple settings, such as those mentioned in the Approach, which are expected to yield different movements. It is stated below which type of video has been used (e.g. YouTube videos or live streams). The videos differ because the idle movements most likely differ per category. For instance, when people are having a conversation with a friend, a possible idle move is shaking their leg and/or feet or biting their nails. However, in a job interview, people tend to be a lot more serious and try to focus on acting 'normal'. The different categories for the idle movements, with multiple examples to implement on the robot, are listed below. Furthermore, these examples are based on eleven motions suggested by this paper[7]: eye stretching, mouth movement, coughing, touching the face, touching hair, looking around, neck movement, arm movement, hand movement, leaning on something and body stretching. In this research, nothing will be done with the speaking research itself. However, it is good to see that the motions differ per action (listening or speaking) and that the eleven motions can be applied to both.

Category | Examples of idle movements
Casual conversation | nodding, briefly looking away, scratching the head, putting the arms in the side, scratching the shoulder, changing the head position.
Waiting / in rest | lightly shaking a leg/foot up and down or to the side, putting fingers against the head and scratching, blinking the eyes (LED on/off), breathing.
Serious conversation | nodding, folding the arms, hand gestures, lifting the eyebrows slightly, touching the face, touching/holding the arm.

The listed eleven motions are used for the tallying of the idle movements, so that a full comparison can be made between the categories. The type of video used thus depends on the category. For each category, it will be explained which type of video has been used and why. Furthermore, the analysis of each video has been done twice, as that was one of the main concerns raised during the presentation.

Casual conversation

A casual conversation is a conversation about non-serious topics, which can occur between two strangers but also between two relatives. The topics of those conversations are usually not very serious, resulting in laughter or just calm listening. No different from the serious conversation, attention should be paid to the speaker. During conversations, the listeners will show such attention towards the speaker, generally by having eye contact. Furthermore, lifelike behavior is also important, as it results in a feeling of trust towards the listener. Therefore, it is important for a robot to have the ability to establish such trust via idle movements.

Eye contact for the NAO robot has already been researched, and it has been established that it is important[8]. The usage of idle movements to gain trust is also considered important. Examples of such idle movements are nodding, briefly looking away, scratching the head, putting the arms in the side (and changing their position), scratching the shoulder and changing the head position. These idle movements correspond with what has been found in the idle movement research[7]. In casual conversations, nodding can confirm the presence of the listener and ensure that attention is being paid. Briefly looking away can mimic the movement of thought, as people tend to look away and think while talking. Scratching the head might mimic the same idea, as scratching the head is associated with thinking. Moreover, it might also give the speaker a feeling of confidence due to the mimicking of liveliness. Putting the arms in the side is generally a gesture of confidence and of relaxation. Varying the number of arms in the side and the position of the legs will give the speaker the idea that the robot relaxes, which gives the speaker a feeling of confidence. Scratching the shoulder is a general mimic that makes the robot look alive and confident. The change of the head position has the same purpose: a change of head position generally occurs when thinking and when questioning what has been said. Both reasons give the speaker the idea that the robot is more lifelike. All of these examples of idle movements trace back to the aforementioned research, resulting in the stated eleven motions (see the first paragraph of 'Experiment A').

After knowing what idle movements to look for, it was important to find a way to tally the motions. Since there are many different ways to casually talk to someone, three different videos were taken (all from YouTube). One of them is a podcast between four women[9], another one is a conversation between a Dutch politician and a Dutch presenter[10], and the last one is an American talk show[11]. The three videos were watched in full, and the occurrences of each motion were noted. The tallying was done for the listener and for the speaker, so that the differences can also be spotted. The data of both can be seen in the bar graphs in figures 2 and 3.


Figure 2: Bar chart of the idle motions for listening during a casual conversation
Figure 3: Bar chart of the idle motions for speaking during a casual conversation


In figure 2, it can be seen that neck movements are the most frequent motion, followed by mouth movements, arm movements and touching the face (respectively). This can be explained by the fact that people tend to nod a lot as a way to show acceptance of and agreement with what has been said; head shaking also counts as such a neck movement. The mouth movement is wetting the lips or just saying "yes", "okay" or "mhm", again with acceptance and agreement as the underlying meaning. Both motions are also a way of showing that attention is being paid. The arm movement can be explained by the fact that people scratch or just feel bored; nonetheless, this movement shows lifelike characteristics. Touching the face follows the same principle as the arm movement, as it can happen due to boredom, an itch or something else. Next to these four, no motions stood out.

In figure 3, the arm movement, eye stretching, hand movement, neck movement, and looking around stood out. The arm and hand movements can be explained by the fact that people tend to explain things via gestures, which are made with the arms and hands. The eye stretching is used to convey emotion (e.g. being in shock or being surprised). The neck movement is a result of approval or disapproval of things that are being told. Looking around is a factor in thinking: if people look away, they are thinking about what they remember of a scenario or about the best way to tell something. Beyond these, nothing really stood out.

To check that the results are viable, a chi-squared statistical analysis was performed for both situations, using the statistics program SPSS. This analysis compares the two independent tallying passes of the videos, to assure that the outcomes are not accidental but can actually be verified (a minimal sketch of this check is shown after the list below). The significance therefore needs to be higher than 0.05: the two passes then do not differ significantly, so the outcomes are not accidental.

  • The significance for the motions of the listener in a casual conversation is 0.218, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.
  • The significance for the motions of the speaker in a casual conversation is 0.339, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.
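For illustration, a minimal sketch of such a check in Python with scipy (SPSS was used for the real analysis; the tallies below are hypothetical placeholders, not the actual counts):

<syntaxhighlight lang="python">
from scipy.stats import chi2_contingency

# Hypothetical tallies of the eleven motions from the two analysis passes.
pass_1 = [3, 12, 1, 8, 2, 9, 21, 10, 7, 4, 2]
pass_2 = [4, 10, 2, 9, 3, 8, 19, 12, 6, 5, 1]

chi2, p, dof, expected = chi2_contingency([pass_1, pass_2])
print("chi2 = %.3f, p = %.3f" % (chi2, p))
# p > 0.05: the two passes do not differ significantly,
# so the tallies are treated as consistent rather than accidental.
</syntaxhighlight>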

Waiting / in rest

People often find themselves waiting or in a rest position. People normally think that they are doing nothing at all in those situations. However, everybody makes some small movements when doing nothing; this is one of the things that makes a person or animal look alive. There are many different things that can be described as idle movements, but this research will only look into a couple of them. When taking a close look at people who are doing nothing, one realizes that they make these movements more often than expected. In this part of the experiment, three different videos were analyzed in which people are doing nothing or are in rest. All the movements listed above were checked off when they occurred in the videos[12][13][14]. All the results were then put together and added to a bar chart. In this way, we can find out which idle movements are performed most when waiting or in rest. Figure 4 shows the bar chart of the results.

Figure 4: Bar chart for the idle motions during waiting/in rest

Looking at figure 4, it can be concluded that neck movements are performed the most. This can be explained by the fact that people move their heads while looking at their mobile phones or just nod to people walking by. The second most frequent idle movements are looking around and mouth movement. Looking around is normal during waiting, as people want to know what is happening around them, and this movement makes them seem more alive and involved in the environment. The mouth movement is mainly people licking their lips or smiling at people walking by. The lip-licking is a preparation for a conversation that might happen in the near future, and smiling shows other people that you are present. The other movement that stood out is the hand movement, which can be explained by the fact that people are nervous or just anxious when alone. The movements that are performed most often should probably be implemented in a robot when it is in rest/waiting, to give the robot a more human-like appearance.

To check that the results are viable, the same chi-squared analysis was performed for this situation using SPSS, comparing the two tallying passes; again, a significance above 0.05 confirms that the outcomes are not accidental.

  • The significance for the motions of a person waiting or in rest is 0.166, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.

Serious conversation

The human idle movements in a serious or formal environment have been studied via conversations such as job interviews [15], TED talks [16] and other relatively formal interactions such as a debate [17] or a parent-teacher conference. Comparing the idle movements with other categories, such as the friendly conversation, a big difference can be seen. In a serious conversation, people try to come across as smart or serious, whereas in a friendly conversation people do not think about their idle movements as much. The best example of this behavior is the tendency of humans to touch or scratch their face, nose, ear or any other part of the head. These idle movements are more common in friendly conversation, because in a formal interaction people try to avoid coming across as childish or impatient. The idle movements that are special in this category are moves like hand gestures that follow the rhythm of speech. When talking, another common idle movement is raising the eyebrows when trying to come across as convincing. When listening, however, a common idle movement is folding the hands, which comes across as smart or expresses an understanding of the subject. The last type of idle movements that are very common are moves like touching the arm or blinking, which are also used in friendly conversations. Previous research (the Japanese paper[7]) covered idle movements for any type of conversation; now the same will be done for these categories by counting all the idle motions in the videos mentioned above, including the idle motions mentioned earlier, which stood out even without counting. Two bar charts have been made counting all the idle motions in the videos mentioned at the top of this paragraph. However, in two of the three videos, only one person is relevant for counting the idle motions. In the video of the job interview, only the interviewer is considered, because this corresponds better with the other videos and because it makes more sense for the robot to be the employer rather than the applicant. However, the interviewer is listening most of the time, while in the other two videos the main characters are speaking. So, in a serious conversation, a distinction has to be made between speaking and listening. The two bar charts look as follows:


Figure 5: Bar chart of the idle motions for listening during a serious conversation
Figure 6: Bar chart of the idle motions for speaking during a serious conversation


What can be seen in figure 5 is that neck movements really stand out. This is caused by the fact that people tend to nod a lot during a serious conversation; nodding ensures that the person comes across as understanding or smart. Furthermore, looking around and leaning on something stand out. Looking around can be a factor of confidence and calmness during a serious conversation, showing that the person is comfortable being there. The leaning on something is mostly on a table in the job interview and ensures that the listener seems present and fully concentrated on the conversation (which is really helpful in, for example, a job interview). Furthermore, hand movement is an important factor. As already said, hands are also put over each other on the table to show presence.

Figure 6 shows a high frequency of hand movements. This is caused by the fact that people want to come across as smart or interesting, which is achieved by using gestures while talking. That also explains the peak for arm movements, as they are also important in gestures. Looking around is caused by people thinking. While thinking during a serious conversation, people look away to appear concentrated; furthermore, looking at someone while thinking might make them uncomfortable. Lastly, body stretching has a relatively high frequency. Body stretching generally gives the feeling that someone is comfortable in a conversation; it also shows that the person is present (if not done too abundantly).

To check that the results are viable, the same chi-squared analysis was performed for both situations using SPSS, comparing the two tallying passes; again, a significance above 0.05 confirms that the outcomes are not accidental.

  • The significance for the motions of the listener in a serious conversation is 0.712, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.
  • The significance for the motions of the speaker in a serious conversation is 0.577, which is higher than 0.05, so the number of movements per 5 minutes is no coincidence.

Experiment B1

Experiment B1 is divided into different parts. These are:

  • The NAO robot
  • Determining (NAO compatible) idle movements from literature and Experiment A
  • The participants
  • Setting up experiment B1
  • The script
  • The questionnaire
  • The results
  • Limitations

The NAO robot

The NAO robot (see figure 1) is an autonomous, humanoid robot developed by SoftBank Robotics. It is widely used in education, research, and healthcare, with a great example being the research of Elena Torta. In the paper from Torta[3], from which the idea of this project originates, the NAO robot is used as well. In one of her studies, the NAO robot is used with the elderly, which is a great example that includes our user group.

The NAO robot is small compared to a human being. With its approximately 57 centimeters, it is about one-third the height of a human. The robot has 25 degrees of freedom: eleven of them are positioned in the lower part, including the legs and pelvis, and fourteen of them are present in the upper part, including its trunk, arms, and head[18]. This high number of degrees of freedom makes it possible to make the robot look more lifelike, despite its size. The robot is packed with sensors, for example, cameras that can detect a face, position sensors, velocity sensors, acceleration sensors, motor torque sensors, and range sensors[19]. It can speak multiple languages, including Dutch. As will be discussed later, our participants were Dutch. Therefore, the experiment will be done in Dutch to get the best results, without a language barrier influencing the quality of the conversation.
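As an illustration of how the robot is driven, a minimal sketch using the NAOqi Python SDK for switching the robot to Dutch speech (the IP address is a placeholder, and Dutch must be installed on the robot):

<syntaxhighlight lang="python">
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # hypothetical address of the NAO on the local network
tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)

tts.setLanguage("Dutch")                  # switch the speech engine to Dutch
tts.say("Hoi, goed om je weer te zien.")  # opening line of the casual script
</syntaxhighlight>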

Determining (NAO compatible) idle movements from literature and Experiment A

Unfortunately, the NAO robot is not able to perform every motion a human can do, due to its limited number of links and joints, as already mentioned. The list of human idle movements from experiment A and the literature is:

  • Eye stretching
  • Mouth movement
  • Cough
  • Touching face
  • Touching hair
  • Looking around
  • Neck movement
  • Arm movement
  • Hand movement
  • Lean on something
  • Body stretching


All the human idle movements that are relevant for the research were implemented in Choregraphe 2.1.4[20], to make the robot perform them during the experiment. The idle movements were chosen based on experiment A: the best ones were selected, i.e. movements with a high frequency in the corresponding graph. On top of that, some movements were impossible, as the robot is not able to hold on to objects or to lean on something. The following table includes the motions and their usage for a casual or serious conversation and per action (listening or speaking). 'n.p.' stands for 'not possible' and means that the robot was not able to perform the idle movement.

[Table of NAO-compatible idle movements per conversation type and action: Graph idle.jpg]

A lot of human idle movements cannot be tested due to the limitations of the NAO robot, or were not relevant enough for the conversation with the NAO robot. It is important to see that the casual and serious sets differ by one idle movement. This is due to the difference in the seriousness of the conversation: during a serious conversation, people think by looking away. Furthermore, the touching of the face is just a form of comfort for the speaker and shows anthropomorphism.
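For illustration, a minimal sketch of one such NAO-compatible idle movement, a small nod, via the NAOqi Python SDK (the IP address, angles and timings are placeholders, not the values used in the actual Choregraphe files):

<syntaxhighlight lang="python">
from naoqi import ALProxy

motion = ALProxy("ALMotion", "192.168.1.10", 9559)
motion.setStiffnesses("Head", 1.0)  # stiffen the neck joints before moving

# Pitch the head down by ~0.2 rad and back up again (a brief nod).
motion.angleInterpolation("HeadPitch", [0.2, 0.0], [0.4, 0.8], True)
</syntaxhighlight>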

The participants

As explained in the chapter 'Users', there are a lot of users that can be taken into account. Due to limitations, it was decided to take one group of participants for experiment B. 18 participants took part in the experiment, ranging from 18 to 22 years old. To make sure that no problems would occur, the participants all filled in a consent form, giving their consent to let their data be used for this experiment and for this experiment only. As all of the participants were volunteers, no compensation was given to them. Furthermore, it was noted that all of their information could be deleted if they mentioned (within 24 hours) that they did not want to be a part of the experiment anymore.

Setting up experiment B1

The experiment will be done in the robotics lab at the TU/e in week 6. The experiment will be performed as follows:

  • 1. The participant needs to fill in a Godspeed questionnaire.
  • 2. The NAO robot will start a conversation, which includes the robot asking open questions. The participant has to give an elaborate answer to these questions.
  • 3. During the answer, an idle movement is performed by the NAO robot.
  • 4. After the answer is given, the NAO robot responds and asks another question.
  • 5. During the answer, the same idle movement is performed by the NAO robot.
  • 6. After the conversation, the participant has to fill in another Godspeed questionnaire.

At first, the plan was to pick the three most performed idle movements per category and test them all separately on the different conversations. However, having the same movement over and over again during a conversation seemed odd and unnatural: humans do not do this either, and with a robot it looks even weirder. To make the conversations more natural, the experiment was changed as follows. The most used idle movements per category (serious and casual) were picked for speaking and listening; some idle movements were not an option due to the overall limitations (see 'Determining (NAO compatible) idle movements from literature and Experiment A'). A Choregraphe file was then made for the serious movements and one for the casual movements. A total of six movements per category (three speaking and three listening) were placed randomly in the Choregraphe file to simulate human behavior (a minimal sketch of this shuffling idea follows below). Another file was made with no movements at all, so in total there are three movement files (serious movements, casual movements, and no movements).
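As an illustration of that random placement (the movement names are hypothetical labels; in the actual experiment, the shuffled order was fixed once inside each Choregraphe file):

<syntaxhighlight lang="python">
import random

# Hypothetical pool of implemented movements for one category (casual).
speaking  = ["nod", "arm_gesture", "head_turn"]
listening = ["nod", "look_away", "touch_face"]

# Shuffle the six movements once to avoid an unnatural, repetitive pattern.
sequence = speaking + listening
random.shuffle(sequence)
print(sequence)  # order in which the movement boxes are placed in the file
</syntaxhighlight>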

The goal of this research is to see whether idle movements make a difference in a conversation, and whether different idle movements in different types of conversations make a difference. For this, two different conversations were made for the new setup. Each conversation lasts about two minutes, with around eight questions asked by the robot, during which one of the three idle movement programs is run. Before and after the conversations, a questionnaire is filled in by the participant. The order is the same for everyone, and as every participant performs two conversations, there are a total of three different versions that a participant could receive:

  • Casual conversation with casual movements and serious conversation with serious movements
  • Casual conversation with no movements and serious conversation with casual movements
  • Casual conversation with serious movements and serious conversation with no movements

The participant will undergo two conversations with the NAO robot: first the casual conversation, in our case a chat about the weather and plans for the summer. The second conversation is a serious conversation, in our case a job interview. The three idle movement files from Choregraphe are linked to the conversations, giving six different conversations in total (as can be seen above): the serious conversation is combined with the serious idle movements, the casual movements, or no movements at all, and the casual conversation has the same three options. To have as much data as possible for every conversation, a total of 18 participants was used.

The setup therefore changed, as the setup listed above does not include the various movements and versions:

  • 1. The participant needs to fill in a Godspeed questionnaire.
  • 2. The NAO robot will start a casual conversation, which includes the robot asking open questions. The participant has to give an elaborate answer to these questions. The robot will perform a type of idle movement or none while listening (depending on which version the participant has).
  • 3. After the conversation, the participant has to fill in another Godspeed questionnaire.
  • 4. The participant has to perform another conversation with the NAO robot, but this time a serious conversation. The robot performs a set of idle movements or none (depending on which version the participant has).
  • 5. After the conversation, another Godspeed questionnaire has to be filled in.

With 18 participants, this means that every combination of conversation and movement set was performed six times, and 54 questionnaires were filled in.

The script (Dutch and English)

The two scripts for the casual and serious conversation can be seen below. The experiments were performed in Dutch; English translations are given in parentheses. Lines marked 'u' are the participant's (example) replies.


Casual conversation

Robot: Hoi, goed om je weer te zien. Hoe gaat het met je? (Hi, good to see you again. How are you?)

u: Het gaat lekker. (It is going well.)

Robot: Dat is goed om te horen. Alleen het weer is wat minder vandaag, helaas. (That's good to hear. Only the weather is a bit bad today, unfortunately.)

u: Ja, klopt. Ik heb zin in de zomer. (Yes, true. I'm looking forward to the summer.)

Robot: Inderdaad. Ik kan niet wachten om aan het strand te liggen. En jij? (Indeed. I can't wait to lie on the beach. And you?)

u: Ik ga liever barbecueën/terrassen. (I prefer to barbecue / go to a terrace.)

Robot: Dat lijkt me inderdaad ook erg leuk om te doen. (That indeed also seems like a lot of fun to do.)

u: Met vrienden samen iets doen is altijd gezellig. (Doing something with friends is always fun.)

Robot: Jazeker, ik ga met mijn maatjes een rondreis maken tijdens de vakantie. Heb jij nog plannen? (Yes, I'm going on a tour with my buddies during the holidays. Do you have any plans?)

u: Ik ga denk ik vaak zwemmen. (I think I will go swimming often.)

Robot: Wat leuk. Dan heb je iets om naar uit te kijken. (How nice. Then you have something to look forward to.)

u: Inderdaad, ik heb er veel zin in. (Indeed, I am looking forward to it.)

Robot: Ik moet er weer vandoor. Leuk je weer gesproken te hebben. (I have to run again. Nice to have talked to you again.)

u: Oke, tot de volgende keer! (Okay, until next time!)

Robot: Doei! (Bye!)


Job interview

Robot: Hallo, welkom bij dit sollicitatiegesprek. Wat zijn uw beste vaardigheden? (Hello, welcome to this job interview. What are your best skills?)

u: Ik ben goed in samenwerken. (I am good at collaborating.)

Robot: Dat is goed om te horen. Welke opleiding heeft u gedaan? (That's good to hear. What study did you do?)

u: (kies je eigen opleiding / choose your own education)

Robot: Dat komt goed uit. Zulke mensen kunnen we goed gebruiken in ons bedrijf. Bent u op zoek naar een volledige werkweek of een halve werkweek? (That is convenient. We can put such people to good use in our company. Are you looking for a full working week or a half working week?)

u: (kies uit een volledige werkweek of een halve werkweek / choose a full or half working week)

Robot: Oke, daarvoor hebben we nog een plek beschikbaar. Binnenkort kunt u een mail verwachten met het contractvoorstel. (Okay, we still have a place available for that. You can expect an email with the contract proposal soon.)

u: Daar kijk ik naar uit, bedankt. (I am looking forward to that, thank you.)

Robot: Tot ziens! (Goodbye!)

The questionnaire

To determine the user's perception of the NAO robot, the Godspeed questionnaire[6] will be used. The Godspeed questionnaire uses a 5-point scale and five categories: Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety. As noted in the questionnaire itself, there is an overlap between Anthropomorphism and Animacy. The link to the actual questionnaire can be found here (https://www.overleaf.com/7569336777bwmwdmhvmmbc). The different categories will be used to represent the importance of idle movements. Ideally, all of the categories would score better with idle movements, but that will be further discussed in the chapter 'The results' below. Despite its size, the questionnaire has its flaws and limitations. These are discussed in the chapter 'Limitations'.
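A minimal sketch of how such Likert answers can be reduced to per-category mean scores (the item counts and responses below are illustrative placeholders, not the real questionnaire data):

<syntaxhighlight lang="python">
# Reduce Godspeed answers (1-5 Likert items) to per-category mean scores.
godspeed = {
    "Anthropomorphism":       [3, 4, 2, 3, 4],
    "Animacy":                [4, 3, 3, 4, 3, 4],
    "Likeability":            [5, 4, 4, 5, 4],
    "Perceived Intelligence": [3, 3, 4, 4, 3],
    "Perceived Safety":       [4, 4, 3],
}

means = {cat: sum(v) / float(len(v)) for cat, v in godspeed.items()}
for cat, m in sorted(means.items()):
    print("%s: %.2f" % (cat, m))
</syntaxhighlight>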

The results

The data acquired from experiment B1 was analyzed using SPSS. Because the Godspeed questionnaire was used, the data came in the form of five-point Likert scale scores. The results were analyzed by taking the mean score of each of the five categories per conversation type. In total, seven different conditions were tested (including the baseline measurement), all explained in previous chapters. The following bar chart shows the mean score of every category per conversation type.

Figure 7: Bar chart for the mean scores of the godspeed questionnaire

To give a small explanation of the graph: the y-axis shows the mean scores on the five-point Likert scale, and the x-axis shows the five categories of the Godspeed questionnaire. These five categories are subdivided into different bars, each representing a conversation type (a sketch of this aggregation step follows the list below).

  • Before experiment: Before the experiment started, the participants filled in the Godspeed questionnaire to set a baseline to compare the results to.
  • Casual Casual: The casual conversation script mentioned previously, supplemented with the casual idle movements.
  • Serious Serious: The serious conversation script mentioned previously, supplemented with the serious idle movements.
  • Casual No: The casual conversation script mentioned previously, using no movements at all.
  • Serious Casual: The serious conversation script mentioned previously, supplemented with the casual idle movements.
  • Casual Serious: The casual conversation script mentioned previously, supplemented with the serious idle movements.
  • Serious No: The serious conversation script mentioned previously, using no movements at all.
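As a sketch of this analysis step (SPSS was used for the real analysis; the column names and scores below are hypothetical), the same per-condition category means could be computed with pandas:

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical long-format questionnaire data: one row per answered item.
df = pd.DataFrame({
    "condition": ["Casual Casual", "Casual Casual",
                  "Serious Serious", "Serious Serious"],
    "category":  ["Animacy", "Likeability", "Animacy", "Likeability"],
    "score":     [4, 5, 3, 4],   # 1-5 Likert responses (placeholders)
})

# Mean score per Godspeed category per condition,
# i.e. the numbers behind each bar in the chart.
means = df.groupby(["condition", "category"])["score"].mean().unstack()
print(means)
</syntaxhighlight>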

Chart analysis

Anthropomorphism

Contradicting expectations, the serious conversation with casual movements and the casual conversation with serious movements peak, with the serious conversation being the highest. They are followed by the serious conversation with serious movements and the casual conversation with casual movements (without taking no-movements into account). The only big difference is that the casual conversation with casual movements has a lower mean than the mean before the experiment. This would suggest that the movements for the casual and serious conversation should be swapped for a higher mean value for anthropomorphism. What is remarkable about this result is that the no-movement combinations are overall lower than every other bar except the casual conversation with casual movements. This is totally in contradiction with what was expected (as that combination was expected to be one of the largest). However, even though there is a difference in mean value between all the combinations, the difference is not significant.

Animacy

The bar chart for animacy has one big peak compared to the mean of 'before experiment': the serious conversation with casual movements, with an increase of about 0.5 points on the 1-5 scale. It is followed by the serious conversation with serious movements and the casual conversation with casual movements. All three show an increase compared to the mean before the experiment (though the increase is not immense for serious-serious and casual-casual). The opposite happened for the casual conversation with serious movements, the casual conversation with no movements and the serious conversation with no movements, as they decreased compared to the measurement before (scored relatively worse). The only really big increase and decrease are those of the serious conversation with casual movements and the serious conversation with no movements, respectively. The other four combinations do not change significantly.

Likeability

Here, only one combination shows an increase relative to the baseline mean value: the serious conversation with serious movements performs better than the baseline (by just a tiny bit). The other five bars score lower than the premeasured mean value, with the casual conversation with casual movements scoring second best, followed by the casual conversation with serious movements, the casual conversation with no movements and the serious conversation with casual movements. Again, the serious conversation with no movements scores the lowest. This corresponds with the expectation, as the two 'right' combinations are present in the top two bars. However, as before, the increases and decreases are so small that it is uncertain whether they can be trusted and whether a definite conclusion can be drawn from them.

Perceived Intelligence

The two bars that peak above the premeasured mean both belong to the serious conversation, with the casual movements peaking higher than the serious movements. The others decreased relative to the premeasured values. The casual conversation with casual movements is just a little smaller, followed by the casual conversation with serious movements. The two combinations with no movements are again the smallest. For all but the serious conversation with casual movements and the no-movement conversations, the increases and decreases are relatively small, and no direct conclusion can be drawn from them. On top of that, the result also contradicts the expectation.

Perceived Safety

The only increase relative to the premeasured mean is the serious conversation with serious movements. A small decrease in the mean is present for the casual conversation with casual movements. After the latter comes the serious conversation with casual movements, followed by three almost equal bars: the two no-movement combinations and the casual conversation with serious movements. This result corresponds with the expectation, and the difference is rather big; this result can thus be used to draw conclusions.

Conclusion

Overall, it can be said that no movements give worse results in all five categories than the conversations with movements. This is most likely because the robot looks really static and fake when not moving while talking, as can be seen from the fact that every no-movement bar is lower than the mean before the experiment. Especially the serious conversation without movements scores low compared to the others. It can be concluded that serious conversations are hugely impacted by the use of movements, more than casual conversations. As discussed later in this paragraph, the serious conversations score best overall when looking at these bar charts.

From all the conversations with movements, one combination peaks in three of the five graphs: the serious conversation with casual movements (orange bar). This combination peaks in anthropomorphism, animacy and perceived intelligence. On top of that, the combination that peaks in every bar chart consists of a serious conversation (with either serious or casual movements). This suggests that serious conversations are in general perceived better than casual conversations, raising the question of whether the robots might be destined for serious conversations only. Adding to that, the runner-up is usually the same type of conversation but with the other movement type. For example, if the best result is given by the serious conversation with casual movements, the second best is the serious conversation with serious movements (see animacy and perceived intelligence). The opposite combination occurs for likeability and perceived safety. Anthropomorphism's runner-up is the casual conversation with serious movements, but after that the serious conversation with serious movements comes up, suggesting almost the same.

However, as the differences between all the bars are not immense, this cannot be said for certain. What can be said, as a result of this experiment, is that no movements during listening are a no-go: the robot should always perform a movement during listening, and serious conversations seem to be the better type of conversation for the NAO robot to perform.

Limitations

The participants

In the ideal case, more participants would be used, so that the user groups would be better represented. This would mean that the data could show differences between different user groups. Unfortunately, with the resources, limitations, and duration of the project, this was not possible. On top of that, since the robot does not know what it is saying, the participants might still find the robot static.


The Godspeed questionnaire

The interpretation of the results of the Godspeed questionnaire[6] has limitations, as explained above. These are:

  • It is extremely difficult to determine the ground truth. As it is all relative, it is hard to find out what people really think about the robot.
  • Many factors influence the measurements (e.g. cultural background, prior experience with robots, etc.), which would create outliers. If there are too many outliers, the data would be unusable.

These limitations of the questionnaire mean that the results of the measurements cannot be used as absolute values, but rather to see which option of idling is better.

Experiment B2

Experiment B2 is divided into different parts. These are:

  • Setting up experiment B2
  • The participants
  • The questionnaire
  • Results
  • Chart analysis
  • Limitations

Setting up experiment B2

During experiment B1, conversations between a team member and the NAO robot were filmed (these can be seen in the questionnaires mentioned below). Two different conversations, serious and casual, were filmed with different idle movements:

  • Serious conversation with serious idle movements
  • Serious conversation with casual idle movements
  • Serious conversation with no idle movements


  • Casual conversation with serious idle movements
  • Casual conversation with casual idle movements
  • Casual conversation with no idle movements

Each participant first filled in a questionnaire after seeing a picture of a NAO robot, and then twice watched a conversation followed by a questionnaire. The six different conversations were divided between the participants, so that the online survey would not take too much time. This resulted in three questionnaires, which can be found below:

1: https://forms.gle/zx7HH38f8Tjqqwee6

2: https://forms.gle/9zhfTbCHRCNMgR1b9

3: https://forms.gle/ZjX54tzKW89LntgH9

where the first questionnaire has a casual conversation with the corresponding idle movements as well as a serious conversation with the corresponding idle movements.

The second questionnaire has a casual conversation without idle movements and a serious conversation with the idle movements intended for the casual conversation.

The third questionnaire has a casual conversation with the idle movements intended for a serious conversation and a serious conversation without idle movements.

The participants

A total of 24 participants took part in this online experiment. Their ages ranged from 17 to 60 years.

The questionnaire

The same questionnaire as in B1, the Godspeed questionnaire, was used.

The results

The data acquired from experiment B2 was analyzed using SPSS. Because the Godspeed questionnaire was used, the data again came in the form of five-point Likert scale scores, and the results were analyzed by taking the mean score of each of the five categories per conversation type, as in experiment B1. The following bar chart shows the mean score of every category per conversation type.

Figure 8: Bar chart for the mean scores of the godspeed questionnaire

The graph is read in the same way as figure 7: the y-axis shows the mean scores on the five-point Likert scale, the x-axis shows the five Godspeed categories, and the bars represent the same seven conditions as in experiment B1 (the 'before experiment' baseline plus the six combinations of conversation and movement set).

Chart analysis B2

Anthropomorphism

In line with expectations, the serious conversation with serious movements and the casual conversation with casual movements peak, with the serious conversation being the highest. They are followed by the serious conversation with casual movements. What is remarkable about this result is that the no-movement combinations are overall lower than every other bar except the casual conversation with serious movements. This contradicts what was expected. Furthermore, the means of all conditions lie higher than before the experiment.

Animacy

The bar chart for animacy has three peaks which lie quite close to each other: the casual conversation with casual movements, the serious conversation with serious movements and the serious conversation with casual movements. After that comes the casual conversation with no movements, and below that come the casual conversation with serious movements and the serious conversation with no movements. The means of all conditions are higher than the mean before the experiment.

Likeability

The peak is at the casual conversation with casual movements. The means of the casual conversation with serious movements and the serious conversation with no movements lie lower than the mean before the experiment. The means of the serious conversation with serious movements, the casual conversation with no movements and the serious conversation with casual movements are a bit higher than before the experiment. What stands out is that the casual conversation with no movements has a large mean score, which was not expected.

Perceived Intelligence

It can be noted that the mean of the serious conversation with serious movements is the highest, followed by the casual conversation with casual movements and then the serious conversation with casual movements. What stands out is that the means of all conditions are higher than the mean before the experiment, meaning that seeing the robot positively influenced its perceived intelligence.

Perceived Safety

The condition standing out is the casual conversation with serious movements: its mean is lower than before the experiment, a result that was not expected. The means of the other conditions are a bit higher than before the experiment, with the casual conversation with casual movements having the highest mean. What also stands out is that the casual conversation with no movements is the second highest. This may be because no movements give the viewer the idea that the robot is harmless, as it cannot make precise movements anyway.

Conclusion

Overall, it can be said that the right combination (casual conversation with casual movements or serious conversation with serious movements) gives the expected result: namely, that it is received better overall. The casual conversation with casual movements peaks twice over the five categories, and the serious conversation with serious movements peaks the remaining three times. It is remarkable that the range of the means is overall smaller for the casual conversation with casual movements than for the serious conversation with serious movements, meaning that the results of the latter can be trusted less than those of the casual conversation with casual movements.

Furthermore, it is remarkable that there is an increase in mean for all combinations in three categories (relative to the premeasured mean), with a fourth category having all but one combination higher than the premeasured mean, and the last one having two bars lower than the premeasured mean. This means that (regardless of the combination of conversation and movement) the view of the NAO robot is positively influenced by seeing the robot on video, although this is less certain for likeability and perceived safety.

Contrary to what was expected, the casual conversation with no movements has a high mean with an average range (in comparison), while the serious conversation with no movements is perceived as the overall worst. On top of that, the casual conversation with no movements scores better in all categories than the casual conversation with serious movements. For the opposite combination, the serious conversation with no movements scores worse than the serious conversation with casual movements, suggesting that the wrong type of movement strongly influences only the casual conversation.

Last but not least, there does not seem to be a large difference between the two types of conversation (serious and casual), meaning that over the internet, neither conversation type is perceived as better than the other. Since the increases between the different types of movements are significant, it can be said that using the right movements for each conversation contributes to how the robot is perceived on video.

Limitations

Conducting an online questionnaire always brings limitations to an experiment, especially when the participants are asked what they think of a social robot. Although the participants were not interacting with the robot, nor physically there to see it in person, we still wanted to conduct the online questionnaire, as it gives a better representation of the user base, which was one of the limitations of experiment B1.

Feedback loop on the users

In the first two experiments, the primary focus was to find out whether the robot's appearance would be more 'natural' with the right idle motions. This was determined without telling the participants what type of movements came with their conversation. This was a good way to try to understand the human perception of robots. Another way to have more interaction with the users was to simply ask them: 'Do you think these idle motions make the robot better?'

=== '''Primary users''' ===

First, the primary users were considered, since in elderly care the casual conversation is the most relevant: the elderly want to be comfortable and have fun with the robot. Several elderly people were shown the videos of the casual conversations, and the differences between the movements were explained to them. Their reaction was very positive. Something that stood out compared to the other user groups is that the elderly were simply very impressed by the robot; they tend to anthropomorphize a bit more. They explained how smart the robots were and liked all the functions of the robot. Nevertheless, they agreed with the idea of implementing casual motions rather than no movements or serious movements, reasoning that in the future the elderly will have more experience with robots and this initial amazement will fade.

=== '''Secondary users''' ===

The secondary users were asked the same questions as the primary users. The secondary users available for questioning on short notice and under the corona restrictions consisted of a general practitioner, a nurse, and a salesman. They reacted more soberly to the videos than the elderly did, and they better understood the reason for researching different idle motions. Most of the secondary users fall into the 'serious' category, because in a business setting or at the doctor's office, people do not know the other person very well. These users advised using serious movements when social robots are deployed in their professions in the future; it could make a difference.

=== '''Final takeaway''' ===

As described above, the reactions of the users were relatively positive, especially in the first group, and the explanation of the project made sense to all users. However, after seeing the video of the NAO robot and its limited degrees of freedom, some users were slightly less positive. They explained that with current robots the movements could look strange, and that the distinction between serious and casual movements would hardly be noticed; only the strange movements the robot made would stand out. In the future, with more human-like robots, they would use these idle motions when robots work in their industry, but for now the robot's motions are still too limited.

== '''Conclusion and Discussion''' ==

=== '''Conclusion''' ===

''Which idle movements are considered the most appropriate in human-human interaction?''

By using two different types of conversation (casual and serious) and a neutral position (waiting/in rest), it was determined what kind of idle movements are most appropriate in human-human interaction. The first indication of the idle movements came from a paper suggesting eleven movements that humans perform often[7]. By tallying the number of repetitions of each movement per category, it was found that (for listening) the same movements kept recurring, peaking higher than others. For the casual conversation, neck movement peaked highest, followed by arm movement, mouth movement, touching the face and hand movement. For the serious conversation, hand movement peaked highest, followed by looking around, arm movement, body stretching, touching the face and neck movement. The big difference between the two is that the frequency differences between the peaks were much larger for the serious conversation, meaning that the difference in idle movements is clearly visible between the two conversation types, as the serious conversation shows less variation over the eleven movement types. For waiting, the peaks were neck movement, looking around, and mouth movement. The waiting movements were not surprising and were not used further in the research.
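For illustration, the tallying step can be reproduced in a few lines of Python. The movement labels and the observation list below are hypothetical stand-ins for the annotated video data, not the actual tallies.

<syntaxhighlight lang="python">
from collections import Counter

# Hypothetical annotations: one label per observed idle movement in a video.
# The eleven movement types themselves come from the literature[7].
casual_observations = ["neck", "arm", "neck", "mouth", "touch_face",
                       "neck", "hand", "arm"]

def tally(observations):
    """Count how often each idle movement occurs in one category and
    sort from most to least frequent, so the 'peaks' are easy to read off."""
    return Counter(observations).most_common()

print(tally(casual_observations))
# [('neck', 3), ('arm', 2), ('mouth', 1), ('touch_face', 1), ('hand', 1)]
</syntaxhighlight>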


''Which idle movements are considered the most appropriate in human-robot interaction?''

In the following experiment, combination sets were created of the different types of conversation together with different sets of movements. As described in the previous paragraph, the two conversation types had different characteristic movements. The best four movements were chosen for the robot, with the additional constraint that the movements had to be feasible for the robot. This ruled out some movements, such as mouth movement and body stretching (the robot performed them too slowly, or collapsed when performing them faster). As a result (see table 1), neck, arm and hand movement were chosen for both conversations, complemented by touching the face for the casual conversation and looking around for the serious conversation.

This setup was followed by the execution of experiment B.1, which consisted of the aforementioned movements and a conversation. The conversation type differed and, thus, also needed a different topic: the casual conversation was about upcoming plans for the summer, and the serious conversation was a job interview. A movement was performed after each question or sentence of the NAO robot. Participants (18 in total) talked to the robot and had to fill in Godspeed questionnaires (three per participant), covering anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. This experiment was done in real life, while another experiment (B.2) was done via the internet: participants (24 in total) were shown a video accompanied by the same Godspeed questionnaires. The video was a recording of a conversation with one of our own group members and consisted of exactly the same combinations as experiment B.1.
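As an impression of how such a dialogue step can be scripted (the actual movements were built as Choregraphe behaviors), here is a minimal sketch using the NAOqi Python SDK; the IP address, the joint angles and the timing are illustrative assumptions.

<syntaxhighlight lang="python">
from naoqi import ALProxy

NAO_IP, NAO_PORT = "192.168.1.10", 9559  # assumed address of the robot

tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT)
motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)

def say_with_idle_movement(sentence):
    """Say one line of the conversation, then perform a small idle movement
    (here an illustrative neck movement: tilt the head down and back up)."""
    tts.say(sentence)
    motion.angleInterpolation("HeadPitch", [0.2, 0.0], [1.0, 2.0], True)

say_with_idle_movement("What are your plans for the summer?")
</syntaxhighlight>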

The two experiments produced completely different outcomes. Experiment B.1 showed the opposite of what was expected: the combination of a casual conversation with serious movements, and vice versa, was concluded to be the best in most categories of the Godspeed questionnaire. This would suggest that either experiment A is wrong, or that movements are perceived differently in human-robot interaction than in human-human interaction. On top of that, the serious conversation was perceived best in all categories when ignoring the type of movements performed, suggesting that in real life serious conversations score better on the five categories. The usage of movements did improve the result overall. On the other hand, the large range of the means combined with the small differences between the means suggests that the data cannot be fully trusted.
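The reliability argument above comes down to comparing, per combination, the mean score with the spread of the scores. A minimal sketch with hypothetical Godspeed scores (1-5 scale):

<syntaxhighlight lang="python">
import statistics

# Hypothetical Godspeed scores for one category, per combination.
scores = {
    "casual conversation + casual movements":   [4, 4, 5, 3, 4],
    "casual conversation + serious movements":  [3, 5, 2, 4, 3],
    "serious conversation + serious movements": [4, 3, 5, 5, 2],
}

for combination, values in scores.items():
    mean = statistics.mean(values)
    spread = max(values) - min(values)  # a large spread makes the mean less trustworthy
    print("%s: mean=%.2f, range=%d" % (combination, mean, spread))
</syntaxhighlight>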

Experiment B.2 did show the expected result: the casual conversation with casual movements peaking together with the serious conversation with serious movements, in nearly all of the categories. On top of that, the range of means is quite small for the casual conversation with casual movements, suggesting that this combination is consistently perceived as better. The serious conversation with serious movements has a large range but is often the highest peak. Moreover, the peaks of these two combinations are significantly higher than the others, suggesting that conclusions can be drawn from this data. On the downside, there is also a peak for the casual conversation with no movements, which seems off; this is addressed in the discussion.

Overall, the usage of movements positively stimulates the participants (and, thus, speakers). The best set of movements seems to depend on whether the speaker is directly in front of the robot or sees the robot on a screen. When people see the robot in real life, the best combinations would be the casual conversation with serious movements and the serious conversation with casual movements; when people see the robot on a screen, the best combinations would be the casual conversation with casual movements and the serious conversation with serious movements. However, since the differences between the means were small across the categories in experiment B.1, it would hardly matter if casual movements were paired with the casual conversation and serious movements with the serious conversation there as well, which would result in software that is easier to program.

=== '''Discussion''' ===

==== '''Experiment A''' ====

The most important result from experiment A is that people tend to use idle movements as a means to show acceptance of, or agreement with, what has been said. This matters for our goal of making robots more human-like: robots should use idle movements in conversations to show their understanding of what has been said. It can also be noted that people always move, from scratching their faces to moving their arms around. These results were implemented in experiment B. Unfortunately, the NAO robot has a limited range of motion: eight specific idle movements were chosen with the help of the literature research and experiment A, but the NAO robot was only able to perform six of the eight. In experiment A, it was also difficult to determine in which category an idle movement would fall. All of the videos used in experiment A were taken from YouTube. This could have affected the results, as people might unconsciously behave differently when they know they are being recorded, i.e. they might use fewer idle movements, or use their hands more to show social affection. To counter this, the experiment was carried out by two people. On top of that, a significance test was used to check whether the results could be coincidental (they were not).
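The wiki does not name the significance test used; one plausible check, sketched below with hypothetical tallies, is a chi-square test on the movement frequencies counted by the two observers, to see whether their tallies disagree more than chance would explain (this assumes SciPy is available).

<syntaxhighlight lang="python">
from scipy.stats import chi2_contingency

# Hypothetical tallies of the same videos by the two observers;
# one column per movement type (e.g. neck, arm, hand, looking around).
observer_a = [12, 8, 5, 3]
observer_b = [10, 9, 6, 4]

chi2, p, dof, expected = chi2_contingency([observer_a, observer_b])
print("chi2 = %.2f, p = %.3f" % (chi2, p))
# A large p-value means no significant disagreement between the observers.
</syntaxhighlight>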

==== '''Experiment B1''' ====

Conversations without movements clearly score lower than conversations with movements, whether the movements were intended for the casual or the serious conversation. This is likely because the robot is completely static, which makes it look lifeless, as expected. This tells us that idle movements are key to making a robot look human-like in a conversation. However, because not every movement was tested individually, the difference between the casual movements and the serious movements cannot be seen clearly in the results. Ideally, the idle movements would be tested individually to get a clear result on which idle movements work best, especially with robots that have a limited range of motion, like the NAO robot. Testing this thoroughly, however, requires many more participants. Experiment B1 would have benefited from more participants: the user groups would be better represented, especially if those participants worked in healthcare, and the data could then show differences between user groups, which would benefit the feedback loop for the primary users. Unfortunately, with the resources, limitations, and duration of the project, this was not possible.

==== '''Experiment B2''' ====

In contrast to experiment B1, the right combination of movements did show the expected results. This shows that the view of the NAO robot is positively influenced by seeing it in a video. However, the results of experiment B2 cannot be compared to a real-life conversation with a robot. Experiment B2 would also have benefited from more participants. Because this experiment was conducted online, participants who work in healthcare were asked to participate, in order to counteract the limitation of experiment B1; however, not enough primary users were reached to draw a conclusion from this data. To keep the results comparable with experiment B1, it was decided not to change the set-up for experiment B2. Although experiment B2 is an extension of experiment B1, its data cannot be used on its own, because the social interaction with the NAO robot is replaced by an online video.

=== '''Follow-up study''' ===

For studies that want to dig deeper into this subject, the following advice is given:

* Conduct more research into the idle movements of humans. As explained before, the idle movements that were found came from people who knew they were being filmed.
* Perform experiments B1 and B2 with a robot that is able to perform all the idle movements found in experiment A and the literature.
* Perform experiments B1 and B2 with a higher number of participants, so that the human-robot interaction of different user groups can be researched.

== '''State of the Art''' ==

The papers are assigned to different categories; these categories indicate the contribution each paper can make to our project.

=== '''Foundation''' ===

These papers lay the foundation of our project; our project builds on the work that has been done in them.

Research presented in this paper[3] addressed the introduction of a small humanoid robot in elderly people's homes, providing novel insights in the areas of robotic navigation, non-verbal cues in human-robot interaction, and the design and evaluation of socially assistive robots in smart environments. The results reported throughout the thesis lie in one or multiple of these areas, indicating the multidisciplinary nature of research in robotics and human-robot interaction. Topics like robotic navigation in the presence of a person, adding a particle filter to existing navigational algorithms, attracting a person's attention with a robot, and the design of a smart home environment are discussed. Our research adds to this line of work.

This paper[21] presents a method for analyzing human-robot interaction by body movements. Future intelligent robots will communicate with humans and perform physical and communicative tasks to participate in daily life. A human-like body will provide an abundance of non-verbal information and enable us to smoothly communicate with the robot. To achieve this, we have developed a humanoid robot that autonomously interacts with humans by speaking and making gestures. It is used as a testbed for studying embodied communication. Our strategy is to analyze human-robot interaction in terms of body movements using a motion capturing system, which allows us to measure the body movements in detail. We have performed experiments to compare the body movements with subjective impressions of the robot. The results reveal the importance of well-coordinated behaviors and suggest a new analytical approach to human-robot interaction. This paper lays the foundation for our project, so it is key to read it carefully!

The behavior of autonomous characters in virtual environments is usually described via a complex deterministic state machine or a behavior tree driven by the current state of the system. This is very useful when a high level of control over a character is required, but it arguably has a negative effect on the illusion of realism in the decision-making process of the character. This is particularly prominent in cases where the character only exhibits idle behavior, e.g. a student sitting in a classroom. In this article[22], we propose the use of discrete-time Markov chains as the model for defining realistic non-interactive behavior and describe how to compute decision probabilities normalized by the length of individual actions. Lastly, we argue that these allow for more precise calibration and adjustment of the idle behavior model than the models currently employed in practice.
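To make the idea concrete, the sketch below samples idle actions from a discrete-time Markov chain and applies the normalization by action length that the article describes; the states, durations and weights are invented for illustration.

<syntaxhighlight lang="python">
import random

# Invented idle actions with durations in seconds.
duration = {"sit_still": 4.0, "look_around": 2.0, "stretch": 3.0}

# Invented preference weights for the next action, given the current one.
weight = {
    "sit_still":   {"sit_still": 0.6, "look_around": 0.3, "stretch": 0.1},
    "look_around": {"sit_still": 0.7, "look_around": 0.2, "stretch": 0.1},
    "stretch":     {"sit_still": 0.8, "look_around": 0.2, "stretch": 0.0},
}

def next_action(state):
    """Sample the next idle action; dividing by the duration keeps long
    actions from dominating the observed behavior."""
    actions = list(weight[state])
    w = [weight[state][a] / duration[a] for a in actions]
    return random.choices(actions, weights=w, k=1)[0]

state = "sit_still"
for _ in range(5):
    state = next_action(state)
    print(state)
</syntaxhighlight>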

=== '''Foundation / Experiment''' ===

This paper[7] describes the design of idle motions with which service robots act in their standby state. Robots developed so far show proper reactions only when there is a certain input from humans or the environment. In order to find behavioral patterns in the standby state, the authors observed clerks at an information center via video ethnography and extracted idle motions from the video data. They analyzed the idle motions and applied the extracted characteristics to their robot. Moreover, using a robot that can show these series of expressions and actions, they investigate whether people can feel that the robot is alive.

=== '''Experiment''' ===

These papers will be useful when preparing or setting up our experiments.

The design of complex dynamic motions for humanoid robots is achievable only through the use of robot kinematics. In this paper[23], we study the problems of forward and inverse kinematics for the Aldebaran NAO humanoid robot and present a complete, exact, analytical solution to both problems, including a software library implementation for real-time onboard execution. The forward kinematics allow NAO developers to map any configuration of the robot from its own joint space to the three-dimensional physical space, whereas the inverse kinematics provide closed-form solutions to finding joint configurations that drive the end effectors of the robot to desired target positions in the three-dimensional physical space. The proposed solution was made feasible through decomposition into five independent problems (head, two arms, two legs), the use of the Denavit-Hartenberg method, the analytical solution of a non-linear system of equations, and the exploitation of body and joint symmetries. The main advantage of the proposed inverse kinematics solution compared to existing approaches is its accuracy, its efficiency, and the elimination of singularities. In addition, we suggest a generic guideline for solving the inverse kinematics problem for other humanoid robots. The implemented, freely-available, NAO kinematics library, which additionally offers center-of-mass calculations and Jacobian inverse kinematics, is demonstrated in three motion design tasks: basic center-of-mass balancing, pointing to a moving ball, and human-guided balancing on two legs.
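For reference, the Denavit-Hartenberg method mentioned here builds the forward kinematics by chaining one homogeneous transform per joint. A generic sketch follows; the parameters are placeholders, not the NAO-specific values derived in the paper.

<syntaxhighlight lang="python">
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link in the classic DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Forward kinematics: multiply the per-joint transforms in order.
# Placeholder DH parameters for a two-joint chain.
T = dh_transform(0.3, 0.0, 0.10, 0.0) @ dh_transform(-0.5, 0.0, 0.12, np.pi / 2)
print(T[:3, 3])  # position of the end effector in the base frame
</syntaxhighlight>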

The ability to realize human-motion imitation using robots is closely related to developments in the field of artificial intelligence. However, it is not easy to imitate human motions entirely, owing to the physical differences between the human body and robots. In this paper[24], we propose a work-chain-based inverse kinematics method to enable a robot to imitate the human motion of upper limbs in real time. Two work chains are built on each arm to ensure motion similarity, such as the end effector trajectory and the joint-angle configuration. In addition, a two-phase filter is used to remove interference and noise, together with a self-collision avoidance scheme to maintain the stability of the robot during the imitation. Experimental results verify the effectiveness of our solution on the humanoid robot Nao-H25 in terms of accuracy and real-time performance.

With the development of motion editing and synthesis, content-based human motion retrieval has become important for animators, who need to search for similar motions in large databases. Obtaining a text-based representation from the quantization of mocap data turned out to be efficient and has become a fundamental step in much research in human motion analysis. Geometric features are one of these techniques, which involve much prior knowledge and reduce the redundancy of the numerical data. This paper[25] uses geometric features as a basic unit to define human motions (also called mo-words) and views a human motion as a generative process. Topic motions, which carry more semantic information, are then obtained using latent Dirichlet allocation by learning from massive training examples, in order to understand motions better. The probabilistic model is combined with human motion retrieval, resulting in a new representation of human motions and a new retrieval framework. The experiments demonstrate its advantages, both for understanding motions and for retrieval.

Human motion analysis is receiving increasing attention from computer vision researchers. This interest is motivated by a wide spectrum of applications, such as athletic performance analysis, surveillance, man-machine interfaces, content-based image storage and retrieval, and video conferencing. This paper[26] gives an overview of the various tasks involved in motion analysis of the human body. We focus on three major areas related to interpreting human motion: motion analysis involving human body parts, tracking a moving human from a single view or multiple camera perspectives, and recognizing human activities from image sequences. Motion analysis of human body parts involves the low-level segmentation of the human body into segments connected by joints and recovers the 3D structure of the human body using its 2D projections over a sequence of images. Tracking human motion from a single view or multiple perspectives focuses on higher-level processing, in which moving humans are tracked without identifying their body parts. After successfully matching the moving human image from one frame to another in an image sequence, understanding the human movements or activities comes naturally, which leads to our discussion of recognizing human activities.

=== '''Experiment / Extend''' ===

The motion of a humanoid robot is one of the most intuitive communication channels for human-robot interaction. Previous studies have presented related knowledge to generate speech-based motions of virtual agents on screens. However, physical humanoid robots share time and space with people, and thus, numerous speechless situations occur where the robot cannot be hidden from users. Therefore, we must understand the appropriate roles of motion design for a humanoid robot in many different situations. We achieved the target knowledge as motion-design guidelines based on the iterative findings of design case studies and a literature review. The guidelines are largely separated into two main roles for speech-based and speechless situations, and the latter can be further subdivided into idling, observing, listening, expecting, and mood-setting, all of which are well-distributed by different levels of intention. A series of experiments proved that our guidelines help create preferable motion designs of a humanoid robot. This study[27] provides researchers with a balanced perspective between speech-based and speechless situations, and thus they can design the motions of a humanoid robot to satisfy users in more acceptable and pleasurable ways.

=== '''Extend''' ===

These papers include work that is related to our project but does not overlap.

The main objective of this research project[28] was to study human perceptual aspects of hazardous robotics workstations. Two laboratory experiments were designed to investigate workers' perceptions of two industrial robots with different physical configurations and performance capabilities. The second experiment is the useful one for our research: it investigated the minimum robot idle time (inactivity) perceived by industrial workers as system malfunction and as an indication of the 'safe-to-approach' condition. It was found that idle times of 41 s and 28 s or less for the small and large robots, respectively, were perceived by workers as the result of a system malfunction. About 20% of the workers waited only 10 s or less before deciding that the robot had stopped because of a system malfunction. The idle times were affected by the subjects' prior exposure to a simulated robot accident. Further interpretations of the results and suggestions for the operational limitations of robot systems are discussed. This research can be used to further convince people that idle motions not only make people feel comfortable near robots, but also serve as a safety measure.

A framework and methodology to realize robot-to-human behavioral expression is proposed in this paper[29]. Human-robot symbiosis requires enhancing nonverbal communication between humans and robots. The proposed methodology is based on movement analysis theories of dance psychology researchers. Two experiments on robot-to-human behavioral expression are also presented to support the methodology: one produces familiarity through a robot-to-human tactile reaction, the other expresses a robot's emotions through its dances. This methodology will be key to realizing robots that work closely and cooperatively with humans, and it could also couple idle movements to emotions, for example when a robot wants to express that it is nervous.

Research has also been done on the capability of a humanoid robot to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving it in various ways.[30] Inertial sensors inside the robot can capture how its body is moved when people perform such "full-body gestures". One conclusion of this research was that the recognition accuracy for full-body gestures was rather high (77%) and that a progressive reward strategy for responses was much more successful: a progressive response strategy increases perceived variety; persisting suggestions increase understanding, perceived variety, and enjoyment; and users find enjoyment in playing with a robot in various ways. This led to the conclusion that motions should be meaningful, responses rewarding, suggestions inspiring and instructions fulfilling, which in turn increases understanding, perceived variety, and enjoyment. It suggests that a robot should make motions that are meaningful to humans, which is what we are trying to achieve with idle movements.

Another paper presents a study that investigates human-robot interpersonal distances and the influence of posture, either sitting or standing, on these distances.[31] The study is based on a human approaching a robot and a robot approaching a human, in which the human/robot maintains either a sitting or standing posture while being approached. The results revealed that humans allow shorter interpersonal distances when a robot is sitting or in a more passive position, and leave more space when being approached while standing. The paper suggests that future work will investigate more non-verbal behaviors between robots and humans and their effect, combined with the robot's posture, on interpersonal distances. Idle movements might positively influence feelings of safety, which would result in shorter interpersonal distances. More importantly, this research suggests that idle movements might have different effects on people in different postures, which has to be taken into account when testing the theory suggested in this research.

This paper[32] presents the combined results of two studies that investigated how a robot should best approach and place itself relative to a seated human subject. Two live Human Robot Interaction (HRI) trials were performed involving a robot fetching an object that the human had requested, using different approach directions. Results of the trials indicated that most subjects disliked a frontal approach, except for a small minority of females, and most subjects preferred to be approached from either the left or right side, with a small overall preference for a right approach by the robot. Handedness and occupation were not related to these preferences. We discuss the results of the user studies in the context of developing a path planning system for a mobile robot.

The paper[33] reviews the literature on human responses to healthcare robots and summarizes the variables that have been found to influence those responses. It may be possible to increase the acceptance of healthcare robots by properly assessing the needs of the human user and then matching the robot's role, appearance and behavior to these needs. Because robots have limitations in their abilities, another way to increase acceptance may be to modify the expectations of users to better match robots' abilities. However, more research needs to be done in this field, as the potential users' needs and expectations in specific situations still have to be identified, along with the question of how to increase the acceptance match between robots and humans.

=== '''Comparable''' ===

Work in these papers is comparable to our project.

In this paper[34], the effect of a robot's subconscious gestures during idle moments on the anthropomorphic perceptions of five-year-old children was explored. The authors developed and sorted a set of adaptor motions based on their intensity and designed an experiment involving 20 children, in which the children played a memory game with two robots. During moments of idleness, the first robot showed adaptor movements, while the second robot moved its head following basic face tracking. Results showed that the children perceived the robot displaying adaptor movements as more human and friendly, and these traits were found to be proportional to the intensity of the adaptor movements. For the range of intensities tested, the adaptor movements were also not disruptive to the task. These findings corroborate that adaptor movements improve the affective aspect of child-robot interactions without interfering with the child's performance in the task, making them suitable for child-robot interaction (CRI) in educational contexts. This research focuses only on children, but the concept of the research is the same as ours.

Research has already been done into the subtle movements of the head while waiting for a reaction from the environment.[35] The main task of this research was to describe the subtle head movements a virtual person makes while waiting for a reaction from the environment. This technique can increase the level of realism in human-computer interactions. Even though it was tested on a virtual avatar, the head movements will be largely the same for a humanoid robot such as the NAO robot. These head movements might be of importance for the idle movements of the NAO robot; therefore it is important to use this paper, both for its complications and for its accomplishments.

Furthermore, research has been done into the emotions of robots.[36] To be exact, poses were used to convey emotions: five poses were created for the emotions sadness, anger, neutral, pride and happiness. Not all poses were recognized equally well, and the velocity of a movement also influences the interpretation of the pose. This has a lot in common with idle movements, which have to positively influence the emotions of the user (by giving the robot a more life-like appearance and, thus, a safe feeling for the user). Keeping this research in mind, the velocity of a movement has to be set correctly, as it influences the interpretation of the movement (aggressive or calm) and, if set wrong, can give the user an insecure feeling.

In the field of starting a conversation, research has also been done.[37] This research looks into a model for approaching people while they are walking. It concluded that the proposed model (which used efficient and polite approach behavior with three phases: finding an interaction target, interacting at a public distance, and initiating the conversation at a social distance) was much more successful than a simplistic approach (proposed: 33 out of 59 successes; simplistic: 20 out of 57 successes). Moreover, during the field trial it was observed that people enjoyed receiving information from the robot, suggesting the usefulness of a proactive approach in initiating services from a robot. This positive feedback is also wanted for the use of idle movements, even though idle movements might not be as conspicuous as the conversation approach in this research. It is important to note that in this research an efficient and polite approach was both more successful and more efficient.

This article[38] outlines a reference architecture for social head gaze generation in social robots. The architecture discussed here is grounded in human communication, based on behavioral robotics theory, and captures the commonalities, essence, and experience of 32 previous social robotics implementations of social head gaze. No such architecture currently exists, but such an architecture is needed to serve as a template for creating or re-engineering systems, provide analyses and understanding of different systems, and provide a common lexicon and taxonomy that facilitates communication across various communities. The resulting reference architecture guides the implementation of a social head gaze in a rescue robot for the purpose of victim management in urban search and rescue. Using the proposed reference architecture will benefit social robotics because it will simplify the principled implementations of head gaze generation and allow for comparisons between such implementations. Gazing can be seen as a part of idle movements.

This paper[39] focuses on the establishment of a social event, which in this case study refers to a social android robot simulating interaction with human users. Within an open-field study setting, the authors observed a shift in interactive behavior, presence setting and agency ascription by the human users toward the robot, operating in parallel with the robot's different modulations of activity. The activity modes of the robot range from a simple idling state, to a reactive face-tracking condition, and finally to an interactive teleoperated condition. By relating the three types of activity modes to three types of (increasingly social) presence settings, ranging from co-location to co-presence and finally to social presence, modulations were observed in the human users' interactive behavior as well as in the agency they ascribed to the robot. The observed behavioral shift in the human users toward the robot leads to the assumption that a fortification of agency ascription toward the robot goes hand in hand with the increase in its social activity modes as well as its display of social presence features.

This research[40] details the application of non-verbal communication display behaviors to an autonomous humanoid robot, including the use of proxemics, which to date has been seldom explored in the field of human-robot interaction. In order to allow the robot to communicate information non-verbally while simultaneously fulfilling its existing instrumental behavior, a "behavioral overlay" model that encodes this data onto the robot's pre-existing motor expression is developed and presented. The state of the robot's system of internal emotions and motivational drives is used as the principal data source for non-verbal expression, but in order for the robot to display this information in a natural and nuanced fashion, an additional para-emotional framework has been developed to support the individuality of the robot's interpersonal relationships with humans and of the robot itself. An implementation on the Sony QRIO is described, which overlays QRIO's existing EGO architecture and situated schema-based behaviors with a mechanism for communicating this framework through modalities that encompass posture, gesture and the management of interpersonal distance.

=== '''Support''' ===

These papers can verify or support our work.

In this study[41], a simple joint task was used to expose our participants to different levels of social verification. Low social verification was portrayed using idle motions and high social verification was portrayed using meaningful motions. Our results indicate that social responses increase with the level of social verification, in line with the threshold model of social influence. This paper verifies why our research on idle motions is needed in the first place and states that, in order to have highly social human-robot interaction, idle motions are a necessity.

Some robots lack the possibility to produce humanlike nonverbal behavior (HNB). Using robot-specific nonverbal behavior (RNB) such as different eye colors to convey emotional meaning might be a fruitful mechanism to enhance HRI experiences, but it is unclear whether RNB is as effective as HNB. This paper[24] presents a review on affective nonverbal behaviors in robots and an experimental study. We experimentally tested the influence of HNB and RNB (colored LEDs) on users' perception of the robot (e.g. likeability, animacy), their emotional experience, and self-disclosure. In a between-subjects design, users interacted with either a robot displaying no nonverbal behavior, a robot displaying affective RNB, a robot displaying affective HNB or a robot displaying affective HNB and RNB. Results show that HNB, but not RNB, has a significant effect on the perceived animacy of the robot, participants' emotional state, and self-disclosure. However, RNB still slightly influenced participants' perception, emotion, and behavior: planned contrasts revealed that having any type of nonverbal behavior significantly increased perceived animacy, positive affect, and self-disclosure. Moreover, observed linear trends indicate that the effects increased with the addition of nonverbal behaviors (control < RNB < HNB). In combination, our results suggest that HNB is more effective in transporting the robot's communicative message than RNB.

== '''Presentation and Choregraphe file for the idle movements''' ==

The presentation and the Choregraphe file for the idle movements can be found via this Google Drive link: https://drive.google.com/open?id=1tl8hjBziPE5PN1l10R9BYrB9WI-FaVUk

== '''Peer review''' ==

We think the project went well and every member put the same amount of effort into the project/research. We do not want any differences in grades for this project between the group members.

== '''Planning''' ==

{| class="wikitable"
! Week
! Start date
! To Do & Milestones
! Responsible team members
|-
| 1
| 3 Feb.
| Determining the subject<br/>Milestones:<br/>1. Choose subject;<br/>2. Make the objectives, approach and task division;<br/>3. Describe the users and the state-of-the-art;<br/>4. Contact the person responsible for the NAO robot.
| 1. Everyone;<br/>2. Max and Sebastiaan;<br/>3. Daan, Quinten and Stijn;<br/>4. Max.
|-
| 2
| 10 Feb.
| Setting up experiments<br/>Milestones:<br/>1. Update wiki with feedback;<br/>2. Work out experiment A and experiment B based on the literature study;<br/>3. Set up testing facility;<br/>4. Approach test persons.
| 1. Sebastiaan;<br/>2. Max and Daan;<br/>3. Stijn and Sebastiaan;<br/>4. Quinten.
|-
| 3
| 17 Feb.
| Doing experiment A<br/>Milestones:<br/>1. Update wiki with feedback;<br/>2. Do experiment A.
| 1. Stijn;<br/>2. Everyone.
|-
|
| 24 Feb.
| Break: buffer for unfinished work in weeks 1-3
|
|-
| 4
| 2 March
| Process experiment A and start preparing for the NAO robot<br/>Milestones:<br/>1. Update wiki with feedback;<br/>2. Process the data from experiment A;<br/>3. Further work out experiment B;<br/>4. Start preparing to work with the NAO robot (software etc.).
| 1. Quinten;<br/>2. Stijn;<br/>3. Daan and Max;<br/>4. Sebastiaan.
|-
| 5
| 9 March
| Finalize preparation of experiment B and test the NAO robot<br/>Milestones:<br/>1. Update wiki with feedback;<br/>2. Receive the NAO robot;<br/>3. Test the NAO robot and make sure it can perform the required movements;<br/>4. Finalize experiment B.
| 1. Max;<br/>2. Everyone;<br/>3. Sebastiaan and Max;<br/>4. Quinten and Stijn.
|-
| 6
| 16 March
| Perform experiment B and process the results<br/>Milestones:<br/>1. Update wiki with feedback;<br/>2. Perform experiment B;<br/>3. Process results from experiment B.
| 1. Daan;<br/>2. Sebastiaan, Quinten and Max;<br/>3. Stijn and Daan.
|-
| 7
| 23 March
| Evaluate experiments, draw conclusions and work on the wiki<br/>Milestones:<br/>1. Update wiki with feedback;<br/>2. Conclusion and evaluation of experiment A;<br/>3. Conclusion and evaluation of experiment B;<br/>4. Determine follow-up study;<br/>5. Work on the wiki.
| 1. Sebastiaan;<br/>2. Quinten and Max;<br/>3. Stijn and Daan;<br/>4. Sebastiaan;<br/>5. Everyone.
|-
| 8
| 30 March
| Finalize wiki and final presentation<br/>Milestones:<br/>1. Finalize wiki;<br/>2. Prepare final presentation;<br/>3. Peer review.
| 1. Sebastiaan, Quinten, Daan and Max;<br/>2. Stijn;<br/>3. Everyone.
|}

== '''Time spent''' ==

Below is a link to an Overleaf file that shows the time each member spent per week: https://www.overleaf.com/2323989585tvychvjqthwr

== '''References''' ==

  1. Yamauchi, M., Imanaka, K., Nakayama, M., & Nishizawa, S. (2004). Lateral difference and interhemispheric transfer on arm-positioning movement between right and left-handers. Perceptual and motor skills, 98(3_suppl), 1199-1209.
  2. Homes, S., Nugent, B. C., & Augusto, J. C. (2006). Human-robot user studies in eldercare: Lessons learned. Smart homes and beyond: Icost, 2006, 4th.
  3. Torta, E. (2014). Approaching independent living with robots. https://pure.tue.nl/ws/portalfiles/portal/3924729/766648.pdf
  4. Rantanen, T., Lehto, P., Vuorinen, P., & Coco, K. (2018). The adoption of care robots in home care—A survey on the attitudes of Finnish home care personnel. Journal of clinical nursing, 27(9-10), 1846-1859.
  5. Tapus, A. (2013). The Grand Challenges in Socially Assistive Robotics. https://hal.archives-ouvertes.fr/hal-00770113/document
  6. Bartneck, C., Kulić, D., Croft, E. and Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1(1), 71–81.
  7. Song, H., Min Joong, K., Jeong, S.-H., Hyen-Jeong, S., Dong-Soo K.: (2009). Design of Idle motions for service robot via video ethnography. In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), pp. 195–99. https://ieeexplore.ieee.org/abstract/document/5326062
  8. Andrist, S., Tan, X. Z., Gleicher, M., & Mutlu, B. (2014, March). Conversational gaze aversion for humanlike robots. In 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 25-32). IEEE.
  9. Magical and Millennial Episode on Friends Like Us Podcast, Marina Franklin, https://www.youtube.com/watch?v=Vw9M4bVZCSk
  10. Een gesprek tussen Johnny en Jesse, GroenLinks, https://www.youtube.com/watch?v=v6dk6eI1qFQ
  11. Joaquin Phoenix and Jimmy Fallon Trade Places, The Tonight Show Starring Jimmy Fallon, https://www.youtube.com/watch?v=_plgHxLyCt4
  12. Doing nothing for 8 hours straight, Rens Cools, https://www.youtube.com/watch?v=sqD4f5MlPTc&t=3061s
  13. Sitting in a Chair for One Hour Doing Nothing, Robin Sealark, https://www.youtube.com/watch?v=jEjBz9FwUpE&t=1221sk
  14. Sitting in a Chair for One Hour Doing Nothing, Povilas Ramanauskas, https://www.youtube.com/watch?v=90lI-mU2o-I
  15. Job Interview Good Example copy (2016), Katherine Johnson, https://www.youtube.com/watch?v=OVAMb6Kui6A
  16. How motivation can fix public systems | Abhishek Gopalka (2020), TED, https://www.youtube.com/watch?v=IGJt7QmtUOk
  17. Climate Change Debate | Kriti Joshi | Opposition, OxfordUnion, https://www.youtube.com/watch?v=Lq0iua0r0KQ
  18. N.A. How many degrees of freedom does my robot has? SoftBank Robotics. Retrieved from: https://community.ald.softbankrobotics.com/en/resources/faq/using-robot/how-many-degrees-freedom-does-my-robot-has
  19. NAO - Technical overview. http://doc.aldebaran.com/2-1/family/robots/index_robots.html
  20. Choregraphe software. https://community.ald.softbankrobotics.com/en/resources/software/language/en-gb
  21. Kanda, T., Ishiguro, H., Imai, M., and Ono, T. (2003). Body Movement Analysis of Human-Robot Interaction http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.3393&rep=rep1&type=pdf
  22. Streck, A., Wolbers, T.: (2018). Using Discrete Time Markov Chains for Control of Idle Character Animation 17(8). https://ieeexplore.ieee.org/document/8490450
  23. Kofinas, N., Orfanoudakis, E., & Lagoudakis, M. G. (2015). Complete analytical forward and inverse kinematics for the NAO humanoid robot. Journal of Intelligent & Robotic Systems, 77(2), 251-264.
  24. Rosenthal-von der Pütten, A. M., Krämer, N. C., & Herrmann, J. (2018). The effects of humanlike and robot-specific affective nonverbal behavior on perception, emotion, and behavior. International Journal of Social Robotics, 10(5), 569-582. https://onlinelibrary.wiley.com/doi/epdf/10.4218/etrij.2018-0057
  25. Zhu, M., Sun, H., Lan, R., Li, B.: (2011). Human motion retrieval using topic model, 4(10), pp. 469-476. https://onlinelibrary.wiley.com/doi/abs/10.1002/cav.432
  26. Aggarwal, J., Cai, Q.: (1999). Human Motion Analysis: A Review, 1(3), pp. 428-440. https://www.sciencedirect.com/science/article/pii/S1077314298907445
  27. Jung, J., Kanda, T., & Kim, M. S. (2013). Guidelines for contextual motion design of a humanoid robot. International Journal of Social Robotics, 5(2), 153-169. https://link.springer.com/article/10.1007/s12369-012-0175-6
  28. Karwowski, W. (2007). Worker selection of safe speed and idle condition in simulated monitoring of two industrial robots https://www.tandfonline.com/doi/abs/10.1080/00140139108967335
  29. Nakata, T., Sato, T. and Mori, T. (1998). Expression of Emotion and Intention by Robot Body Movement https://pdfs.semanticscholar.org/9921/b7f11e200ecac35e4f59540b8cf678059fcc.pdf
  30. Cooney, M., Kanda, T., Alissandrakis, A., & Ishiguro, H. (2014). Designing enjoyable motion-based play interactions with a small humanoid robot. International Journal of Social Robotics, 6(2), 173-193. https://link.springer.com/article/10.1007/s12369-013-0212-0
  31. Obaid, M., Sandoval, E. B., Złotowski, J., Moltchanova, E., Basedow, C. A., & Bartneck, C. (2016, August). Stop! That is close enough. How body postures influence human-robot proximity. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 354-361). IEEE. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7745155
  32. Dautenhahn, K., Walters, M., Woods, S., Koay, K. L., Nehaniv, C. L., Sisbot, A., Alami, R. and Siméon, T. (2006). How may I serve you?: a robot companion approaching a seated person in a helping context. https://hal.laas.fr/hal-01979221/document
  33. Broadbent, E., Stafford, R. and MacDonald, B. (2009). Acceptance of healthcare robots for the older population: review and future directions.https://link.springer.com/article/10.1007/s12369-009-0030-6
  34. Asselborn, T., Johal, W. and Dillenbourg, P. (2017). Keep on moving! Exploring anthropomorphic effects of motion during idle moments https://www.researchgate.net/publication/321813854_Keep_on_moving_Exploring_anthropomorphic_effects_of_motion_during_idle_moments
  35. Kocoń, M., & Emirsajłow, Z. (2012). Modeling the idle movements of the human head in three-dimensional virtual environments. Pomiary Automatyka Kontrola, 58(12), 1121-1123. https://www.infona.pl/resource/bwmeta1.element.baztech-article-BSW4-0125-0022
  36. Beck, A., Hiolle, A., & Canamero, L. (2013). Using Perlin noise to generate emotional expressions in a robot. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 35, No. 35). https://escholarship.org/content/qt4qv84958/qt4qv84958.pdf
  37. Satake, S., Kanda, T., Glas, D. F., Imai, M., Ishiguro, H., & Hagita, N. (2009, March). How to approach humans? Strategies for social robots to initiate interaction. In Proceedings of the 4th ACM/IEEE international conference on Human-robot interaction (pp. 109-116). https://dl.acm.org/doi/pdf/10.1145/1514095.1514117
  38. Srinivasan, V., Murphy, R. R., & Bethel, C. L. (2015). A reference architecture for social head gaze generation in social robotics. International Journal of Social Robotics, 7(5), 601-616. https://link.springer.com/article/10.1007/s12369-015-0315-x
  39. Straub, I. (2016). ‘It looks like a human!’The interrelation of social presence, interaction and agency ascription: a case study about the effects of an android robot on social agency ascription. AI & society, 31(4), 553-571. https://link.springer.com/article/10.1007/s00146-015-0632-5
  40. Brooks, A. G. and Arkin, R. C. (2006). Behavioral overlays for non-verbal communication expression on a humanoid robot. https://smartech.gatech.edu/bitstream/handle/1853/20540/BrooksArkinAURO2006.pdf?sequence=1&isAllowed=y
  41. Cuijpers, R. H., Knops, M. A. H. (2015). Motions of Robots Matter! The Social Effects of Idle and Meaningful Motions https://www.researchgate.net/publication/281841000_Motions_of_Robots_Matter_The_Social_Effects_of_Idle_and_Meaningful_Motions