Research on Idle Movements for Robots
Abstract
Group Members
Name | Study | Student ID |
---|---|---|
Stijn Eeltink | Mechanical Engineering | 1004290 |
Sebastiaan Beers | Mechanical Engineering | 1257692 |
Quinten Bisschop | Mechanical Engineering | 1257919 |
Daan van der Velden | Mechanical Engineering | 1322818 |
Max Cornielje | Mechanical Engineering | 1381989 |
Planning V1
Division of work
The research is performed in a total of eight weeks. In those weeks two experiments are done:
1. Experiment A: .....;
2. Experiment B:.......
Week | Start date | To Do & Milestones | Responsible team members |
---|---|---|---|
1 | 3 Feb. | Determining the subject. Milestones: | 1. Everyone |
2 | 10 Feb. | Setting up experiments. Milestones: | 1. Sebastiaan |
3 | 17 Feb. | Doing experiment A. Milestones: | 1. Stijn |
 | 24 Feb. | Break: buffer for unfinished work in weeks 1-3 | |
4 | 2 March | Process experiment A and start preparing for the NAO robot. Milestones: | 1. Quinten |
5 | 9 March | Finalize preparing experiment B and testing the NAO robot. Milestones: | 1. Max |
6 | 16 March | Performing experiment B and process results. Milestones: | 1. Daan |
7 | 23 March | Evaluate experiments, draw conclusions and work on wiki. Milestones: | 1. Sebastiaan |
8 | 30 March | Finalize wiki and final presentation. Milestones: | 1. Sebastiaan, Quinten, Daan and Max |
Introduction
Problem statement
In an ideal world, and eventually in the future, robots will interact with humans in a socially intelligent way: they will demonstrate human-like social intelligence, and non-experts will no longer be able to distinguish robots from human agents. To accomplish this, robots need to develop a lot further. Not only does the social intelligence of robots need to increase, but so does the way they move. Nowadays, robots do not move the way humans do. For instance, when moving an arm to grab something, humans tend to overshoot a bit[Source?], whereas a robot specifies the target and moves along the shortest path to the object. Humans take the path of least resistance, which means they also use their surroundings to reach their target, for instance by leaning on a table to cancel out the force of gravity. Humans also use their joints more than robots do. Another big problem in making a robot's motion look human is idle movement. For humans, and every other living creature in the world, it is physically impossible to stand perfectly still. Robots, however, stand completely lifeless when not in action. This creates a problem for the interaction between the robot and the corresponding person: it is unclear whether the robot is turned on and can respond to the human, and it also feels unnatural[source?]. Moreover, humans are usually doing or holding something while having a conversation. To improve the interaction between humans and robots, we have to look at human idle movements: which idle movements are most beneficial for human-robot interaction, and which idle movements are realistic and manageable for a robot to perform without looking too strange. In this research, we examine all of this by observing human idle movements, testing robot idle movements, and researching which movements participants prefer.
Objectives
It is still very hard to make a robot appear natural and human-like, as robots tend to be static (whether they move or not). Since a more natural human-robot interaction is wanted, the behavior of social robots needs to be improved on different levels. In this project, the main focus will be on the movements that make a robot appear more natural and lifelike: the idle movements. The objective of the research is to find out which idle movements make a robot appear more natural. The research builds on previous work on this subject, and more information will be gathered through interviews and surveys to get the opinion of possible future users (these users are described in the chapter 'Users'). Next to this, the NAO robot will be experimented with: it will perform different idle movements and future users will give their responses to these movements. From these responses we will retrieve data that can be used to find out which idle movements make a robot appear more life-like. Since humanoid robots will keep improving in the future, possible expectations about the most important idle movements will also be given. Altogether, we hope that this information will give greater insight into the use of idle movements on humanoid robots, to be used in future research and projects on this subject.
Users
When taking a look at the users of social robots, a lot of groups can be taken into account, because we don't know what the future holds and where social robots will be used. Since focusing on all possible users would be impossible for this project, a selection has been made of the users who will likely be the first to benefit from, or have to deal with, social robots. This selection is based on research collected from other papers[1], where these groups were highlighted and their opinions on robot behavior were investigated.
The main focus of research papers on social robots is for future applications in elderly care, and therefore elderly people are the main subject. Another group of users who will reap the benefits of social robots are physically or mentally disabled people. The hospitals, care centers, and nurses who care for these people nowadays will also be users of social robots.
For the people in need of care, it is essential that these social robots are as human-like as possible. This will help them better accept the presence of a social robot in their private environment. One key element of making a robot appear life-like is the presence of idle motions.
Companies that manufacture social robots will benefit from this research by implementing it in their products, and are therefore also users. They want to offer their customers the best social robot they can produce, which in turn has to be as human-like as possible; it is therefore key to include idle movements in their designs.
Approach, Milestones and Deliverables
Approach
To gain knowledge about idle movements, observations have to be done on humans. However, the idle movements will differ between categories (see experiment A), which requires research beforehand. This research is based on the state of the art: papers such as those by R. Cuijpers and/or E. Torta, and by Hyunsoo Song, Min Joong Kim, Sang-Hoon Jeong, Hyeon-Jeong Suk, and Dong-Soo Kwon, contain a lot of information on the idle movements that are considered important. It is therefore important to read the papers of the state of the art carefully; the state of the art is explained briefly in the chapter 'State of the art'. After this research, observations can be done. The best observation method depends on the category. Examples of such methods are observing people walking (or standing) around public spaces, such as the campus of the university or the train. For this research, however, videos or live streams will be watched on, e.g., YouTube. The recordings must capture people 'secretly', as people might behave differently on camera than off camera. The noticed idle movements can be listed in a table and tallied. The most tallied idle movement will be considered the best for that specific category. However, that does not mean it will work for a robot, as the responses of users might be negatively influenced; experiment B will clarify this.
The best idle movement per category will be used in experiment B. This experiment will be done using the NAO robot and a large number of participants (who will be the users, see 'Users'). The NAO robot will hold a conversation with the participant for x amount of time. This is done multiple times (depending on the number of idle movements used): once with the NAO robot not using any idle movements, and then using a different idle movement for every conversation. The idle movements used will be based on the research listed above and are presented in the same order for every participant. After each conversation, the participant fills in the Godspeed questionnaire[2]. The Godspeed questionnaire also includes a question about animacy, which should likewise be answered on a scale of 1-5. Using the data from this experiment, a diagram can be made of the participants' responses to the various idle movements, from which the best idle movement can be decided for each category. The result may also be that a combination of various idle movements is best.
Milestones and Deliverables
See Planning V1.
Experiment A
The first experiment will be a human-only experiment. To better understand the meaning of robot idle movement, humans have to be observed. People will be observed in multiple settings, as mentioned in the Approach, which nevertheless show different movements. Listed below is the type of video that has been used (e.g. YouTube videos or live streams). The videos differ because the idle movements most likely differ between categories. For instance, when people are having a conversation with a friend, a possible idle movement is shaking a leg or foot, or biting their nails; during a job interview, however, people tend to be a lot more serious and try to focus on acting 'normal'. The different categories for the idle movements, with multiple examples to implement on the robot, are listed below. These examples are based on eleven motions suggested by [ref]: eye stretching, mouth movement, coughing, touching the face, touching hair, looking around, neck movement, arm movement, hand movement, leaning on something and body stretching.
Category | Examples of idle movements |
---|---|
Casual conversation | nodding, briefly look away, scratch head, putting arms in the side, scratch shoulder, change head position. |
Waiting / in rest | lightly shake leg/feet up and down or to the side, put fingers against head and scratch, blink eyes (LED on/off), breath. |
Serious conversation | nodding, folding arms, hand gestures, nod, lift eyebrows slightly, touch the face, touch/hold arm. |
The listed eleven motions are used for the tallying of the idle movements, so that a full comparison can be made between the categories. The type of tape/video used thus depends on the category. For each category, it will be explained which type of video has been used and why; a sketch of the tallying bookkeeping is given below.
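To make the tallying concrete, here is a minimal Python sketch that keeps one counter of the eleven motions per (category, role) pair. The single increment and the helper are illustrative assumptions, not real data.

```python
# Minimal sketch of the experiment-A tally bookkeeping: one counter of the
# eleven motions per (category, role) pair. All counts here are placeholders.
from collections import Counter

MOTIONS = [
    "eye stretching", "mouth movement", "coughing", "touching the face",
    "touching hair", "looking around", "neck movement", "arm movement",
    "hand movement", "lean on something", "body stretching",
]

tallies = {
    ("casual conversation", "listening"): Counter(),
    ("casual conversation", "speaking"): Counter(),
    ("serious conversation", "listening"): Counter(),
    ("serious conversation", "speaking"): Counter(),
}

# While watching a video, every observed motion is tallied like this:
tallies[("casual conversation", "listening")]["neck movement"] += 1

def most_common(category, role, n=4):
    """The n most frequently tallied motions for one category/role."""
    return tallies[(category, role)].most_common(n)

print(most_common("casual conversation", "listening"))
```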
After this research, a follow-up study has to be done. This supplements the first experiment and is a setup for the second experiment, in which the actual socially intelligent robot will be tested. In the same categories as used before, people will be asked to come up with idle movements, or choose from a list of idle movements, that they think are most natural for robots to perform. As mentioned in the problem statement, not all human idle movements will be good idle movements for robots. After this survey, a selection of idle movements in every category will be chosen and will make it to the testing with the social robot in experiment B.
Casual conversation
A casual conversation is a conversation about non-serious topics, which can occur between two strangers but also between two relatives. The topics of those conversations are usually not really serious, resulting in laughter or just calm listening. As in a serious conversation, attention should be paid to the speaker. During conversations, the listeners will show such attention towards the speaker, generally by making eye contact. Furthermore, lifelike behavior is also important, as it results in a feeling of trust towards the listener. Therefore, it is important for a robot to have the ability to establish such trust via idle movements.
Eye contact for the NAO robot has already been researched, and it is established that it is important.[ref] The use of idle movements to gain trust is also considered important. Examples of such idle movements are nodding, briefly looking away, scratching the head, putting the hands on the hips (and changing their position), scratching the shoulder and changing the head position. These idle movements correspond with what has been found in the idle movement research.[ref] In casual conversations, nodding can confirm the presence of the listener and ensure that attention is being paid. Briefly looking away can mimic the movement of thought, as people tend to look away and think while talking. Scratching the head might mimic the same idea, as scratching your head is associated with thinking; moreover, it might also give the speaker a feeling of confidence due to the mimicked liveliness. Putting the hands on the hips is generally a gesture of confidence and of relaxation. Varying the number of hands on the hips and the position of the legs will give the speaker the idea that the robot is relaxing, which gives the speaker a feeling of confidence. Scratching the shoulder is a general mimic that makes the robot look alive and confident. The change of head position has the same purpose: it is a movement that generally occurs when thinking and when questioning what has been said. Both reasons give the speaker the idea that the robot is more lifelike. All of these examples of idle movements come back to the research already discussed, resulting in the stated eleven motions (see the first paragraph of 'Experiment A').
After knowing which idle movements to look for, it was important to investigate ways to tally the motions. Since there are many different ways to talk casually to someone, three different videos were taken (all from YouTube). One of them is a podcast between four women [3], another is a conversation between a Dutch politician and a Dutch presenter [4], and the last one is from an American talk show [5]. The three videos were watched in full and the number of occurrences of each motion was noted. The tallying was done for the listener and for the speaker, so that the differences can also be spotted. The data of both can be seen in the bar graphs in figures 1 and 2.
What can be seen in figure 1 is that neck movements are the most frequent motion, followed by mouth movements, arm movements and touching the face (respectively). This can be explained by the fact that people tend to nod a lot as a way to show acceptance of and agreement with what has been said; head shaking also counts as such a neck movement. The mouth movements consist of wetting the lips or just saying "yes", "okay" or "mhm", again with acceptance and agreement as the underlying meaning. Both motions are also a way of showing that attention is being paid. The arm movements can be explained by the fact that people scratch or simply feel bored; nonetheless, this movement shows life-like characteristics. Touching the face follows the same principle as the arm movement, as it can happen due to boredom, an itch or something else. Next to these four, no motions stood out.
In figure 2, arm movement, eye stretching, hand movement, neck movement and looking around stood out. The arm and hand movements can be explained by the fact that people tend to explain things via gestures, which are made using the arms and hands. Eye stretching is used to convey emotion (e.g. being in shock or surprised). Neck movement results from approval or disapproval of things that are being told. Looking around is a factor in thinking: if people look away, they are thinking about what they remember of a scenario or about the best way to tell something. Beyond these, nothing really stood out.
In this research, nothing further will be done with the speaking data. However, it is good to see that the motions differ per role (listening or speaking) and that the eleven motions can be applied to both.
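As an illustration of how such tallies could be turned into the bar graphs, here is a minimal matplotlib sketch; the counts are invented placeholders, not the real numbers from the videos.

```python
# Minimal matplotlib sketch of turning the listening tallies into a bar
# graph like figure 1; the counts below are invented placeholders.
import matplotlib.pyplot as plt

motions = ["eye stretching", "mouth movement", "coughing", "touching the face",
           "touching hair", "looking around", "neck movement", "arm movement",
           "hand movement", "lean on something", "body stretching"]
listening_counts = [2, 14, 1, 8, 3, 4, 21, 9, 5, 2, 1]  # placeholders

plt.figure(figsize=(10, 4))
plt.bar(motions, listening_counts)
plt.ylabel("Number of occurrences")
plt.title("Idle motions during listening")
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.savefig("listening_bar_graph.png")
```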
Waiting / in rest
Serious conversation
A lot of time was spent researching human idle movements in a serious or formal environment by studying the idle movements in conversations such as job interviews [6], TED talks [7] and other relatively formal interactions such as a debate [8] or a parent-teacher conference. Comparing these idle movements with other categories, such as the friendly conversation, a big difference can be seen. In a serious conversation, people try to come across as smart or serious, whereas in a friendly conversation people do not think about their idle movements as much. The best example of this behavior is the tendency of humans to touch or scratch their face, nose, ear or another part of the head. These idle movements are more common in friendly conversation, because in a formal interaction people try to avoid coming across as childish or impatient. The idle movements that are characteristic of this category are moves like hand gestures that follow the rhythm of speech. Another common idle movement when talking is raising the eyebrows when trying to come across as convincing. When listening, however, a common idle movement is folding the hands, which comes across as smart or expresses an understanding of the subject. The last type of very common idle movements, such as touching the arm or blinking, are also used in friendly conversations. In previous research (Japanese article), idle movements were studied for conversations of any type; now the same will be done for these categories by counting all the idle motions in the videos mentioned above, including the idle motions mentioned earlier, which stood out even without counting. The 11 idle motions that will be counted are eye stretching, mouth movement, coughing, touching the face, touching hair, looking around, neck movement (including nodding), arm movement, hand movement, leaning on something (including holding something) and body stretching. Two bar charts have been made counting all the idle motions in the videos mentioned at the top of this chapter. However, in two of the three videos only one person is relevant for counting the idle motions. In the video of the job interview, only the interviewer is considered, because this corresponds better with the other videos and because it makes more sense for the robot to be the employer rather than the applicant. The interviewer, however, is listening most of the time, whereas in the other two videos the main characters are speaking. So in a serious conversation, a distinction has to be made between speaking and listening. The two bar charts look as follows:

[[File:listening.jpg|left|thumb|550px|Figure 1: Bar graph for the idle motions during listening]][[File:speaking.jpg|center|thumb|550px|Figure 2: Bar graph for the idle motions during speaking]]
Experiment B
Experiment B is divided into different parts. These are:
- The NAO robot
- Determining (NAO-compatible) idle movements from literature and Experiment A
- The participants
- Setting up experiment B
- The script
- The questionnaire
- Processing the result
- Hypothesis
- Limitations
The NAO robot
The NAO robot is an autonomous, humanoid robot developed by Aldebaran Robotics. It is widely used in education and research. The NAO robot is also used in the paper by Torta[], from which the idea for this project originated. The robot is packed with sensors, for example ....[ref]. It can speak multiple languages, but not Dutch; therefore, the experiment will be done in English.
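As a concrete illustration, below is a minimal sketch of driving the NAO robot from the NAOqi Python SDK (Python 2.7). The IP address is a placeholder, and the built-in gesture stands in for a custom idle movement made in Choregraphe, which would be started by path in the same way.

```python
# Minimal sketch using the NAOqi Python SDK (Python 2.7). NAO_IP is a
# placeholder; the built-in gesture below stands in for a custom idle
# movement exported from Choregraphe.
from naoqi import ALProxy

NAO_IP = "192.168.1.10"  # placeholder address of the robot
PORT = 9559              # default NAOqi port

tts = ALProxy("ALTextToSpeech", NAO_IP, PORT)
animation = ALProxy("ALAnimationPlayer", NAO_IP, PORT)

tts.say("Hello, thank you for participating in this experiment.")
animation.run("animations/Stand/Gestures/Hey_1")  # run one idle-like gesture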
Determining (NAO-compatible) idle movements from literature and Experiment A
Unfortunately, the NAO robot is not able to perform every motion a human can, due to its limited number of links and joints. The list of human idle movements from experiment A and the literature is:
- Human idle movement 1
- Human idle movement 2
- Human idle movement 3
- ...
The human idle movements 1 through X were either found in the database[ref] or made in Choregraphe 2.1.4[ref], resulting in the following idle movements for the NAO robot:
- Idle movements of the NAO robot 1
- Idle movements of the NAO robot 2
- Idle movements of the NAO robot 3
- ...
The other human idle movements cannot be tested due to the limitations of the NAO robot.
The participants
As explained in [ref users], there are a lot of users that could be taken into account. Due to limitations, it was decided to take random participants for experiment B. 20 participants will take part in this experiment.
Setting up experiment B (open question or ask to read something? -> implementation of categories in A?)
A room will be booked close to the social robotics lab. The experiment will be done in week 6. The experiment will be performed as follows:
- The NAO robot will start a conversation
- The NAO robot will ask an open question, to which the participant needs to give an elaborate answer.
- During the answer, an idle movement is performed by the NAO robot.
- After the answer is given, the NAO asks another question
- During the answer, the same idle movement is performed by the NAO robot
Each conversation will have 3 questions, during which the same idle movement is performed. After the conversation is done, the participant fills in a questionnaire. The participant will undergo X (depending on the number of idle movements) conversations with the NAO robot.
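The following sketch outlines this session flow in Python. The questions, movement names and helper functions are hypothetical stand-ins, not the final script; on the real robot the helpers would wrap NAOqi calls (see the NAO sketch earlier).

```python
# Hypothetical sketch of the experiment-B session flow described above.
QUESTIONS = [
    "What did you do last weekend?",
    "What is your favorite food, and why?",
    "Where would you like to travel to?",
]

IDLE_MOVEMENTS = ["none", "nodding", "hand_gesture", "folding_hands"]

def ask_question(question):
    print("NAO asks: " + question)

def perform_idle_movement(movement):
    print("NAO performs idle movement while participant answers: " + movement)

def prompt_questionnaire(movement):
    print("Participant fills in the Godspeed questionnaire for: " + movement)

def run_conversation(movement):
    for question in QUESTIONS:       # 3 questions per conversation
        ask_question(question)
        perform_idle_movement(movement)
    prompt_questionnaire(movement)   # questionnaire after each conversation

# Same order of idle movements for every participant (see 'Approach').
for movement in IDLE_MOVEMENTS:
    run_conversation(movement)
```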
The script
The questionnaire
(Add questions to hide obvious parts?) To determine the user's perception of the NAO robot, the Godspeed questionnaire[2] will be used. The Godspeed questionnaire uses a 5-point scale and five categories: Anthropomorphism, Animacy, Likeability, Perceived Intelligence and Perceived Safety. As noted in [2] as well, there is overlap between Anthropomorphism and Animacy. The actual questionnaire can be found here: https://www.overleaf.com/7569336777bwmwdmhvmmbc
This questionnaire does have limitations; these are discussed under 'Limitations'.
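For illustration, here is a sketch of scoring one Godspeed category on the 5-point scale. The Anthropomorphism item pairs follow the questionnaire in [2]; the example answers are invented.

```python
# Sketch of scoring one Godspeed category on the 5-point scale. The
# Anthropomorphism item pairs follow [2]; the example answers are invented.
ANTHROPOMORPHISM_ITEMS = [
    ("Fake", "Natural"),
    ("Machinelike", "Humanlike"),
    ("Unconscious", "Conscious"),
    ("Artificial", "Lifelike"),
    ("Moving rigidly", "Moving elegantly"),
]

answers = [3, 2, 4, 3, 2]  # one participant's 1-5 rating per item pair

def category_score(ratings):
    """A category score is the mean of its item ratings."""
    return sum(ratings) / float(len(ratings))

for (left, right), rating in zip(ANTHROPOMORPHISM_ITEMS, answers):
    print("%s (1)...(5) %s -> %d" % (left, right, rating))
print("Anthropomorphism score: %.1f" % category_score(answers))
```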
Processing the data
By using the data from this experiment, a diagram can be made of the participants' responses to the various idle movements. From this, the best idle movement can be decided for each category. The result may also be that a combination of various idle movements is best.
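A minimal sketch of this aggregation, assuming a hypothetical record layout of (participant, idle movement, mean Godspeed score):

```python
# Minimal sketch of the aggregation described above; the records use an
# assumed layout (participant, idle movement, mean Godspeed score).
from collections import defaultdict

records = [
    ("p01", "none", 2.5), ("p01", "nodding", 3.4), ("p01", "hand_gesture", 3.9),
    ("p02", "none", 2.2), ("p02", "nodding", 3.1), ("p02", "hand_gesture", 3.6),
]

def rank_movements(records):
    """Average the scores per idle movement and rank from best to worst."""
    sums, counts = defaultdict(float), defaultdict(int)
    for _participant, movement, score in records:
        sums[movement] += score
        counts[movement] += 1
    means = {m: sums[m] / counts[m] for m in sums}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

print(rank_movements(records))  # best-rated idle movement first
```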
Hypothesis
Limitations
The participants
In the ideal case, more participants would be used, so that the different user groups would be better represented. This would mean that the data could show differences between the user groups. Unfortunately, with the resources, limitations and duration of the project, this is not possible.
The Godspeed questionnaire
The interpretation of the results of the Godspeed questionnaire does have limitations, as explained in [2]. These are:
- It is extremely difficult to determine the ground truth
- Many factors influence the measurements (e.g. cultural background, prior experience with robots, etc.)
Because of these limitations, the results of the measurements cannot be used as absolute values, but rather to see which option of idling is better.
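Since the scores are only meaningful relative to each other, one way to compare two idling options is a paired test over the same participants. Below is a sketch using SciPy's Wilcoxon signed-rank test; the animacy scores are invented placeholders, and the choice of test is an assumption, not prescribed by the questionnaire.

```python
# Paired comparison of two idle-movement conditions for the same
# participants; the score arrays are invented placeholders.
from scipy.stats import wilcoxon

no_idle = [2.1, 2.5, 3.0, 2.2, 2.8, 2.4, 3.1, 2.6]   # condition A: no idle motion
nodding = [3.0, 3.2, 3.5, 2.9, 3.4, 2.8, 3.6, 3.1]   # condition B: nodding

statistic, p_value = wilcoxon(no_idle, nodding)
print("Wilcoxon signed-rank: W = %.1f, p = %.3f" % (statistic, p_value))
```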
State of the Art
The research presented in this paper[9] addressed the introduction of a small humanoid robot into elderly people's homes, providing novel insights in the areas of robotic navigation, non-verbal cues in human-robot interaction, and the design and evaluation of socially assistive robots in smart environments. The results reported throughout the thesis lie in one or more of these areas, indicating the multidisciplinary nature of research in robotics and human-robot interaction. Topics like robotic navigation in the presence of a person, adding a particle filter to existing navigation algorithms, attracting a person's attention with a robot, and the design of a smart home environment are discussed in the paper. Our research will add to the work presented in this paper.
The main objective of the research project[10] was to study human perceptual aspects of hazardous robotics workstations. Two laboratory experiments were designed to investigate workers' perceptions of two industrial robots with different physical configurations and performance capabilities. The second experiment can be useful for our research: it investigated the minimum value of robot idle time (inactivity) perceived by industrial workers as system malfunction, and as an indication of the 'safe-to-approach' condition. It was found that idle times of 41 s and 28 s or less for the small and large robots, respectively, were perceived by workers to be the result of system malfunction. About 20% of the workers waited only 10 s or less before deciding that the robot had stopped because of system malfunction. The idle times were affected by the subjects' prior exposure to a simulated robot accident. Further interpretations of the results and suggestions for the operational limitations of robot systems are discussed. This research can be useful to further convince people that idle motions are not only there to make people feel comfortable near robots, but also serve as a safety measure.
In this study[11], a simple joint task was used to expose our participants to different levels of social verification. Low social verification was portrayed using idle motions and high social verification was portrayed using meaningful motions. Our results indicate that social responses increase with the level of social verification in line with the threshold model of social influence. This paper verifies why our research on idle motions is needed in the first place and states that in order to have a high social human-robot interaction, idle motions are a necessity.
A framework and methodology to realize robot-to-human behavioral expression is proposed in this paper[12]. Human-robot symbiosis requires enhancing nonverbal communication between humans and robots. The proposed methodology is based on the movement analysis theories of dance psychology researchers. Two experiments on robot-to-human behavioral expression are also presented to support the methodology: one produces familiarity with a robot-to-human tactile reaction, and the other expresses a robot's emotions through its dances. This methodology will be key to realizing robots that work closely and cooperatively with humans, and will thus also couple idle movements to emotions, for example when a robot wants to express that it is nervous.
This paper[13] presents a method for analyzing human-robot interaction by body movements. Future intelligent robots will communicate with humans and perform physical and communicative tasks to participate in daily life. A human-like body will provide an abundance of non-verbal information and enable us to smoothly communicate with the robot. To achieve this, we have developed a humanoid robot that autonomously interacts with humans by speaking and making gestures. It is used as a testbed for studying embodied communication. Our strategy is to analyze human-robot interaction in terms of body movements using a motion capturing system, which allows us to measure the body movements in detail. We have performed experiments to compare the body movements with subjective impressions of the robot. The results reveal the importance of well-coordinated behaviors and suggest a new analytical approach to human-robot interaction. This paper lays the foundation for our project, so it is key to read it carefully!
In this paper[14], we explored the effect of a robot's subconscious gestures made during idle moments on the anthropomorphic perceptions of five-year-old children. We developed and sorted a set of adaptor motions based on their intensity. We designed an experiment involving 20 children, in which they played a memory game with two robots. During moments of idleness, the first robot showed adaptor movements, while the second robot moved its head following basic face tracking. Results showed that the children perceived the robot displaying adaptor movements as more human and friendly. Moreover, these traits were found to be proportional to the intensity of the adaptor movements. For the range of intensities tested, it was also found that adaptor movements were not disruptive to the task. These findings corroborate the fact that adaptor movements improve the affective aspect of child-robot interactions and do not interfere with the children's performance in the task, making them suitable for CRI in educational contexts. This research focuses only on children, but the concept of the research is the same as ours.
Research has been done on the capability of a humanoid robot to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving it in various ways.[15] Inertial sensors inside a robot can capture how its body is moved when people perform such "full-body gestures". A conclusion of this research was that the accuracy of the recognition of full-body gestures was rather high (77%) and that a progressive reward strategy for responses was much more successful. The progressive strategy for responses increases perceived variety; persisting suggestions increase understanding, perceived variety, and enjoyment; and users find enjoyment in playing with a robot in various ways. This led to the conclusion that motions should be meaningful, responses rewarding, suggestions inspiring and instructions fulfilling, which also results in increased understanding, perceived variety, and enjoyment. This conclusion suggests that a robot's movements should be meaningful motions, which is also of the essence for idle movements.
Research has already been done on the subtle movements of the head while waiting for a reaction from the environment.[16] The main task of this research was to describe the subtle head movements a virtual person makes when waiting for a reaction from the environment. This technique can increase the level of realism in human-computer interaction. Even though it was tested on a virtual avatar, the head movements will be the same for a humanoid robot such as the NAO robot. These head movements might be important for the idle movements of the NAO robot; therefore it is important to use this paper for both its accomplishments and its complications.
Furthermore, research has been done on the emotions of robots.[17] To be exact, poses were used to create emotions: five poses were created for the emotions sadness, anger, neutral, pride and happiness. Not all poses were recognized equally well, and the velocity of a movement also influences the interpretation of the pose. This has a lot in common with idle movements. Idle movements have to positively influence the emotions of the user (giving the robot a more life-like appearance and, thus, the user a safe feeling). Keeping this research in mind, the velocity of a movement has to be set right, as it might influence the interpretation of the movement (aggressive or calm) and, if set wrong, can give the user an insecure feeling.
Research has also been done in the field of starting a conversation.[18] This research looked into a model for approaching people while they are walking. It concluded that the proposed model (which made use of efficient and polite approach behavior with three phases: finding an interaction target, interaction at public distance, and initiating conversation at social distance) was much more successful than a simplistic approach (proposed: 33 out of 59 successes; simplistic: 20 out of 57 successes). Moreover, during the field trial it was observed that people enjoyed receiving information from the robot, suggesting the usefulness of a proactive approach in initiating services from a robot. This positive feedback is also wanted for the use of idle movements, even though idle movements might not be as obvious as the conversation approach in this research. It is important to notice that in this research an efficient and polite approach is both more successful and more efficient.
Another paper presents a study that investigates human-robot interpersonal distances and the influence of posture, either sitting or standing, on those distances.[19] The study is based on a human approaching a robot and a robot approaching a human, in which the human/robot maintains either a sitting or standing posture while being approached. The results revealed that humans allow shorter interpersonal distances when a robot is sitting or in a more passive position, and leave more space when being approached while standing. The paper suggests that future work will investigate more non-verbal behaviors between robots and humans and their effect, combined with the robot's posture, on interpersonal distances. Idle movements might positively influence feelings of safety, which would result in shorter interpersonal distances. This research also suggests that idle movements might have different effects on people in different postures, which has to be taken into account when testing the ideas suggested in our research.
Time spent
Below is the link to an Overleaf file which shows the time everyone spent per week: https://www.overleaf.com/2323989585tvychvjqthwr
If an entry says "DONE:", it can in principle be removed, but this is best done at the end.
- DONE: Torta, E. (2014). Approaching independent living with robots. Eindhoven: Technische Universiteit Eindhoven [20]
- DONE: Waldemar Karwowski (2007). Worker selection of safe speed and idle condition in simulated monitoring of two industrial robots [21]
- DONE: Raymond H. Cuijpers, Marco A. M. H. Knops (2015). Motions of Robots Matter! The Social Effects of Idle and Meaningful Motions [22]
- DONE: Toru Nakata, Tomomasa Sato and Taketoshi Mori (1998). Expression of Emotion and Intention by Robot Body Movement [23]
- DONE: Takayuki Kanda, Hiroshi Ishiguro, Michita Imai, and Tetsuo Ono (2003). Body Movement Analysis of Human-Robot Interaction [24]
- DONE: Thibault Asselborn, Wafa Johal and Pierre Dillenbourg (2017). Keep on moving! Exploring anthropomorphic effects of motion during idle moments [25]
- DONE: Cooney, M., Kanda, T., Alissandrakis, A., & Ishiguro, H. (2014). Designing enjoyable motion-based play interactions with a small humanoid robot. International Journal of Social Robotics, 6(2), 173-193. [26]
- DONE: Kocoń, M., & Emirsajłow, Z. (2012). Modelling the idle movements of human head in three-dimensional virtual environments. Pomiary Automatyka Kontrola, 58(12), 1121-1123. [27]
- DONE: Beck, A., Hiolle, A., & Canamero, L. (2013). Using perlin noise to generate emotional expressions in a robot. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 35, No. 35). [28]
- DONE: Satake, S., Kanda, T., Glas, D. F., Imai, M., Ishiguro, H., & Hagita, N. (2009, March). How to approach humans? Strategies for social robots to initiate interaction. In Proceedings of the 4th ACM/IEEE international conference on Human robot interaction (pp. 109-116). [29]
- DONE: Obaid, M., Sandoval, E. B., Złotowski, J., Moltchanova, E., Basedow, C. A., & Bartneck, C. (2016, August). Stop! That is close enough. How body postures influence human-robot proximity. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 354-361). IEEE. [30]
- Rosenthal-von der Pütten, A. M., Krämer, N. C., & Herrmann, J. (2018). The effects of humanlike and robot-specific affective nonverbal behavior on perception, emotion, and behavior. International Journal of Social Robotics, 10(5), 569-582.[31]
- [32]
- Srinivasan, V., Murphy, R. R., & Bethel, C. L. (2015). A reference architecture for social head gaze generation in social robotics. International Journal of Social Robotics, 7(5), 601-616.[33]
- Jung, J., Kanda, T., & Kim, M. S. (2013). Guidelines for contextual motion design of a humanoid robot. International Journal of Social Robotics, 5(2), 153-169.[34]
- Straub, I. (2016). ‘It looks like a human!’The interrelation of social presence, interaction and agency ascription: a case study about the effects of an android robot on social agency ascription. AI & society, 31(4), 553-571.[35]
- Song, H., Min Joong, K., Jeong, S.-H., Hyen-Jeong, S., Dong-Soo K.: (2009). Design of Idle motions for service robot via video ethnography. In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), pp. 195–99 [36]
- Streck, A., Wolbers, T.: (2018). Using Discrete Time Markov Chains for Control of Idle Character Animation 17(8). [37]
- Kofinas, N., Orfanoudakis, E., Lagoudakis, M.,: (2014). Complete Analytical Forward and Inverse Kinematics for the NAO Humanoid Robot, 31(1), pp. 251-264 [38]
- Zhang, M., Chen, J., Wei, X., Zhang, D.: (2018). Work chain‐based inverse kinematics of robot to imitate human motion with Kinect, 7(8). [39]
- Zhu, M., Sun, H., Lan, R., Li, B.: (2011). Human motion retrieval using topic model, 4(10), pp. 469-476. [40]
- Aggarwal, J., Cai, Q.: (1999). Human Motion Analysis: A Review, 1(3), pp. 428-440. [41]
- Broadbent, E., Stafford, R. and MacDonald, B. (2009). Acceptance of healthcare robots for the older population: review and future directions[42]
- Brooks, A. G. and Arkin, R. C. (2006). Behavioral overlays for non-verbal communication expression on a humanoid robot.[43]
- Dautenhahn, K., Walters, M., Woods, S., Koay, K. L., Nehaniv, C. L., Sisbot, A., Alami, R. and Siméon, T. (2006). How may I serve you?: a robot companion approaching a seated person in a helping context.[44]
References
- ↑ https://pure.tue.nl/ws/portalfiles/portal/3924729/766648.pdf
- ↑ Bartneck, C., Kulić, D., Croft, E. and Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1(1), 71–81.
- ↑ Magical and Millennial Episode on Friends Like Us Podcast, Marina Franklin, https://www.youtube.com/watch?v=Vw9M4bVZCSk
- ↑ Een gesprek tussen Johnny en Jesse, GroenLinks, https://www.youtube.com/watch?v=v6dk6eI1qFQ
- ↑ Joaquin Phoenix and Jimmy Fallon Trade Places, The Tonight Show Starring Jimmy Fallon, https://www.youtube.com/watch?v=_plgHxLyCt4
- ↑ Job Interview Good Example copy (2016) https://www.youtube.com/watch?v=OVAMb6Kui6A
- ↑ How motivation can fix public systems | Abhishek Gopalka (2020) https://www.youtube.com/watch?v=IGJt7QmtUOk
- ↑ Climate Change Debate | Kriti Joshi | Opposition https://www.youtube.com/watch?v=Lq0iua0r0KQ
- ↑ https://pure.tue.nl/ws/portalfiles/portal/3924729/766648.pdf
- ↑ https://www.tandfonline.com/doi/abs/10.1080/00140139108967335
- ↑ https://www.researchgate.net/publication/281841000_Motions_of_Robots_Matter_The_Social_Effects_of_Idle_and_Meaningful_Motions
- ↑ https://pdfs.semanticscholar.org/9921/b7f11e200ecac35e4f59540b8cf678059fcc.pdf
- ↑ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.3393&rep=rep1&type=pdf
- ↑ https://www.researchgate.net/publication/321813854_Keep_on_moving_Exploring_anthropomorphic_effects_of_motion_during_idle_moments
- ↑ Cooney, M., Kanda, T., Alissandrakis, A., & Ishiguro, H. (2014). Designing enjoyable motion-based play interactions with a small humanoid robot. International Journal of Social Robotics, 6(2), 173-193. https://link.springer.com/article/10.1007/s12369-013-0212-0
- ↑ Kocoń, M., & Emirsajłow, Z. (2012). Modeling the idle movements of the human head in three-dimensional virtual environments. Pomiary Automatyka Kontrola, 58(12), 1121-1123. https://www.infona.pl/resource/bwmeta1.element.baztech-article-BSW4-0125-0022
- ↑ Beck, A., Hiolle, A., & Canamero, L. (2013). Using Perlin noise to generate emotional expressions in a robot. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 35, No. 35). https://escholarship.org/content/qt4qv84958/qt4qv84958.pdf
- ↑ Satake, S., Kanda, T., Glas, D. F., Imai, M., Ishiguro, H., & Hagita, N. (2009, March). How to approach humans? Strategies for social robots to initiate interaction. In Proceedings of the 4th ACM/IEEE international conference on Human-robot interaction (pp. 109-116). https://dl.acm.org/doi/pdf/10.1145/1514095.1514117
- ↑ Obaid, M., Sandoval, E. B., Złotowski, J., Moltchanova, E., Basedow, C. A., & Bartneck, C. (2016, August). Stop! That is close enough. How body postures influence human-robot proximity. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 354-361). IEEE. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7745155
- ↑ https://pure.tue.nl/ws/portalfiles/portal/3924729/766648.pdf
- ↑ https://www.tandfonline.com/doi/abs/10.1080/00140139108967335
- ↑ https://www.researchgate.net/publication/281841000_Motions_of_Robots_Matter_The_Social_Effects_of_Idle_and_Meaningful_Motions
- ↑ https://pdfs.semanticscholar.org/9921/b7f11e200ecac35e4f59540b8cf678059fcc.pdf
- ↑ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.3393&rep=rep1&type=pdf
- ↑ https://www.researchgate.net/publication/321813854_Keep_on_moving_Exploring_anthropomorphic_effects_of_motion_during_idle_moments
- ↑ https://link.springer.com/article/10.1007/s12369-013-0212-0
- ↑ https://www.infona.pl/resource/bwmeta1.element.baztech-article-BSW4-0125-0022
- ↑ https://escholarship.org/content/qt4qv84958/qt4qv84958.pdf
- ↑ https://dl.acm.org/doi/pdf/10.1145/1514095.1514117
- ↑ https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7745155
- ↑ https://link.springer.com/article/10.1007/s12369-018-0466-7
- ↑ https://link.springer.com/content/pdf/10.1007/s10846-013-0015-4.pdf
- ↑ https://link.springer.com/article/10.1007/s12369-015-0315-x
- ↑ https://link.springer.com/article/10.1007/s12369-012-0175-6
- ↑ https://link.springer.com/article/10.1007/s00146-015-0632-5
- ↑ https://ieeexplore.ieee.org/abstract/document/5326062
- ↑ https://ieeexplore.ieee.org/document/8490450
- ↑ https://link.springer.com/article/10.1007/s10846-013-0015-4
- ↑ https://onlinelibrary.wiley.com/doi/epdf/10.4218/etrij.2018-0057
- ↑ https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cav.432
- ↑ https://www.sciencedirect.com/science/article/pii/S1077314298907445
- ↑ https://s3.amazonaws.com/academia.edu.documents/45751247/s12369-009-0030-620160518-24489-1rd1itt.pdf?response-content-disposition=inline%3B%20filename%3DAcceptance_of_Healthcare_Robots_for_the.pdf&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWOWYYGZ2Y53UL3A%2F20200206%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200206T185802Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=e66808bc708e20134353a70cc3868c63a321523e19c7b5b6d85b0b5bda76d4df
- ↑ https://smartech.gatech.edu/bitstream/handle/1853/20540/BrooksArkinAURO2006.pdf?sequence=1&isAllowed=y
- ↑ https://hal.laas.fr/hal-01979221/document