PRE2023 3 Group4

== Group members ==
This study was approved by the ERB on Sunday 03/03/2024 (number ERB2024IEIS22). For this approval, an [https://tuenl-my.sharepoint.com/:b:/g/personal/m_j_d_ruiter_student_tue_nl/EQl398CxAy1DrzuhkzjdQnEB-cBD8GvB4kG5t951HxsSDw?e=e5JUtP ERB form] was filled in and a [https://tuenl-my.sharepoint.com/:b:/g/personal/m_j_d_ruiter_student_tue_nl/Ed91CL5xMjZPrBsvwDe_1AsBoJA4dUB6QhpUZqjMPh4hsw?e=W9SHsT research proposal] was made.
{| class="wikitable"
!Name
|-
|BPT
|Programming responsible
|}
== Introduction ==
The use of social robots, designed specifically for interacting with humans and other robots, has been rising for the past several years. These robots differ from the robots we have grown used to over the past decades, which often only perform specific, dedicated tasks. Social robots are now mostly used in service settings, as companions and support tools <ref name=":0">Biba, J. (2023, March 10). ''What is a social robot?'' Retrieved from Built In: <nowiki>https://www.nature.com/articles/s41598-020-66982-y#citeas</nowiki> </ref><ref>Borghi, M., & Mariani, M. (2022, September). ''The role of emotions in the consumer meaning-making of interactions with social robots''. Retrieved from Science Direct: <nowiki>https://www.sciencedirect.com/science/article/pii/S0040162522003687</nowiki> </ref>. In many promising sectors of application, such as healthcare and education, social robots must be able to communicate with people in ways that are natural and easily understood. To make this human-robot interaction (HRI) feel natural and enjoyable for humans, robots must make use of human social norms<ref name=":2">Kirby, R., Forlizzi, J., & Simmons, R. (2010). Affective social robots. ''Robotics and Autonomous Systems'', ''58''(3), 322–332. <nowiki>https://doi.org/10.1016/J.ROBOT.2009.09.015</nowiki> </ref>. This requirement originates from humans anthropomorphizing robots, meaning that we attribute human characteristics to robots and engage and form relationships with them as if they were human<ref name=":3">Breazeal, C. (2004). Designing Sociable Robots. ''Designing Sociable Robots''. <nowiki>https://doi.org/10.7551/MITPRESS/2376.001.0001</nowiki>
</ref><ref name=":2" />. We use this to make the robot’s behavior familiar, understandable and predictable to us, and infer the robot’s mental state. However, for this to be a correct as well as intuitive inference, the robot’s behavior must be aligned with our social expectations and interpretations for mental states<ref name=":3" />.
One important integrated element of human communication is the use of nonverbal expressions of emotion, such as facial expressions, gaze, body posture, gestures, and actions<ref name=":2" /><ref name=":3" />. In human-to-human interaction as well as human-robot interaction, these nonverbal cues support and add meaning to verbal communication, and expressions of emotion specifically help build deeper and more meaningful relations, facilitate engagement and co-create experiences<ref name=":1">Chuah, S. H. W., & Yu, J. (2021). The future of service: The power of emotion in human-robot interaction. ''Journal of Retailing and Consumer Services'', ''61'', 102551. <nowiki>https://doi.org/10.1016/J.JRETCONSER.2021.102551</nowiki> </ref>. Besides adding conversational content, it has also been shown that humans can unconsciously mimic the emotional expression of a conversational partner, known as emotional contagion, which helps us empathize with others by simulating their feelings<ref name=":2" /><ref name=":1" />. Due to our tendency to anthropomorphize robots, it is possible that emotional contagion also occurs during HRI and can help users experience positive affect while interacting with a social robot<ref name=":1" />. Artificial emotions can be used in social robots to facilitate believable HRI, but also to provide feedback to the user about the robot’s internal state, goals and intentions<ref name=":4">Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. ''Robotics and Autonomous Systems'', ''42''(3–4), 143–166. <nowiki>https://doi.org/10.1016/S0921-8890(02)00372-X</nowiki> </ref>. Moreover, they can act as a control system through which we learn what drives the robot’s behavior and how it is affected by, and adapts to, different factors over time<ref name=":4" />. Finally, the ability of social robots to display emotions is crucial in forming long-term social relationships, which people will naturally seek due to the anthropomorphic nature of social robots<ref name=":2" />.
Altogether, the important role of emotions in human-robot interaction requires us to gather information about how robots can and should display emotions so that humans naturally recognize the intended emotion. A robot can display emotions by combining body posture, motion velocity, facial expressions and vocal cues (e.g. prosody, pitch, loudness), with the available options depending strongly on the robot’s morphology and degree of anthropomorphism<ref name=":6">Marcos-Pablos, S., & García‐Peñalvo, F. J. (2021). Emotional Intelligence in Robotics: A Scoping review. In Advances in intelligent systems and computing (pp. 66–75). <nowiki>https://doi.org/10.1007/978-3-030-87687-6_7</nowiki></ref><ref name=":13">Miwa, H., Itoh, K., Matsumoto, M., Zecca, M., Takanobu, H., Roccella, S., Carrozza, M. C., Dario, P., & Takanishi, A. (2004). Effective emotional expressions with emotion expression humanoid robot WE-4RII. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), 3, 2203–2208. <nowiki>https://doi.org/10.1109/IROS.2004.1389736</nowiki></ref><ref name=":7">Crumpton, J., & Bethel, C. L. (2015). A Survey of Using Vocal Prosody to Convey Emotion in Robot Speech. International Journal Of Social Robotics, 8(2), 271–285. <nowiki>https://doi.org/10.1007/s12369-015-0329-4</nowiki></ref>. Social robots are often more humanoid, increasing anthropomorphism; a match is therefore required between the robot's behavior and appearance to avoid falling into the uncanny valley, which elicits a feeling of uneasiness or disturbance<ref name=":6" /><ref>Wang, S., Lilienfeld, S. O., & Rochat, P. (2015). The Uncanny Valley: Existence and Explanations. Review Of General Psychology, 19(4), 393–407. <nowiki>https://doi.org/10.1037/gpr0000056</nowiki></ref>. Some research has already tested the capability of certain social robots, including Pepper, Nao and Misty, to display emotions, resulting in robot-specific guidelines on how to program the display of certain emotions<ref name=":15">Bishop, L., Van Maris, A., Dogramadzi, S., & Zook, N. (2019). Social robots: The influence of human and robot characteristics on acceptance. Paladyn, 10(1), 346–358. <nowiki>https://doi.org/10.1515/pjbr-2019-0028</nowiki> </ref><ref>Johnson, D. O., Cuijpers, R. H., & van der Pol, D. (2013). Imitating Human Emotions with Artificial Facial Expressions. International Journal of Social Robotics, 5(4), 503–513. <nowiki>https://doi.org/10.1007/S12369-013-0211-1/TABLES/8</nowiki> </ref><ref>Zhao, F. O., White, N. T., Cagiltay, B., Niedenthal, P. M., Michaelis, J., & Mutlu, B. (2023, January). Designing Emotional Expressions for a Reading Companion Robot. <nowiki>https://doi.org/10.31234/osf.io/7p2ns</nowiki></ref>.
Based on these established guidelines for displaying emotions, we can look further into how humans are affected by a robot’s emotional cues while interacting with it. We research this in a context where we would also expect a human to display emotions, namely while telling an emotional story. Our research takes inspiration from the studies of Van Otterdijk et al. (2021)<ref name=":5">Van Otterdijk, M., Barakova, E. I., Torresen, J., & Neggers, M. E. M. (2021). Preferences of Seniors for Robots Delivering a Message With Congruent Approaching Behavior. <nowiki>https://doi.org/10.1109/ARSO51874.2021.9542833</nowiki></ref> and Bishop et al. (2019)<ref name=":15" />, in which the robot Pepper was used to deliver either a positive or negative message accompanied by congruent or incongruent emotional behavior. We extend these studies by choosing a different combination of application context and research method: we study interaction with students in an educational setting rather than healthcare, and use interviews rather than surveys to gain a deep understanding. We opted for this qualitative approach because we had to work with a small participant pool of ten people due to feasibility constraints. This allowed us to dig deeper into the details of human-robot interaction by capturing the intricate nuances of participants’ experiences and perspectives, providing a deeper understanding of our topic. Moreover, students are an important target group for robots, because they represent the future workforce and future innovations. Understanding their needs can help developers design robots that are engaging, user-friendly and educational<ref name=":8">Manzi, F., Sorgente, A., Massaro, D., Villani, D., Di Lernia, D., Malighetti, C., Gaggioli, A., Rossignoli, D., Sandini, G., Sciutti, A., Rea, F., Maggioni, M. A., Marchetti, A., & Riva, G. (2021). Emerging Adults’ Expectations about the Next Generation of Robots: Exploring Robotic Needs through a Latent Profile Analysis. Cyberpsychology, Behavior, and Social Networking, 24(5), 315–323. <nowiki>https://doi.org/10.1089/CYBER.2020.0161</nowiki> </ref>.
More specifically, the research question studied in this paper is: “To what extent does a match between the displayed emotion of a social robot and the content of the robot’s spoken message influence the acceptance of this robot?”. We expect that participants will prefer interacting with the robot when it displays the emotion that fits the content of its message, and that they will be open to more future interactions like this with the robot. On the other hand, we expect that a mismatch between the emotion displayed by the robot and the story it is telling will make participants feel less comfortable and therefore less accepting of the robot. Moreover, we expect that the influence of congruent emotion display will be more prominent with a negative than with a positive message. The main focus of this research is thus on how accepting the students are of the robot after interacting with it, but also on gaining insight into potential underlying reasons, such as the amount of trust the students have in the robot and how comfortable they feel when interacting with it. The results could provide insights into the importance of congruent emotion display and into whether robots could be used on university campuses as assistant robots.
== Method ==
=== Design ===
This research was an exploratory study. The experiment had a within-subjects design, where all participants were exposed to all six conditions of the experiment: a 2 (story: positive/negative) × 3 (displayed robot emotion: happy/neutral/sad) design. These six conditions differ in the match between the content of the story (positive or negative) and the emotion (happy, neutral or sad) displayed by the robot. An overview of the conditions can be seen in Table 1.
{| class="wikitable"
|+Table 1: The six conditions in the experiment
|Story / displayed emotion
|Happy
|Neutral
|Sad
|-
|Positive
|Congruent
|Emotionless
|Incongruent
|-
|Negative
|Incongruent
|Emotionless
|Congruent
|}
The independent variables in this experiment were the displayed emotion and the kind of emotional story, whose combination determines congruence. The dependent variable was the acceptance of the robot. This was measured by qualitatively analyzing the interviews held with the participants during the experiment.
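For illustration, the congruence labels of Table 1 follow mechanically from the story valence and the displayed emotion. The short sketch below (not part of the study materials) enumerates the six conditions and labels each one.
<syntaxhighlight lang="python">
# Illustrative sketch: enumerating the 2 (story) x 3 (emotion) within-subjects
# conditions of Table 1 and deriving the congruence label for each combination.
from itertools import product

congruent = {"positive": "happy", "negative": "sad"}  # story -> matching emotion
emotions = ["happy", "neutral", "sad"]

for story, emotion in product(congruent, emotions):
    if emotion == "neutral":
        label = "emotionless"
    elif emotion == congruent[story]:
        label = "congruent"
    else:
        label = "incongruent"
    print(f"{story} story + {emotion} robot -> {label}")
</syntaxhighlight>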
=== Participants ===
The study investigated the viewpoint of students, and therefore the participants were recruited at the TU/e. We chose to target this specific group because of their generally higher openness to social robots and the increased likelihood that this group will deal a lot with social robots in the near future<ref name=":8" />.
Ten participants took part in this experiment, and all participants were allocated to all six conditions of the study. Five men and five women completed the study. Their age ranged between 19 and 26 years, with an average of 21.4 years (±1.96). All were students at Eindhoven University of Technology and volunteered, meaning they were not compensated financially for participating in this study. The participants were recruited from the researchers’ own networks, but multiple different study programs were included (see Table 2). The general attitude towards robots of all participants was measured, and all of them had relatively positive attitudes towards robots. Most of the participants saw robots as a useful tool that would help reduce the workload of humans; however, two participants commented that current robots would be unable to fully replace humans in their jobs. When asked whether they had been in contact with a robot before, six of the participants had seen or worked with a robot before. One participant was even familiar with the Pepper robot that was used during the experiment. Three participants responded that they had not been in contact with a robot before, but that they had experience with AI or Large Language Models (LLMs). One participant had never been in contact with a robot before and did not comment on whether they had used AI or LLMs. All in all, the participants were familiar with robots and the technology surrounding them, which is expected, as robots are a large part of the curriculum of the university of technology they are enrolled at.
{| class="wikitable"
|+Table 2: The distribution of current study programs of the participants
|Study
|Number of  participants
|-
|Psychology and Technology
|3
|-
|Electrical engineering
|2
|-
|Mechanical engineering
|1
|-
|Industrial Design
|1
|-
|Biomedical technology
|1
|-
|Applied physics
|1
|-
|Applied mathematics
|1
|}
 
=== Materials ===
 
==== Robot Pepper ====
For this experiment, the robot Pepper, manufactured by SoftBank Robotics, was used. [[File:Pepper emotions.png|thumb|Figure 1: Pepper's behavior for happy, neutral and sad condition (from left to right)]]This robot was chosen because Pepper is well known, has been the subject of multiple studies, and is already being applied in different settings, such as hospitals and customer service. Based on young adults' preferences for robot design, Pepper would also be most useful in student settings, given its human-like shape and ability to engage emotionally with people<ref>Björling, E. A., Thomas, K. A., Rose, E., & Çakmak, M. (2020). Exploring teens as robot operators, users and witnesses in the wild. ''Frontiers in Robotics and AI'', ''7''. <nowiki>https://doi.org/10.3389/frobt.2020.00005</nowiki>
</ref>. The experiment itself was conducted in one of the robotics labs on the TU/e campus, where the robot Pepper is readily available.
 
When looking for a suitable robot for our project, the robots that were readily available at the TU/e and suggested by the supervisors of this research were considered, including Misty, SociBot and Pepper. With those robots in mind, the possibilities for conveying the desired emotions were compared. According to Cui et al. (2020)<ref name=":9">Cui, M., Fang, J., & Zhao, Y. (2020). Emotion recognition of human body’s posture in open environment. 2020 Chinese Control And Decision Conference (CCDC). <nowiki>https://doi.org/10.1109/ccdc49329.2020.9164551</nowiki></ref>, posture is considered important for conveying emotions, and out of the three options, Pepper was the most suitable for that task. Next to that, Pepper was used in the aforementioned study by Van Otterdijk et al. (2021)<ref name=":5" /> and Bishop et al. (2019)<ref name=":15" />, which added to the convenience of using Pepper.
 
Pepper was programmed in Choregraphe to display happiness and sadness through its voice, body posture and gestures, and LED eye colour. The behavior that Pepper displayed is shown in Table 3 and Figure 1, based on the research of Bishop et al. (2019)<ref name=":15" /> and Van Otterdijk et al. (2021)<ref name=":5" />. Facial expressions could not be used, since the morphology of Pepper does not allow for them.
{| class="wikitable"
|+Table 3: Robot behavior during each of the emotion displaying conditions
|
|Happy
|Neutral
|Sad
|-
|Pitch  of voice
|High pitch, speed, volume and emphasis
|Average of happy and sad condition
|Low pitch, speed, volume and less emphasis
|-
|Body posture  and gestures
|Raised chin, extreme movements, upwards  arms, strong nodding
|Average of happy and sad condition
|Lowered chin, small movements, hanging arms,  not looking around
|-
|LED  eye colour
|Yellow
|White
|Light blue
|}
The voices for each story were created with the free AI voice-over generator of Voicebooking.com. This program was chosen instead of the built-in Pepper voice, because the built-in voice was quite inaudible for some parts of the stories the robot was supposed to tell. In Voicebooking, the preferred voice was picked, and moods were created by adjusting the speed, pitch, and emphasis of the storytelling. The highest values of each of these were assigned to the happy mood and the lowest to the sad one. For the neutral robot, the values were averaged between these two, as mentioned in Table 3. After uploading these audio files to Choregraphe, the volume levels were set to 80%, 90%, and 100% for the sad, neutral, and happy robot, respectively<ref>M. Rabiei and A. Gasparetto, "A system for feature classification of emotions based on speech analysis; applications to human-robot interaction," 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 2014, pp. 795-800, doi: 10.1109/ICRoM.2014.6991001.</ref>.
Pepper's posture and gestures were created with dialog boxes readily available in Choregraphe. Dialog boxes are graphical interfaces that contain pre-installed movements and behaviors for the robot. For each of the robot moods, a selection was made of suitable pre-installed movements. The happy robot had the most expressive movements, making great use of its arms and nodding strongly<ref name=":9" />. The neutral robot gently swayed its arms and made gestures with them every now and then, though these were not as strong as those of the happy one. Next to that, the neutral robot was also programmed to look around, making eye contact with the participants<ref name=":9" />. Finally, for the sad robot, the objective was to minimize movement and give Pepper a sad posture. This was achieved by using the same built-in movement as for the gentle swaying of the neutral robot. Since this behavior included the eye-contact movement of the head, the head movement had to be disabled in the settings of the dialog box. Besides shutting off the eye-contact behavior, Pepper was programmed to look down at all times within these same settings. The sad posture was finalized by adjusting the hinge at the hip and the shoulders<ref>I. Cohen, R. Looije and M. A. Neerincx, "Child's recognition of emotions in robot's face and body," 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 2011, pp. 123-124, doi: 10.1145/1957656.1957692.</ref>.
The robot's eye colors were changed using the eye LEDs to represent the three different moods: yellow was used for happy, white for neutral, and light blue for the sad robot<ref name=":5" />.
A [https://tuenl-my.sharepoint.com/:v:/g/personal/m_j_d_ruiter_student_tue_nl/ET4FAJ9o_i5CiXydolN6UZ8Bi5nzeeJrZOB812m9glojeg?e=W5tdyd&nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJTdHJlYW1XZWJBcHAiLCJyZWZlcnJhbFZpZXciOiJTaGFyZURpYWxvZy1MaW5rIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXcifX0%3D video] was made to visually show the three different emotional behaviors of Pepper.
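Although all behaviors in this study were built graphically with Choregraphe dialog boxes, the same condition parameters could in principle be scripted. The sketch below is a purely hypothetical illustration of how the Table 3 parameters might be set with the NAOqi Python SDK; it is not the project code, and the IP address, audio file path, joint angle and RGB values are assumptions.
<syntaxhighlight lang="python">
# Hypothetical sketch (NAOqi Python SDK): scripting the per-condition
# parameters of Table 3 instead of using Choregraphe dialog boxes.
from naoqi import ALProxy

PEPPER_IP, PORT = "192.168.1.10", 9559  # hypothetical robot address

CONDITIONS = {
    # eye colour (0x00RRGGBB), playback volume, head pitch in radians
    "happy":   {"eye_rgb": 0x00FFFF00, "volume": 1.0, "head_pitch": -0.2},  # yellow, raised chin
    "neutral": {"eye_rgb": 0x00FFFFFF, "volume": 0.9, "head_pitch": 0.0},   # white
    "sad":     {"eye_rgb": 0x00ADD8E6, "volume": 0.8, "head_pitch": 0.3},   # light blue, lowered chin
}

def run_condition(emotion, audio_file):
    cfg = CONDITIONS[emotion]
    leds = ALProxy("ALLeds", PEPPER_IP, PORT)
    motion = ALProxy("ALMotion", PEPPER_IP, PORT)
    player = ALProxy("ALAudioPlayer", PEPPER_IP, PORT)

    leds.fadeRGB("FaceLeds", cfg["eye_rgb"], 1.0)          # set eye colour over 1 s
    motion.setAngles("HeadPitch", cfg["head_pitch"], 0.1)  # chin raised or lowered
    player.playFile(audio_file, cfg["volume"], 0.0)        # pre-generated voice-over

run_condition("sad", "/home/nao/audio/negative_story_sad.wav")
</syntaxhighlight>
Keeping the per-condition parameters in a single table-like structure mirrors Table 3 and makes it straightforward to switch conditions between trials.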
==== Laptops ====
In the full study, four laptops were used. At the start of the experiment, three laptops were handed to the participants for filling in the demographic survey in LimeSurvey. During the experiment, one laptop was used to direct Pepper to tell the different stories with the different emotions; this was done from the control room. Two other researchers were present in the room with the participants and Pepper to assist if necessary and take notes on their laptops, so two other laptops were used there. Moreover, during the interviews, researchers could choose to keep their laptop with them for taking notes or recording; otherwise, they used a mobile phone to record.
==== LimeSurvey for demographics ====
The demographics survey that participants were asked to fill in consisted of the following questions:
# What is your age in years?
# What is your gender?
#* Male
#* Female
#* Non-binary
#* Other
#* Do not want to say
# What study program are you enrolled in currently?
# In general, what do you think about robots?
# Did you have contact with a robot before? Where and when?
These last two questions were based on an article by Horstmann & Krämer<ref>Kramer, S. & Horstmann, W. (2019). Perceptions and beliefs of academic librarians in Germany and the USA: a comparative study. ''LIBER Quarterly'', ''29''(1), 1-18. <nowiki>https://doi.org/10.18352/lq.10314</nowiki> </ref>, which focused on the expectations that people have of robots and their expectations when confronted with social robot concepts.
==== Stories told by the robot ====
The positive and negative stories that the robot told are fictional stories about polar bears, inspired by the study of Bishop et al. (2019)<ref name=":15" />. The content of the stories is based on non-fictional internet sources and rewritten to best fit our purpose. The stories were kept fictional and about animals rather than humans, because of the lower risk of emotional harm to participants from feelings tied to personal circumstances.
The positive story is an adaptation of Cole (2021)<ref>Cole, J. (2021, April). Orphaned Polar Bear That Loved to Hug Arctic Workers Gets New Life. ''Good News Network''. <nowiki>https://www.goodnewsnetwork.org/orphaned-polar-bear-rescued-russian-arctic/</nowiki> </ref> and shown below:<blockquote>''“When Arctic gold miners were working on their base, they were greeted by a surprising guest, a young lost polar bear cub. It did not take long for her to melt the hearts of the miners. As the orphaned cub grew to trust the men, the furry guest soon felt like a friend to the workers on their remote working grounds. Even more surprising, the lovely cub loved to hand out bear hugs. Over the many months that followed, the miners and the cub would create a true friendship. The new furry friend was even named Archie after one of the researcher’s children. When the contract of the gold miners came to an end, the polar bear cub would not leave their side, so the miners decided to arrange a deal with a sanctuary in Moscow, where the polar bear cub would be able to live a happy life in a place where its new-found friends would come to visit every day.”''</blockquote>The negative story is an adaptation of Alexander (n.d.)<ref>Alexander, B. (n.d.). Polar bear cub's agonizing struggle in Netflix's 'Our Planet II' is telling 'heartbreaker'. ''USA Today Entertainment''. <nowiki>https://eu.usatoday.com/story/entertainment/tv/2023/06/15/netflix-our-planet-2-polar-bear/70296362007/</nowiki> </ref> and shown below: <blockquote>''"While shooting a nature documentary on the Arctic Ocean Island chain of Svalbard, researchers encountered a polar bear family of a mother and two cubs. During the mother's increasingly desperate search for scarce food, the starving family was forced to use precious energy swimming between rocky islands due to melting sea ice. This mother and her cubs should have been hunting on the ice, even broken ice. But they were in water that was open for as far as the eye could see. The weaker cub labours trying to keep up and the cub strained to pull itself ashore and then struggled up the rock face. The exhausted cub panicked after losing sight of its mother and its screaming could be heard from across the water. That's the reality of the world they live in today. To see this family with the cub, struggling due to no fault of their own is extremely heart breaking.”''</blockquote>


==== Interview questions ====
Two semi-structured interviews were held per participant, one after the first three conditions, in which the story is the same, and one after the second three conditions. These interviews were practically the same, except for one extra question (question 7) in the second interview (see below). The interview questions 1-8 were mandatory and questions a-q were optional to use as probing questions. Researchers were free to use these probing questions or use new questions to get a deeper understanding of the participant's opinion during the interview. The interviews also included a short explanation beforehand. The interview guide was printed for each interview with additional space for taking notes.


The interview questions were largely based on literature research. They were divided into three different categories: attitude, trust and comfort. Overall, these three categories should give insight into the general acceptance of robots by students<ref>Wagner Ladeira, M. G. P., & Santini, F. (2023). Acceptance of service robots: a meta-analysis in the hospitality and tourism industry. Journal of Hospitality Marketing \& Management, 32(6), 694–716. <nowiki>https://doi.org/10.1080/19368623.2023.2202168</nowiki></ref><ref>Heerink, M. (2011). Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults. Proceedings of the 6th International Conference on Human-Robot Interaction, 147–148. <nowiki>https://doi.org/10.1145/1957656.1957704</nowiki></ref><ref>Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing Acceptance of Assistive Social Agent Technology by Older Adults: the Almere Model. International Journal of Social Robotics, 2(4), 361–375. <nowiki>https://doi.org/10.1007/s12369-010-0068-5</nowiki></ref>. Firstly, a manipulation check was done to make sure the participants had a correct impression of the story and the emotion the robot was supposed to convey. These were followed by questions about the attitude, focusing on the general impression that the students had of the robot, their likes and dislikes towards the robot, and their general preference for a specific robot emotion. These questions were mainly based on the research of Wu et al (2014)<ref>Wu, Y.-H., Wrobel, J., Cornuet, M., Kerhervé, H., Damnée, S., & Rigaud, A.-S. (2014). Acceptance of an assistive robot in older adults: a mixed-method study of human–robot interaction over a 1-month period in the Living Lab setting. ''Clinical Interventions in Aging'', ''9''(null), 801–811. <nowiki>https://doi.org/10.2147/CIA.S56435</nowiki></ref> and Del Valle-Canencia (2022)<ref>Valle-Canencia, Marta & Moreno, Carlos & Rodríguez-Jiménez, Rosa-María & Corrales-Paredes, Ana. (2022). The emotions effect on a virtual characters design–A student perspective analysis. Frontiers in Computer Science. 4. 10.3389/fcomp.2022.892597. </ref>. The questions about the trustworthiness of the different emotional states of the robot are based on Jung et al. (2021)<ref>Jung, M., Lazaro, M. J. S., & Yun, M. H. (2021). Evaluation of Methodologies and Measures on the Usability of Social Robots: A Systematic Review. ''Applied Sciences'', ''11''(4). <nowiki>https://doi.org/10.3390/app11041388</nowiki></ref> and Madsen and Gregor (2000)<ref>Madsen, Maria & Gregor, Shirley. (2000). Measuring human-computer trust. </ref>. The comfort-category focused mainly on how comfortable the participants felt with the robot. These questions were based on research (Erken, 2022)<ref>Erken, E. (2022). Chatbot vs. Social Robot: A Qualitative Study Exploring Students’ Expectations and Experiences Interacting with a Therapeutic Conversational Agent. Tilburg University, Tilburg. Microsoft Word - EstherErken_2049726_MasterThesis_Final.docx (uvt.nl)</ref>. Lastly, the participants were asked whether they think Pepper would be suitable to use on campus and for which tasks.
The complete interview, including the introduction, can be seen below:
''You have now watched three iterations of the robot telling a story. During each iteration the robot had a different emotional state. We will now ask you some questions about the experience you had with the robot. We would like to emphasize that there are no right or wrong answers. If there is a question that you would not like to answer, we will skip it. ''


# ''What was your impression of the story that you heard?''
#* ''a. Briefly describe, in your own words, the emotions that you felt when listening to the three versions of the story?''
#* ''b. Which emotion did you think would best describe the story?''
# ''How did you perceive the feelings that were expressed by the robot?''
#* ''c. How did the robot convey this feeling?''
#* ''d. Did the robot do something unexpected?''
# ''What did you like/dislike about the robot during each of the three emotional states?''
#* ''e. What are concrete examples of this (dis)liking?''
#* ''f. How did these examples influence your feelings about the robot?''
#* ''g. What were the effects of the different emotional states of Pepper compared to each other?''
#** ''i. What was the most noticeable difference?''
# ''Which of the three robot interactions do you prefer?''
#* ''h. Why do you prefer this emotional state of the robot?''
#* ''i. If sad/happy chosen, did you think the emotion had added value compared to the neutral state?''
#* ''j. If neutral chosen, why did you not prefer the expression of the matching emotion?''
# ''Which emotional state did you find the most trustworthy? And which one the least trustworthy?''
#* ''k. Why was this emotional state the most/least trustworthy?''
#* ''l. What did the robot do to cause your level of trust?''
#* ''m. What did the other emotional states do to be less trustworthy?''
# ''Which of the three emotional states of the robot made you feel the most comfortable in the interaction?''
#* ''n. Why did this emotional state make you feel comfortable?''
#* ''o. What effect did the other emotional state have?''
# ''Do you think Pepper would be suitable as a campus assistant robot and why (not)?''
#* ''p. If not, in what setting would you think it would be suitable to use Pepper?''
#* ''q. What tasks do you think Pepper could do on campus?''
# ''Are there any other remarks that you would like to leave, that were not touched upon during the interview, but that you feel are important?''


=== Procedure ===
[[File:Setup experiment.png|thumb|259x259px|Figure 2: Setup of experiment]]
When participants entered the experiment room, they were instructed to sit down on one of the five chairs in front of the robot Pepper (see Figure 2). Each chair had a similar distance to the robot of about 1 meter. The robot Pepper was already moving rather calmly to get participants used to the robot movements. This was especially important since some participants were not yet familiar with Pepper.
 
The experiment started with a short introduction of the study, as can also be seen in the research protocol linked below, and the request to read and sign the [https://tuenl-my.sharepoint.com/:b:/g/personal/m_j_d_ruiter_student_tue_nl/EXy_1fH8mkpHqY--y5QjqwsBhFvjL0Vfjfygcd_xZ0XAaw?e=aDDDxX consent form] and to fill in a demographic survey on LimeSurvey. There were two sessions with five participants each. First, the participants listened to Pepper, who told the group a story about a polar bear, either a positive or a negative one (see Tables 4 and 5 for the exact condition order per session). Pepper told this story three times, each time with a different emotion: ‘happy', ‘sad’ or ‘neutral’. These three iterations took approximately 6 minutes. When Pepper was finished, the five participants were each asked to follow one of the researchers into an interview room. These one-on-one interviews were held simultaneously and lasted approximately 10 minutes. After completing the interview, the participants went back to the room where they started and listened again to a story about a polar bear. If they had already listened to the positive story, they now proceeded to the negative one and vice versa. After Pepper had finished this story, the same interview was held under the same circumstances. After completion of the interview, there was a short debriefing. In total, the experiment lasted about 45-60 minutes.
{| class="wikitable"
|+Table 4: Condition order for experiment session 1
|Round
|Story
|Emotion 1
|Emotion 2
|Emotion 3
|-
|1
|'''Negative'''
|Neutral
|Sad
|Happy
|-
|2
|'''Positive'''
|Neutral
|Happy
|Sad
|}
{| class="wikitable"
|+Table 5: Condition order for experiment session 2
|Round
|Story
|Emotion 1
|Emotion 2
|Emotion 3
|-
|1
|'''Positive'''
|Neutral
|Happy
|Sad
|-
|2
|'''Negative'''
|Neutral
|Sad
|Happy
|}
An elaborate [https://tuenl-my.sharepoint.com/:b:/g/personal/m_j_d_ruiter_student_tue_nl/ET9L2jzvtK1OhfzfdAz8doIBh4Pjd5W_tnQ7WqxCmViazg?e=ZfgNBq research protocol] was also made, which explains in more detail what should be done during each part of the experiment.
 
=== Data analysis ===
The data analysis done in this research was a thematic analysis of the interviews. The interviews were audio-recorded, and a transcript was made from the recordings using Descript. As a first step in the data analysis process, the raw transcripts were cleaned. This included removing filler words, like “uhm” and the Dutch “nou”, and other stop words. The speakers in the transcript were then labelled as “interviewer” and “participant X” to make the data analysis easier. After the raw data was cleaned, the experimenters were instructed to become familiar with the transcripts, after which they could start the initial coding stage. This means highlighting important answers and phrases that could help answer the research question. These highlighted texts were then coded using a short label. All the above steps were done by the experimenters individually for their own interviews.
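As an illustration of this cleaning step, the sketch below shows how filler-word removal and speaker relabelling could be automated. In the study the cleaning was done by the experimenters themselves; the filler list, speaker mapping and input format here are hypothetical examples.
<syntaxhighlight lang="python">
# Hypothetical sketch of the transcript-cleaning step: strip filler words and
# relabel speakers. Assumes an export with one "Speaker N: utterance" line per turn.
import re

FILLERS = re.compile(r"\b(uhm|uh|ehm|nou)\b[,.]?\s*", re.IGNORECASE)  # example fillers
SPEAKERS = {"Speaker 1": "Interviewer", "Speaker 2": "Participant X"}  # example mapping

def clean_transcript(raw):
    cleaned = []
    for line in raw.splitlines():
        if not line.strip():
            continue  # skip empty lines
        speaker, _, text = line.partition(":")
        label = SPEAKERS.get(speaker.strip(), speaker.strip())
        cleaned.append(label + ": " + FILLERS.sub("", text).strip())
    return "\n".join(cleaned)

print(clean_transcript("Speaker 1: So, uhm, what did you think?\n"
                       "Speaker 2: Nou, it was, uhm, quite natural."))
</syntaxhighlight>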
 
The next step was combining and refining the codes. This was done during a group meeting where all the codes were carefully examined and combined to form one list of codes. After this, the experimenters recoded their own interviews with this list of codes, and another experimenter checked each recoded transcript. Any uncertainties or discussions on coding that arose were discussed in the next group meeting. In this meeting, some codes were added, removed or adjusted, and the final list of codes was completed. The codes in this final list are divided into themes that can be used to eventually answer the research question. After this meeting, the interviews were again recoded using the final list of codes and the results were compiled from the final coding. The fully coded transcripts were all combined in one [https://tuenl-my.sharepoint.com/:b:/g/personal/m_j_d_ruiter_student_tue_nl/ETVzupG71wdOriDVQRlreAgBKTvjSOSi7fzC7j-Cz2UYuw?e=ElB0ev file].
 
== Results ==
 
=== Results of thematic analysis ===
An overview of the final codes and themes that emerged from the thematic analysis is shown in Figure 3, and an explanation of each code and theme is provided in Table 6.
{| class="wikitable"
|+Table 6: Overview of themes and codes with explanation
|'''Overarching themes'''
|'''Themes'''
|'''Subthemes'''
|'''Code'''
|'''Explanation'''
|-
| rowspan="10" |'''Impression of the experimental conditions'''
|'''Impression of story'''
|
|
|This theme includes all the impressions from the participants of  the stories told by the robot
|-
| rowspan="4" |
| rowspan="4" |
|Negative story perceived as sad
 
|Negative story is perceived as sad or described in a sad way. Sad  undertones included.
|-
|Negative story perceived as documenting
 
|Negative story is perceived as documenting. This is a neutral  factual way of talking about the story.
|-
|Not/other understanding of the story
 
|Difficulty understanding/ following the story, influence on later  perceptions or preferences
|-
|Positive story perceived as positive
|Positive story is perceived as happy, funny, entertaining,  enthusiastic, etc. The tone is positive.
|-
|'''Robot emotion perception'''
|
|
|This theme includes all the perceptions and opinions on the  robot's emotional behavior.
|-
| rowspan="4" |
| rowspan="2" |Happy emotion  perception
|Happy robot behavior  perceived as happy
|All behavior of the  happy robot that was perceived as happy, entertaining, funny, excited,  energetic, etc.
|-
|Happy robot behavior  perceived as chaotic or not natural
|All behavior of the  happy robot that was perceived as too chaotic or random in movements,  sometimes leading to not being natural.
|-
| rowspan="2" |Sad emotion  perception
|Sad robot behavior  perceived as sad
|The behavior of the  sad robot was perceived as sad.
|-
|Sad robot behavior  perceived as shy or not confident
|All behavior of the  sad robot that was perceived as the robot feeling shy, hesitant,  uncomfortable, not confident, etc.
|-
| rowspan="5" |
| rowspan="5" |
| rowspan="2" |
|Sad robot behavior  perceived as uninterested
|All behavior of the  sad robot that was perceived as the robot not being interested in what it was  telling.
|-
|Sad robot behavior  perceived as documenting
|All behavior of the  sad robot that was perceived as the robot being serious or telling a story in  a documenting way.
|-
| rowspan="3" |Neutral emotion  perception
|Neutral robot  behavior perceived as neutral
|All behavior of the  neutral robot that is perceived as not having a specific emotion
|-
|Neutral robot  behavior perceived as natural
 
|All behavior of the  neutral robot that is perceived as natural human-like behavior.
|-
|Neutral robot  behavior perceived as documenting
|All behavior of the  neutral robot that was perceived as the robot being serious or telling a  story in a documenting way.
|-
| rowspan="12" |'''Influence on acceptance'''
|'''Influence of robot on participant'''
|
|
|This theme includes all the influences that participants  experienced with direct regard to the robot's emotional behavior.
|-
| rowspan="6" |
| rowspan="2" |Influence happy  robot
|Engaged by happy  robot
|The participant felt  inspired, engaged or encouraged to listen to the story and pay attention by  the happy robot.
|-
|Distracted by happy  robot
|The participant felt  distracted by the happy robot movements and noise of movements
|-
| rowspan="2" |Influence sad robot
|Bored/Disengaged by  sad robot
|The sad robot was  perceived as boring and participants were disengaged by the robot.
|-
|Less distracted by  sad robot
|The sad robot was  perceived as less distracting or allowing for more focus on the message than  the other robots
 
 
|-
| rowspan="2" |Influence neutral  robot
|Engaged by neutral  robot
|The participant felt  inspired, engaged or encouraged to listen to the story and pay attention by  the neutral robot.
|-
|Less distracted by  neutral robot
|The neutral robot  was perceived as less distracting or allowing for more focus on the message  than the other robots.
|-
| rowspan="5" |
| rowspan="5" |
|Influence of participant expectation and experience
|Participant remarks on how their expectations of the experiment  and experience with the robot Pepper influenced their perception
|-
|Influence robot behavior on trustworthiness
|The reasoning behind why a certain robot was/wasn’t trustworthy  based on its behavior.
|-
|Influence robot behavior on comfortability
|The reasoning behind why a certain robot did make the participant  feel (un)comfortable based on its behavior.
|-
|Importance eye contact
|Participant addresses the effect of the robot (not) making eye contact.
|-
|Importance of gesture timing with speech
|Participant addresses that gestures did not match with speech and what effect this had on them.
|-
| rowspan="11" |
| rowspan="6" |'''Emotion-message match'''
|
|
|These are the codes that look at the match between the emotional  behaviors displayed by the robots and the message that is told by the robot  and how important emotions are in storytelling.
|-
| rowspan="2" |Opinion congruent  emotion
|Congruent emotion  did match
|The robot behavior  that was expected to match did match the story that the robot told.
|-
|Congruent emotion  didn't match
|The robot behavior  that was expected to match the story did not match according to participants.
|-
| rowspan="2" |Opinion  non-congruent emotion
|Non-congruent  emotion didn't match
|The robot behavior  did not match the story that the robot told.
|-
|Non-congruent  emotion did match
|The robot behavior  that was not expected to match the story did match according to participants,  though this seemed due to numerous different reasons such as not  understanding the story.
|-
|
|Value of emotion to story
|Discusses if the participant felt that the emotion had added  value to the story-telling or not.
|-
|'''Emotional robot behavior preference'''
|
|
|These are all the preferences that the participants expressed  regarding the robot's emotional behavior.
|-
| rowspan="4" |
| rowspan="4" |Robot preferences
|Neutral robot  preferred
|Participant indicates a preference for the neutral robot.
|-
|Happy robot  preferred
|Participant indicates a preference for the happy robot.
|-
|Sad robot preferred
|Participant  indicates a preference for the sad robot.
|-
|Combination of  robots preferred
|The participant  preferred the robot with a combination or switch between multiple emotional states.
|-
| rowspan="4" |
|
|
|Voice preference
 
|Discusses what the participant liked/disliked about the voice of the robot.
|-
|'''Application of Pepper'''
|
|
|This theme looks at the real-life applications of Pepper and whether the robot would be suitable for them.
|-
| rowspan="2" |
| rowspan="2" |
|Not suitable implementation
|The Pepper robot is not suitable, or needs adjustments before it is suitable, for real-life use.
|-
|Suitable implementation
|The Pepper robot would be suitable for specific roles (navigation, guidance, administration tasks etc.) as is.
|}
[[File:Codes overview.png|none|thumb|580x580px|Figure 3: Overview of codes]]
 
==== Impression of experimental conditions ====
The overarching theme ‘'''Impression of experimental conditions'''’ arose from questions regarding how the participants perceived the emotional story and the emotional robot behavior. These answers were gathered as a manipulation check on the experimental conditions, i.e. to check whether they were perceived as intended.
 
<u>Impression of story</u>
 
For the positive story told by Pepper, nearly all participants perceived it as a positive story and described it as happy, funny, entertaining or with another positive adjective, as shown in these quotes: “''It was a very cute story. A nice story.” (Participant 013 – PS, [00:00:39]), “It was a happy story just because, it had a happy ending.” (Participant 014 – PS, [00:01:06])'' and ''“I thought it was a funny story.” (Participant 023 – PS, [00:00:33]).''
 
The negative story, on the other hand, was sometimes perceived as sad and sometimes as more documenting and factual, with no clear emotional message. The following quotes illustrate this: ''“The general tone of the story was a bit sad, a bit anxious.” (Participant 012 – NS, [00:00:45])'' and ''“The story sounded like a documentary.” (Participant 022 – NS, [00:01:09]).''
 
Moreover, not all participants understood the story as intended, or at all, and this seemed to have influenced their attitude towards the emotional states of the robot. Some participants indicated that they found the story hard to follow due to difficult words, or that they did not know what to expect at the beginning of the experiment and were a bit distracted.
 
<u>Robot emotion perception</u>
 
Additionally, the participants were asked which emotion they would assign to each of the robot's three emotional states. For the robot behavior that was intended as happy, most participants identified it as happy, enthusiastic, or another similar positive adjective. However, the happy robot behavior was also often perceived as chaotic, random or programmed in its movements, sometimes leading to the behavior being seen as not natural. This could also be linked to the fact that many participants noticed that the timing of the gestures did not fit the context of the story at that point. This was most noticeable in the happy robot, but participants also made this comment about the other robot conditions. Examples are given in the following quotes: ''“It seemed just very excited and maybe happy to tell the story” (Participant 024 – PS, [00:02:02]), “The more energetic one is just too much for me. I think the gestures don't make any sense. It's just really random and really chaotic.” (Participant 014 – PS, [00:07:32])'' and ''“I also felt that the arm movements again did just not at all correlate with what a person presenting the same story would do. Because it just was too much." (Participant 012 – PS, [00:02:01])''.
 
Concerning the sad robot behavior, participants' perceptions of its undertone varied more, from sad to shy to uninterested to more seriously documenting the story. The following quotes again illustrate the different opinions of the participants: ''"The robot was a bit more sad and more serious in their voice” (Participant 021 – NS, [00:02:18])'' and ''“It was more like it was a person presenting who was afraid to present." (Participant 012 – NS, [00:03:15])''.
 
Finally, the neutral robot behavior was successfully interpreted as neutral in emotion by most participants, and additionally as more seriously documenting the story. The neutral robot was also often seen as the most natural and human-like behaving robot. Some examples from participants illustrate this: ''“The first one wasn't really an emotion, but more informative or something.” (Participant 022 – NS, [00:01:09])'' and ''"It felt more human like.” (Participant 024 – PS, [00:01:28]).''
 
==== Influence on acceptance ====
The next overarching theme ‘'''Influence on acceptance'''’ covers various factors that participants experienced regarding the robot's behavior. This includes specific robot behavior characteristics that influenced how engaged participants felt while listening to the story, such as eye contact and the timing of gestures, but also behavior that was noted as influencing comfortability and trustworthiness. These factors were often mentioned in relation to how engaged participants felt and how natural the robot behavior was perceived to be.
 
<u>Influence of robot on participant</u>
 
Continuing with the subtheme ‘Influence of robot on participant’, the different emotional robot behaviors had different effects on the participants in terms of their focus on the story. Almost all participants indicated multiple times that the happy robot engaged them to listen, but that the extreme movements and the associated noise of the actuators distracted them a lot: ''“It tried to actively engage listeners” (Participant 012 – PS, [00:02:01])'' and “''It became distracting, I thought. But mostly also because the movement of the robot makes noise, and I'm going to pay more attention to that.” (Participant 022 – PS, [00:01:47]).''
 
The sad robot seemed to have the opposite effect: participants were less distracted by the robot, but sometimes to the extent that they felt disengaged or even bored, as demonstrated here: ''“There was too little motion and hand gestures. And I think for a long period of time, that would be too boring to listen to. But, the benefit from that, is that you are only focused on the story itself, not on the robot movement” (Participant 014 – NS, [00:03:40])''.
 
The neutral robot also distracted the participants less, but some participants still felt engaged to pay attention to the story: “''It's not as distracting as the other ones and the most being focused on the story.” (Participant 021 – NS, [00:05:09])''. These factors clearly influenced the opinion of the participants towards each of the robot's emotional states and ultimately may have determined their preference for a certain robot emotion for the story that was told.
 
<u>Eye contact, timing of gestures, trust and comfortability</u>
 
For example, for the low engagement level of the sad robot, many participants gave their need for eye contact as an explanation. They felt less connected and engaged with the robot since it only looked downwards. Many participants also noted the incongruence between the gestures and speech as a reason for their low engagement with the robot: “''Maybe when the story is building up, you can try to see if one gesture fits better with that part of the story. So you can really connect to it because you have a beginning point and end point of it” (Participant 011 – NS, [00:03:32])''.  
 
From time to time, these factors were linked to the robot's trustworthiness and the participants' comfort while interacting with the robot. There were also other reasons for a difference in trusting or feeling comfortable around a certain robot behavior. For example, participants mentioned that they felt more comfort and trust with the happy robot during the positive story since the gestures fitted: “''Now I would say the happy one the most trustworthy, because it fits the story well. [...] I noticed the gestures is less heavy than in the previous story. Maybe because it blended so well.” (Participant 011 – PS, [00:05:35]).'' However, when participants felt that the emotion did not match or was unpredictable, they felt less comfortable with the happy robot: ''“The active [happy] robot makes it maybe a bit uncomfortable because it was too happy in combination with the story that was told” (Participant 021 – NS, [00:05:40])'' and “''I just felt like it was trying way too hard. [...] He basically just stood there, I know. But it felt a bit unpredictable or something.''” ''(Participant 023 – PS, [00:08:23]).''
 
The lack of eye contact from the sad robot was often mentioned as a reason for a low level of trust and/or comfort: “''And the scary one looked only down at you with those eyes. So it lacked a bit of motion. So that just affected their comfort.” (Participant 011 – PS, [00:07:25]).'' Some participants also found the sad robot to be less trustworthy because they perceived it as shy. “''Well, the third one [sad robot] [...] was not trustworthy as it was telling the story as if it didn't want to tell you the story because it was [...] shy and drawn back. The second one [happy robot], you could call it trustworthy [...]. It was actively trying to convince the listeners that that story was of a certain emotion and maybe it overdid that.” (Participant 012 – PS, [00:08:41]).''
 
Most often, the participants saw the neutral robot as the most comfortable or trustworthy, for example because it was the most natural: “''Because if it's natural, then it's trustworthy.” (Participant 013 – PS, [00:06:12]).'' Also, the neutral robot made more eye contact, which many participants found more trustworthy: ''“Because the robot was really looking at us. And I think you would do that when you're confident about your own story and want to be trusted.” (Participant 021 – PS, [00:05:42]).''
 
Moreover, for some participants, their expectations of the experiment had a notable influence on what they noticed during the conditions, or their previous experience with the robot determined a strong overall attitude towards it. “''Because this was totally new, I didn't know what to expect. that's also maybe an aspect that maybe influences my thoughts upon the robot itself. Just because when you get the story from the robot, then a lot of things pop into my mind, [...] so you're really distracted by many things. And then, the second and the third robots [happy and sad robot], you're more like at rest [...]. So that may cause a different outputs from people, when they encounter to those robots for the first time.” (Participant 014 – NS, [00:07:53]).''
 
<u>Emotion-message match</u>
 
Participants also noted a congruence or incongruence between the emotional state of the robot and the story that it told. Most often, participants indicated a match between the intended congruent emotional state and the story, or a mismatch between the intended incongruent emotional state and the story. The following quotes exemplify these occasions: ''“Its emotions, [...] the sad eyes, matched better with the story that was being told.” (Participant 012 – NS, [00:05:35])'' and ''“This story was of course happy and then, more happy emotions would also be more realistic for the audience” (Participant 014 – PS, [00:01:57]).''
 
However, during multiple interviews, participants had another understanding of the emotional state or story and therefore did not find it fitting, or they understood what emotional state was congruent, but they did not feel that it fitted well due to other disliked prominent behavior of the robot. As a result, most of these participants preferred the neutral robot, though some of them felt that the intended non-congruent emotion matched better. This is mentioned in the next quotes: ''“I would say the energetic one, because that would match the story. But for some reason, if I would have to trust the robot itself, then I would say the medium one [neutral robot], just because the over energetic one is... then I would get the feeling that they want to convince me” (Participant 014 – PS, [00:05:59])'' and “''The [negative] story sounded more like a documentary. I thought the first one [neutral robot] was better for a documentary. The second one [sad robot] was too sad. And the third one [happy robot] was too enthusiastic.” (Participant 022 – NS, [00:03:36])''.
 
Occasionally, it was also discussed whether participants felt that the emotional states of the robot added value to the storytelling. When the neutral or incongruent emotion was preferred, participants frequently indicated that the emotion had no added value, while the opposite held for congruent emotion preferences. For example, one of the participants preferred the happy robot during the positive story and mentioned that this happy emotion did give the story more value: “''I could feel this joy of the people there, that could kind of adopt a polar bear, and a polar bear that could see his friends.''” (''Participant 011 – PS, [00:05:09]).'' However, there were also participants who did not want any emotion in the storytelling: ''“I don't think so. I think when you're trying to convey a story, you just want to have the story told. And you want the story to be the main purpose of telling.” (Participant 015 – PS, [00:04:30]).''
 
<u>Emotional robot behavior preference</u>
 
Taking all this into account, the theme ‘Emotional robot behavior preference’ covers which emotional state each participant named as their overall preference, and which state they found most trustworthy and most comfortable, for each story. The results can be seen in Table 7, which is discussed later.
 
Additionally, some participants also discussed specifically which version(s) of the voices they liked or disliked and why. For example, “''Now the last one [voice of happy robot] was definitely more engaging, but sometimes it was a bit loud, even though it didn't really fit the story. The first one [voice of neutral robot] was too neutral, so the intonations were on one line, so maybe a combination? Just having more intonation, I would say.” (Participant 011 – NS, [00:07:32])''. Sometimes, the participants also found the voice of the sad robot hard to hear: ''“For the second one [sad robot], it was [...] less audible than the other, than the first one [neutral robot].” (Participant 015 – NS, [00:00:42]).''
 
<u>Application of Pepper</u>
 
Finally, the theme ‘Application of Pepper’ encompasses the responses that participants gave to the question of which applications Pepper would be suitable for. The interviews showed that most participants could see Pepper being used in real life, but mostly for short, easy tasks, such as pointing people in the right direction on campus or giving some basic information: “''I can already see Pepper standing in front of the university and leading the way around. Answering questions like where do you have to go, and then the road.” (Participant 013 – PS, [00:07:59]).'' However, they also noted possible limitations in Pepper’s implementations. For example, they did not believe that Pepper could have deeper conversations with people, so it would not be suitable for more difficult campus-related tasks such as teaching or delivering more serious or emotional messages. Some also noted that they did not see the usefulness of Pepper, since humans will usually be able to do a task better. For example, ''“For storytelling, making people aware of really serious things, then I would think maybe less suitable because I don't really see the seriousness or maybe I don't really feel like I should listen to the robot. I think a real human would make more impact.” (Participant 021 – NS, [00:06:51])'' and “''There is already this reception down below as well, where you can ask questions, where there's always a person, which will still be better than most robots.” (Participant 015 – PS, [00:06:25]).'' Nevertheless, most participants were positive about Pepper, although improvements would first have to be made.
 
=== Participants' preferences for emotional states of the robot ===
As introduced before, Table 7 shows which emotional state the participants preferred overall, which they found most trustworthy and which they found most comfortable. The table lists the total scores that each robot received from all the participants. When a participant preferred a combination of robots, each robot in that combination received a fraction of a full point, as illustrated in the sketch below.
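To illustrate this scoring rule, the following minimal sketch tallies such preferences in Python. This is not the analysis code used in the study; the function and the example preference lists are hypothetical.
<syntaxhighlight lang="python">
# Minimal sketch of the fractional scoring used for Table 7 (hypothetical,
# not the study's actual analysis code): a participant who names a
# combination of k robots gives each of them 1/k point.
from fractions import Fraction
from collections import defaultdict

def tally(preferences):
    """preferences: one list of named robots per participant,
    e.g. [["neutral"], ["happy", "neutral"], ...]."""
    scores = defaultdict(Fraction)
    for named in preferences:
        share = Fraction(1, len(named))  # split one full point over the combination
        for robot in named:
            scores[robot] += share
    return dict(scores)

# Hypothetical example: eight participants name the neutral robot alone and
# two name a happy/neutral combination, giving neutral 9 and happy 1.
print(tally([["neutral"]] * 8 + [["happy", "neutral"]] * 2))
</syntaxhighlight>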
 
From Table 7 it can be seen that the neutral robot was scored the highest by the participants in all three categories, for both the positive and the negative story. Of the two more expressive robots, the happy robot scored at least as high as the sad robot in every category for both stories, while the sad robot received only a small fraction of the points.
 
In addition to the stand-alone preferences of the participants, it was also examined whether participants preferred the same robot, or the same combination of robots, in all three categories. The results can be found in Tables 8 and 9; a sketch of how such consistency labels can be derived is given below. In summary, for the positive story, seven out of ten participants preferred the same robot in all three categories. Most participants chose the neutral robot in all three categories, though sometimes a combination of the neutral and happy robot was preferred. The negative story had somewhat more mixed results: only three participants preferred the same robot in all three categories. Again, most of these preferred the neutral robot, while some preferred the happy robot.
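The ‘Conclusion’ labels in Tables 8 and 9 follow from a plain comparison of the three category choices, as in this illustrative sketch (the example choices shown are hypothetical):
<syntaxhighlight lang="python">
# Sketch of deriving the "Conclusion" column of Tables 8 and 9 from a
# participant's three choices; the example input below is hypothetical.
def consistency_label(general, trust, comfort):
    if general == trust == comfort:
        return "All three the same"
    if trust == comfort:
        return "Trust + Comfort the same"
    if general == trust:
        return "General + Trust the same"
    if general == comfort:
        return "General + Comfort the same"
    return "All three different"

print(consistency_label("neutral", "neutral", "sad"))  # General + Trust the same
</syntaxhighlight>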
{| class="wikitable"
|+Table 7: Indicated preferences for emotional states of the robot during each of the stories
| colspan="3" |General
|-
|
|Positive story
|Negative story
|-
|Happy robot
|2
|2⅔
|-
|Sad robot
|0
|2⅔
|-
|Neutral robot
|8
|4⅔
|-
| colspan="3" |Trust
|-
|
|Positive story
|Negative story
|-
|Happy robot
|3
|1 ½
|-
|Sad robot
|0
|-
|Neutral robot
|7
|8
|-
| colspan="3" |Comfort
|-
|
|Positive story
|Negative story
|-
|Happy robot
|2 ½
|2 ½
|-
|Sad robot
|1
|-
|Neutral robot
|6 ½
|7
|}
{| class="wikitable"
|+Table 8: Indicated preference distribution for the positive story, showing if participants chose the same robot behavior for multiple preference categories
|Participant
|General
|Trust
|Comfort
|Conclusion
|-
|1
|Neutral
|Neutral
|Neutral
|All three the same
|-
|2
|Neutral
|Happy
|Happy
|Trust + Comfort the same
|-
|3
|Combination happy and  neutral
|Combination happy and neutral
|Neutral
|General + Trust the same
|-
|4
|Happy
|Happy
|Happy
|All three the same
|-
|5
|Neutral
|Neutral
|Sad
|General + Trust the same
|-
|6
|Neutral
|Neutral
|Neutral
|all three the same
|-
|7
|Neutral
|Neutral
|Neutral
|all three the same
|-
|8
|Combination happy and  neutral
|Combination happy and  neutral
|Combination happy and  neutral
|all three the same
|-
|9
|Neutral
|Neutral
|Neutral
|all three the same
|-
|10
|Neutral
|Neutral
|Neutral
|all three the same
|}
 
 
{| class="wikitable"
|+Table 9: Indicated preference distribution for the negative story, showing if participants chose the same robot behavior for multiple preference categories.
|Participant
|General
|Trust
|Comfort
|Conclusion
|-
|1
|Combination happy and  neutral
|Neutral
|Combination happy and  neutral
|General + Comfort the same
|-
|2
|Combination sad and neutral
|Neutral
|Neutral
|Trust + Comfort the same
|-
|3
|Neutral
|Neutral
|Neutral
|All three the same
|-
|4
|Combination happy and  neutral
|Neutral
|Combination happy and neutral
|General + Comfort the same
|-
|5
|Combination of all three the  robots
|Combination happy and  neutral
|Combination happy and  neutral
|Trust + Comfort the same
|-
|6
|Sad
|Neutral
|Combination sad and neutral
|All three different
|-
|7
|Happy
|Happy
|Happy
|All three the same
|-
|8
|Neutral
|Neutral
|Neutral
|All three the same
|-
|9
|Combination sad and happy
|Combination sad and neutral
|Neutral
|All three different
|-
|10
|Combination of all three the  robots
|Neutral
|Neutral
|Trust + Comfort the same
|-
|10
|Neutral
|Neutral
|Neutral
|All three the same
|}
 
== Discussion ==
 
=== Main findings ===
When looking back at the research question, “To what extent does a match between the displayed emotion of a social robot and the content of the robot’s spoken message influence the acceptance of this robot?”, in combination with the results, it becomes clear that most participants were not strongly influenced by the match between emotion and story. As shown in Table 7, the neutral robot state was clearly the most preferred for each story and each aspect of acceptance. Our pre-research expectation that the congruent emotion would be most preferred was therefore not supported. Taking into account the in-depth reasoning of participants during the interviews, this preference did not always depend on how suitable the robot's emotional state was for the specific story. Rather, in most cases, participants' preferences stemmed from disliking the happy and sad emotional robot states and finding the neutral behavior the most natural and human-like, or a good middle ground between the more extreme states.
 
Moreover, it was expected that the importance of displaying a congruent emotion would be more apparent for the negative story than for the positive story, meaning a clearer preference for the congruent emotion and a weaker preference for the incongruent emotion compared to the positive story. However, Table 7 shows inconsistent results. Only for the overall preference is the congruent emotion more preferred for the negative story than for the positive story. In all other cases, the congruent emotion is more preferred and the incongruent emotion less preferred for the positive story than for the negative story. This may indicate that congruent emotion displaying is more important for positive messages. The higher level of trust in the happy robot could also be explained by positive, in comparison to negative, emotional expression leading to higher anthropomorphic trust in social robots<ref>Song, Y., Tao, D., & Luximon, Y. (2023). In robot we trust? The effect of emotional expressions and contextual cues on anthropomorphic trustworthiness. Applied Ergonomics, 109, 103967. <nowiki>https://doi.org/10.1016/J.APERGO.2023.103967</nowiki>  </ref>.
 
Comparing these results to other similar research, such as the study of Van Otterdijk et al. (2021)<ref name=":5" />, shows contrasting findings. Van Otterdijk and colleagues found no clear preference among different robot behaviors, whereas our study found a prominent preference for the neutral robot behavior. Moreover, they found that for delivering a negative message, the congruent sad robot behavior was significantly more preferred. This again contrasts with our results, where the importance of congruence was more apparent for the positive story. However, it is important to note a clear difference in robot behavior, namely that in the study of Van Otterdijk et al., Pepper was approaching participants, whilst our Pepper was standing still. As the authors also explain, the surprising results could also be caused by the emotional robot behavior not being experienced as intended. In their study, the neutral and happy behavior were perceived as quite similar, though in our study participants indicated a clear difference. Moreover, they also discussed that in the context of elderly people in a care center, bringing sad news may be more triggering. Appel et al. (2021)<ref name=":10">Appel, M., Lugrin, B., Kühle, M., & Heindl, C. (2021). The emotional robotic storyteller: On the influence of affect congruency on narrative transportation, robot perception, and persuasion. Computers in Human Behavior, 120. <nowiki>https://doi.org/10.1016/J.CHB.2021.106749</nowiki>  </ref> conducted a similar study with the robot Reeti. However, the differences are that only facial expressions were manipulated, and the robot told the stories from a first-person perspective. They found that congruent emotion displaying positively affected transportation into the story world and led to more positive evaluations of the robot. The differences in results can again be explained by differences in robot behavior or the robot's anthropomorphism, but also by the extent to which the stories and emotions were interpreted as intended.
 
Though the results do not give a clear answer to the research question, the in-depth analysis of the interviews resulted in some theoretical implications and practical recommendations for programming emotional behavior with Pepper. The results showed varying opinions about suitable implementations for the Pepper robot as a campus assistant, such as guiding or providing general information, but no tasks that deal with emotions or responsibility, and occasionally still a preference for humans performing the task. This indicates that the targeted user group is not yet very accepting of Pepper, based on the interaction during the experiment and prior attitudes, especially when it comes to its emotion displaying. The participants' dislike of the emotional states probably influenced this. Consequently, improvements must be made to how Pepper displays emotions before users' attitudes towards Pepper engaging in tasks involving emotions will change. First of all, the happy robot behavior was very often perceived as too extreme in its movements, while the sad robot was noted as moving too little, and the neutral robot was seen as the most natural. The emotional robot behavior used in this study, and the differences between states, were too extreme. The participants clearly indicated that they prefer more subtle changes and natural movements. As a practical recommendation, we therefore suggest using the neutral robot behavior as a basis from which only small adjustments are made to create robot behavior for other emotions. These adjustments should be carefully determined by investigating how humans naturally convey these emotions while telling a story.
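As an illustration of this recommendation, the sketch below varies Pepper's voice only slightly around the neutral baseline. It assumes the NAOqi Python SDK that Choregraphe builds on; the robot address and the pitch/speed values are illustrative assumptions, not tuned or validated settings.
<syntaxhighlight lang="python">
# Sketch: small deviations from a neutral baseline voice, assuming the
# NAOqi Python SDK (ALTextToSpeech). Parameter values are illustrative
# assumptions, not settings validated in this study.
from naoqi import ALProxy

# Pitch (\vct\) and speaking rate (\rspd\) as percentages of the default.
EMOTION_VOICE = {
    "neutral": {"vct": 100, "rspd": 100},
    "happy":   {"vct": 110, "rspd": 105},  # slightly higher and faster
    "sad":     {"vct": 90,  "rspd": 90},   # slightly lower and slower
}

def say_with_emotion(tts, text, emotion="neutral"):
    p = EMOTION_VOICE[emotion]
    # NAOqi inline tags adjust the voice for this utterance only.
    tts.say("\\vct={}\\ \\rspd={}\\ {}".format(p["vct"], p["rspd"], text))

tts = ALProxy("ALTextToSpeech", "pepper.local", 9559)  # hypothetical address
say_with_emotion(tts, "Once upon a time, a polar bear...", "sad")
</syntaxhighlight>
Starting from the neutral settings and changing one parameter at a time would also make it easier to pilot-test how small a deviation participants still perceive as emotional.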
 
=== Limitations ===
In this study, despite careful planning and execution, several constraints emerged that warrant acknowledgment. Understanding these limitations is crucial for contextualizing the results and guiding future research. The following section outlines the key limitations that were encountered during the experimental process, shedding light on areas for refinement and further investigation.
 
There were a few cases in which a participant misunderstood a story. The stories serve as crucial components in determining participants' preferences for congruent versus incongruent scenarios. When participants misunderstand the narrative, their responses may not accurately reflect their true preferences or emotional reactions<ref>J. Xu, J. Broekens, K. Hindriks and M. A. Neerincx, "Effects of a robotic storyteller's moody gestures on storytelling perception," 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi'an, China, 2015, pp. 449-455, doi: 10.1109/ACII.2015.7344609.</ref>. In addition, the negative story was sometimes perceived as a newspaper story. This was because it lacked personal emotional value, as the decision was made to lower the risk of participants experiencing emotional harm due to the storytelling. The study by Appel et al. (2021)<ref name=":10" /> made use of a story that was personal to the robot itself. Had this been done in the present experiment, the story would have carried more emotional weight, possibly preventing the sense of detachment from its emotional essence.
 
Due to the duration of the course and the limited availability of Pepper, there was little time to get familiar with the robot in combination with the Choregraphe software. This resulted in the choice to use the pre-installed Choregraphe behaviors. Consequently, there was not enough time to explore methods to make the robot's expressions more nuanced and less exaggerated, as some participants perceived them to be, particularly regarding the portrayal of happiness. Some participants also stated that they would have preferred a different emotion if it had not been as extreme.
 
For feasibility reasons, the participants were all exposed to the same order of emotions for each of the stories. Whether they heard that order in combination with the positive or the negative story first depended on which iteration of the experiment they participated in. Considering the peak-end rule, the predetermined mood sequence could have impacted the participants' overall experience with Pepper. If a participant felt a positive or negative emotional peak near the end of the experiment, they might have attributed that feeling to the entire experiment, regardless of how they felt during, for example, the first mood iteration. Thus, the sequence of mood exposure could have influenced participants' hindsight evaluations<ref>Kyeong-Soo Han, Han-kyu Lee, Youngho Jeong and Youn-Seon Jang, "Improvement of viewers' preference inference by applying the peak-end rule," 2013 International Conference on ICT Convergence (ICTC), Jeju, Korea (South), 2013, pp. 792-797, doi: 10.1109/ICTC.2013.6675481.</ref>.
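For future runs, the order could be counterbalanced instead of fixed. The sketch below shows one possible assignment scheme for the same 2 (story) × 3 (emotion) design; it is an illustrative suggestion, not the procedure used in this experiment.
<syntaxhighlight lang="python">
# Sketch of counterbalancing emotion and story order across participants,
# assuming the 2 (story) x 3 (emotion) within-subject design of this study.
from itertools import permutations

EMOTIONS = ["neutral", "happy", "sad"]
ORDERS = list(permutations(EMOTIONS))  # all 6 possible emotion orders

def assign_orders(participant_id):
    """Cycle through the six emotion orders and alternate which story is
    heard first, spreading possible peak-end effects over conditions."""
    emotion_order = ORDERS[participant_id % len(ORDERS)]
    story_order = ("positive", "negative") if participant_id % 2 == 0 \
        else ("negative", "positive")
    return story_order, emotion_order

for pid in range(6):  # hypothetical participant IDs
    print(pid, assign_orders(pid))
</syntaxhighlight>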
 
As the researchers on this project were not very experienced with interviewing, there were moments when the interviewer could have asked for more explanation but did not. This may have caused relevant information to be missed and led to a less deep understanding of the results.
 
The last limitation was the limited screening of participants before they were invited to the experiment. As the experiment relied on the ability of participants to pick up detailed emotional cues, some participants might have been better suited than others. Some conditions, such as autism, can affect the perception and interpretation of emotions. As participants were not specifically asked whether they had autism or any other condition that could affect emotional perception, this may have contributed to misinterpretations of the emotions<ref>C. Tsangouri, W. Li, Z. Zhu, F. Abtahi and T. Ro, "An interactive facial-expression training platform for individuals with autism spectrum disorder," 2016 IEEE MIT Undergraduate Research Technology Conference (URTC), Cambridge, MA, USA, 2016, pp. 1-3, doi: 10.1109/URTC.2016.8284067.</ref>.
 
=== Recommendations for future research ===
Although the outcomes of this study deviated from expectations, they still shed valuable light on the field of Human-Robot Interaction (HRI), which holds significant relevance in contemporary society. Future research endeavours can build upon these findings, refining them by addressing the identified limitations. Primarily, it became evident that participants did not always interpret the narrative as intended. To solve this, preliminary testing of the story and the robot's emotional cues on non-participants could refine the storylines before the main study. Additionally, participants noted difficulties comprehending the stories and maintaining focus during the robot's movements. Implementing measures such as pre-reading the story or initially having the robot deliver the story without movements could mitigate this issue. Moreover, Pepper's chest screen could be utilized to visually complement the story, enhancing engagement. These strategies aim to increase the likelihood that the stories have the intended effect, and therefore provide more valuable insights into emotion-message congruence.
 
One of the limitations mentioned was the choice of pre-installed Choregraphe behaviours, which led to the emotions being perceived as too exaggerated by many participants. In future research, it is recommended to spend more time getting familiar with suitable programs and methods for programming Pepper. This could lead to more nuanced and subtle expressions that would be more appealing to listeners. For this, more research would also have to be done into emotion perception in human-robot interaction with Pepper, paying attention to which cues humans use to pick up on certain emotions of robots. Furthermore, by manually programming Pepper, more variation can be introduced in the intensities of the emotions during the storytelling, so that the gestures and emotions shown by Pepper correspond better to the context of the story being told. Besides, a wider range of emotions could be used in future research, instead of only “happy”, “sad” and “neutral”, to gain more knowledge about the exact emotion that is preferred by participants, as they now often saw the “happy” and “sad” robots as too extreme. In real life, human emotions manifest across a nuanced spectrum. Emotions are subtle and complex, since the perception of emotions is highly sensitive to context and personal factors. This implies the importance of considering this complexity when refining Pepper's emotional behavior<ref>Ben-Ze'ev, Aaron. (2001). The subtlety of emotions. Psycoloquy. 12. </ref>.
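One possible way to obtain such graded, more subtle expressions is to interpolate between a neutral pose and a full emotional pose, as in the sketch below. It again assumes the NAOqi Python SDK; the joint names are Pepper's, but the pose angles and the 30% intensity are illustrative assumptions rather than validated emotional expressions.
<syntaxhighlight lang="python">
# Sketch: graded gesture intensity by interpolating joint angles between a
# neutral pose and a full "happy" pose (angles in radians, illustrative only).
from naoqi import ALProxy

JOINTS = ["HeadPitch", "LShoulderPitch", "RShoulderPitch"]
NEUTRAL_POSE = [0.0, 1.4, 1.4]   # roughly: head level, arms down
HAPPY_POSE = [-0.3, 0.8, 0.8]    # head slightly up, arms slightly raised

def blended_pose(intensity):
    """intensity in [0, 1]: 0 gives the neutral pose, 1 the full happy pose."""
    return [n + intensity * (h - n) for n, h in zip(NEUTRAL_POSE, HAPPY_POSE)]

motion = ALProxy("ALMotion", "pepper.local", 9559)  # hypothetical address
motion.setStiffnesses(JOINTS, 1.0)
# A subtle (30%) happy pose, moved at a gentle speed, to avoid the
# exaggerated impression that participants reported.
motion.setAngles(JOINTS, blended_pose(0.3), 0.15)
</syntaxhighlight>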
 
In future investigations, it would be compelling to conduct a study with a more extensive participant pool. As depicted in Tables 7, 8 and 9, while many participants exhibited consistent preferences for specific robots, outliers were observed: individuals with differing perceptions or opinions on robot emotions. Replicating the experiment on a broader scale as confirmatory research, with appropriate improvements, allows for statistical significance testing and could enhance the generalizability of the results. Only then is it possible to validate whether the observed patterns hold true for a larger demographic.
 
In general, participants expressed a positive attitude toward having Pepper on campus for providing basic information like directions, although with some suggested adjustments. Further research is necessary to delve into the specifics of Pepper's role as a campus robot, including the type of information it can convey and the way it will do this (e.g., via voice, emotion display, movements). Since this study primarily examined students' general attitudes towards Pepper rather than its specific use as a campus robot, further research can aim for more targeted exploration in this regard.


== Planning ==
Each week, there was a mentor meeting on Monday morning followed by a group meeting. Another group meeting was held on Thursday afternoon and by Sunday afternoon the wiki was updated for work done that week (weekly deliverable).

'''Week 1'''

== Individual effort per week ==

'''Week 1'''
{| class="wikitable"
{| class="wikitable"
|'''Name'''  
|'''Name'''  
Line 130: Line 815:
|13  
|13  
|Intro lecture (2h), group meeting (2h), literature research (4h), writing summary literature (3h) group meeting (2h)  
|Intro lecture (2h), group meeting (2h), literature research (4h), writing summary literature (3h) group meeting (2h)  
|-
|Naomi Han
|2
|group meeting (2h)
|}
|}
 
'''Week 2'''
=== Week 2 ===
{| class="wikitable"
{| class="wikitable"
|'''Name'''  
|'''Name'''  
Line 163: Line 842:
|Tutormeeting (35min), groupmeeting (3h), read literature Pepper and summarize (3h), groupmeeting (3h), research comfort question interview (2h)  
|Tutormeeting (35min), groupmeeting (3h), read literature Pepper and summarize (3h), groupmeeting (3h), research comfort question interview (2h)  
|}
|}
'''Week 3'''
{| class="wikitable"
|'''Name'''
|'''Total Hours'''
|'''Break-down'''
|-
|Danique Klomp
|14
|Tutormeeting (35min), groupmeeting 1(3h), meeting Task (3h), preparation Thematic analysis & protocol (2h), mail and contact (1,5h), meeting Zoe (1h), group meeting (3h)
|-
|Liandra Disse
|12
|Tutormeeting (35min), groupmeeting 1(3h), meeting Task (3h), update (meeting) planning (1h), prepare meeting (1h),  group meeting (3h), find participant (30min)
|-
|Emma Pagen
|12
|Tutormeeting (35min), groupmeeting 1(3h), finish ERB form (1h), create lime survey (1,5h), make an overview of the content sections of final wiki page (1h), group meeting 2 (3h), updating the wiki (2h)
|-
|Isha Rakhan
|12,5
|Tutormeeting (35min), groupmeeting 1 (3h), meeting Zoe (1h), group meeting (3h), programming (5h)
|-
|Margit de Ruiter
|
|*was not present this week, but told the group in advance and had a good reason*
|}
'''Week 4'''
{| class="wikitable"
|'''Name'''
|'''Total Hours'''
|'''Break-down'''
|-
|Danique Klomp
|11,5
|Tutor meeting (35min), group meeting (3h), review interview questions (1h), finding participants (0.5h), mail and contact (1h), reading and reviewing wiki (1.5h), group meeting (4h), lab preparations (1h)
|-
|Liandra Disse
|11
|Prepare and catch-up after missed meeting due to being sick (1,5h), find participants (0.5h), planning (1h), group meeting (4h), set-up final report and write introduction (4h)
|-
|Emma Pagen
|13
|Tutormeeting (35min), group meeting (3h), adding interview literature (2h), find participants (0,5h), group meeting (4h), reviewing interview questions (1h), going over introduction (1h), updating wiki (1h)
|-
|Isha Rakhan
|11,5
|Tutormeeting (35min), group meeting (3h), research and implement AI voices (2h), documentation choices Pepper behavior (2h), group meeting (4h)
|-
|Margit de Ruiter
|11
|Tutormeeting (35min), group meeting (1h), find participants (0.5h), testing interview questions (1h), group meeting (4h), start writing methods (4h)
|}
'''Week 5'''
{| class="wikitable"
|'''Name'''
|'''Total Hours'''
|'''Break-down'''
|-
|Danique Klomp
|24,5
|Tutor meeting (30min), group meeting (experiment) (3h), transcribe first round of interviews (2h), familiarize with interviews (5h), highlight interviews (2h), group meeting (experiment) (3h), transcribe second round of interviews (2h), first round of coding (3h), refine and summarize codes (2h), prepare next meeting (1h), adjust participants in methods section (1h)
|-
|Liandra Disse
|21,5
|Tutor meeting (30min), group meeting (experiment) (3h), transcribe, familiarize and code first interviews (6h), incorporate feedback introduction (1h), group meeting (experiment) (3h), transcribe, familiarize, code second interviews and refine codes (7h), extend methods section (1h)
|-
|Emma Pagen
|20,5
|Tutor meeting (30min), group meeting (experiment) (3h), transcribe first round of experiments (3h), familiarize with interviews and coding of first interviews (4h), group meeting (experiment) (3h), transcribe second round of interviews (3h), familiarize with interviews and coding of second interviews (4h)
|-
|Isha Rakhan
|14,5
|Tutor meeting (30min), group meeting (experiment) (3h), group meeting (experiment) (3h), transcribing all of the interviews (4h), coding all of the interviews (4h)
|-
|Margit de Ruiter
|17
|Tutor meeting (30min), group meeting (experiment) (3h), transcribing all the interviews (7h), group meeting (experiment) (3h), coding all the interviews (3,5h)
|}
'''Week 6'''
{| class="wikitable"
|'''Name'''
|'''Total Hours'''
|'''Break-down'''
|-
|Danique Klomp
|18,5
|Tutor meeting (30 min), group meeting first round coding (3h), second round of recoding interviews (2h), check recoding of other interviewer (2h), look at comments/check recoding of other interviewer (1h), preparations meeting Thursday (2h), create preference count document for the group meeting (2,5h), group meeting (3h), recode and finalize transcripts (1,5h), work on thematic map and finalizing the themes/codes (1h)
|-
|Liandra Disse
|17,5
|Tutor meeting (30 min), group meeting first round coding (3h), second round of recoding interviews (2h), check recoding of other interviewer, go over comments on own interview and make suggestions for adjusting codes (4h), planning (1h), group meeting (3h), finalize transcripts (1h), start on results section and give feedback on methods (3h)
|-
|Emma Pagen
|16,5
|Tutor meeting (30 min), group meeting first round coding (3h), second round of coding (2h), check coding of other interviewer (2h), adjust own coding based on suggestions from other group member (1h), group meeting (3h), recoding after group meeting and finalize transcript (2h), finalize method (1h), update wiki (2h)
|-
|Isha Rakhan
|15
|Tutor meeting (30 min), group meeting first round coding (3h), second round of coding (2,5h), check coding of other interviewer (2h), check codes feedback other interviewer (1h), group meeting (2h), adjusting codes after group meeting (1h), working on the "Pepper" part of the report (3h)
|-
|Margit de Ruiter
|15
|Tutor meeting (30 min), group meeting first round coding (3h), second round of coding (2,5h), check coding of other interviewer (2,5h), check codes feedback other interviewer (2h) group meeting (2h), adjusting codes after group meeting (1h), check overview with themes and codes (1,5h)
|}
'''Week 7'''
{| class="wikitable"
|'''Name'''
|'''Total Hours'''
|'''Break-down'''
|-
|Danique Klomp
|13
|Tutor meeting (30min), group meeting (3h), group meeting (2h), write preference results (1,5h), read main findings (1h), write presentation text (2h), adjust presentation template (1h), prepare presentation (2h)
|-
|Liandra Disse
|15
|Tutor meeting (30min), group meeting (3h), clean document of all interviews (1h), outline discussion (0,5h), planning (0,5h), extend results section (3h), group meeting (3h), write discussion main findings (3,5h)
|-
|Emma Pagen
|10,5
|Tutor meeting (30min), groups meeting (3h), working on results section (2h), group meeting (3h), working on discussion section (2h)
|-
|Isha Rakhan
|10,5
|Tutor meeting (30min), groups meeting (3h), group meeting (3h), Improving the introduction (1h), Working on the methods section (3h)
|-
|Margit de Ruiter
|11,5
|Tutor meeting (30min), groups meeting (3h), Prepare presentation ppt and word document (3h), feedback intro & methods change (0,25h), group meeting (3h), practice presentation (1,5h)
|}
 
'''Week 8'''
 
'''Measurements in HCI research'''
 
In the past HCI (Human Computer Interaction) and HTI (Human Technology Interaction) research focused on improving the technological aspects of the interaction, but in more recent years increased interest has been developed into the aspects of user experience. User experience is still a broad area of research, yet in social robotics it has become increasingly relevant. User experience has many distinct aspects that are all a part of the overall experience, yet the basis of user experiences lies in the comfortable interaction with an agent. Making the interaction comfortable and likeable will create a sense of trust and eventually acceptance of the agent.  
 
The three factors mentioned above are all connected to each other. Specifically, trust and acceptance are linked. A paper by Wagner et al <ref>Wagner Ladeira, M. G. P., & Santini, F. (2023). Acceptance of service robots: a meta-analysis in the hospitality and tourism industry. ''Journal of Hospitality Marketing \& Management'', ''32''(6), 694–716. <nowiki>https://doi.org/10.1080/19368623.2023.2202168</nowiki> </ref> on a meta-analysis of acceptance in service robots described trust as a mediating factor between informational cues and acceptance of the service agent. However, there are also many contextual factors that play a role in this relationship. For example, acceptance and trust in agents seems to be smaller when the user is in a group than when the user uses the robot individually <ref>Martinez, J. E., VanLeeuwen, D., Stringam, B. B., & Fraune, M. R. (2023). Hey? ! What did you think about that Robot? Groups Polarize Users’ Acceptance and Trust of Food Delivery Robots. ''Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction'', 417–427. <nowiki>https://doi.org/10.1145/3568162.3576984</nowiki> </ref>.
 
To find these relationships and correlations between the many varied factors influencing the overall user experience, there need to be reliable measures that can be used to measure the presence, extend and underlying principles of them. Yet, one of the main challenges when it comes to HCI research is creating a reliable measurement. This challenge is present in most HCI domains, for example speech interfaces <ref>Clark, L., Doyle, P., Garaialde, D., Gilmartin, E., Schlögl, S., Edlund, J., Aylett, M., Cabral, J., Munteanu, C., Edwards, J., & R Cowan, B. (2019). The State of Speech in HCI: Trends, Themes and Challenges. ''Interacting with Computers'', ''31''(4), 349–371. <nowiki>https://doi.org/10.1093/iwc/iwz016</nowiki> </ref>, user engagement <ref>Doherty, K., & Doherty, G. (2018). Engagement in HCI: Conception, Theory and Measurement. ''ACM Comput. Surv.'', ''51''(5). <nowiki>https://doi.org/10.1145/3234149</nowiki> </ref> and online trust <ref>Kim, Y., & Peterson, R. A. (2017). A Meta-analysis of Online Trust Relationships in E-commerce. ''Journal of Interactive Marketing'', ''38''(1), 44–54. <nowiki>https://doi.org/10.1016/j.intmar.2017.01.001</nowiki> </ref>. The main reason for the lack of reliable and valid measures in HCI research is that these measures are only needed in the user experience research, which is new, as stated before.  
 
Still there are several attempts to create reliable measures for artificial agent acceptance and trust. First of all, one of the more well-known measures of acceptance is the Almere model that was proposed by Marcel Heerink et al. <ref name=":5">Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing Acceptance of Assistive Social Agent Technology by Older Adults: the Almere Model. ''International Journal of Social Robotics'', ''2''(4), 361–375. <nowiki>https://doi.org/10.1007/s12369-010-0068-5</nowiki></ref>. This measure consists of a questionnaire that covers twelve basic principles that range from induced anxiety to enjoyment and trust. The questionnaire itself consists of 41 questions. This model has an acceptable Cronbach’s alpha score of ~0.7 when it is used in the older adult and elderly population <ref name=":5" /><ref>Heerink, M. (2011). Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults. ''Proceedings of the 6th International Conference on Human-Robot Interaction'', 147–148. <nowiki>https://doi.org/10.1145/1957656.1957704</nowiki> </ref>. However, when the measure is used for young adults, the reliability stays around the same <ref>Guner, H., & Acarturk, C. (2020). The use and acceptance of ICT by senior citizens: a comparison of technology acceptance model (TAM) for elderly and young adults. ''Universal Access in the Information Society'', ''19''(2), 311–330. <nowiki>https://doi.org/10.1007/s10209-018-0642-4</nowiki> </ref>. Although this is the measurement that was also proposed in the article that inspired this research, there are still many more measurements of acceptability and attitude towards robots <ref>Krägeloh, C. U., Bharatharaj, J., Sasthan Kutty, S. K., Nirmala, P. R., & Huang, L. (2019). Questionnaires to Measure Acceptability of Social Robots: A Critical Review. ''Robotics'', ''8''(4). <nowiki>https://doi.org/10.3390/robotics8040088</nowiki> </ref>.  
 
Measuring trust is harder as there is not one proposed overall method to measure trust, but this does not mean it cannot be measured. In a literary review of several papers measuring trust it was found that questionnaires are one of the most used methods to measure trust <ref>Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. ''International Journal of Human–Computer Interaction'', 1–16. <nowiki>https://doi.org/10.1080/10447318.2022.2138826</nowiki> </ref>. Sadly, the questionnaires used are not one standard set of questions, but a variety. This creates the added problem that studies are hard to compare to each other. As stated before, trust is connected to acceptance. In the Almere model discussed in the previous paragraph, trust is included as a basic factor. It, however, it is only measured using 2 statements “I would trust the robot if it gave me advice” and “I would follow the advice the robot gives me”. When trust is being measured on its own, it will need to be extended. Luckily there have been some measures proposed. One of these measurements comes from a review of several measurements of trust that were combined by Madsen and Gregor <ref>Madsen, M., & Gregor, S. D. (2000). ''Measuring Human-Computer Trust''. <nowiki>https://api.semanticscholar.org/CorpusID:18821611</nowiki> </ref>. In the same paper they proposed a questionnaire that consists of 5 distinct factors that each have 5 questions related to them. The overall Cronbach’s alpha of this measurement was found to be 0.85, which is a good score.  
 
The constructs measured above are not the only constructs that can be measured in Human-Computer interactions, but they are still the most prevalent in recent HCI and HTI research. Given the circumstances, they do give a great overview of the overall concepts in HCI research, as they contain most of the different basic principles in their foundation. This means that these measures are a great starting point for more in-depth research. 
 
 
'''How to display emotions as a robot'''
 
A robot can display emotions when it combines body, facial and vocal expressions.  
 
The way such emotional reaction is expressed highly depends on the robot’s degree of anthropomorphism. For robots with a simple appearance, it may be sufficient to express emotions by means of e.g. lights or sounds. However, as the degree of anthropomorphism increases, it turns necessary to match the robot's behavior with the appearance to avoid falling into the uncanny valley <ref name=":6">Marcos-Pablos, S., & García‐Peñalvo, F. J. (2021). Emotional Intelligence in Robotics: A Scoping review. In Advances in intelligent systems and computing (pp. 66–75). <nowiki>https://doi.org/10.1007/978-3-030-87687-6_7</nowiki>  </ref>.
 
The idea behind the uncanny valley proposes that as robots keep approaching a more human-like appearance, people can experience a feeling of uneasiness / disturbance <ref>Wang, S., Lilienfeld, S. O., & Rochat, P. (2015). The Uncanny Valley: Existence and Explanations. Review Of General Psychology, 19(4), 393–407. <nowiki>https://doi.org/10.1037/gpr0000056</nowiki>  </ref>. These experiences also occur as the robot’s user perceives a mismatch between the robot’s appearance and behavior.  There are also differences in the way that the uncanny valley is perceived across different ages and cultures. As Eastern countries and children are less likely to be disturbed by this phenomenon <ref name=":6" />.
 
Developers of humanoid robots found that next to body posture, hands also play a role in conveying emotions, as human hands can contribute to the human ability of emotional expression. These developers then created the emotion expression humanoid robot WE-4RII, with the integration of robot hands. This humanoid robot was eventually able to express emotion using facial expression, arms, hands, waist and neck motion. They also concluded that motion velocity is equally as important as body posture. “WE-4RII quickly moves its body for surprise emotional expression, but it slowly moves its body for sadness emotional expression.” <ref>Effective emotional expressions with expression humanoid robot WE-4RII: integration of humanoid robot hand RCH-1. (2004). IEEE Conference Publication | IEEE Xplore. <nowiki>https://ieeexplore.ieee.org/abstract/document/1389736?casa_token=LP_352U3xbQAAAAA:Yugjlzs5aZ-KEfzz2UxVjNIZDKTyRkeEXNjyImWL_TXrR1NHVd75pi6-ZKfHd3Zd10c5xykvxQ</nowiki>  </ref>
 
Next to that, vocal prosody also contributes to the quality of the emotion that is being displayed. In human-to-human interaction, patterns of pitch, rhythm, intonation, timing, and loudness contribute to our emotional expression. A sudden change in volume or pitch could emphasize excitement or emphasis. Or when the pitch rises at the end of a sentence, it will be more clear that the robot is asking a question, this could indicate confusion and / or curiosity <ref name=":7">Crumpton, J., & Bethel, C. L. (2015). A Survey of Using Vocal Prosody to Convey Emotion in Robot Speech. International Journal Of Social Robotics, 8(2), 271–285. <nowiki>https://doi.org/10.1007/s12369-015-0329-4</nowiki>  </ref>.
 
Studies have shown that humans will interpret both linguistic and non-linguistic emotion displaying sounds in an emotional way. But there is a preference towards the linguistic type of robot, as research has shown that people prefer human-like voices. In the example of a virtual car passenger, the driver appeared to be more attentive and less involved in accidents, as the virtual passenger’s speech matched the driver’s emotion. So it is not only beneficial to sound like a human being, but also the capability of matching the user’s emotions contributes to the emotion displaying quality of the robot <ref name=":7" />.
 
'''Pepper'''
 
Currently, Pepper is deployed in thousands of homes and schools. However, Pepper was initially designed for an application of business-to-business. It was launched in June 2014. Then, Pepper became of interest all over the world for multiple other applications. For example, in business-to-consumer, business-to-academics and business-to-developers fields. So, in the end, it was adapted for business-to-consumer purposes <ref name=":9">Pandey, A. K., & Gelin, R. (2018). A Mass-Produced sociable humanoid robot: Pepper: the first machine of its kind. ''IEEE Robotics & Automation Magazine'', ''25''(3), 40–48. <nowiki>https://doi.org/10.1109/mra.2018.2833157</nowiki> </ref>.  
 
Pepper is capable of exhibiting body language, perceiving and interacting with its environment and it is able to move itself around. The robot can also analyse other people's expressions and their voice tones, using emotion and voice recognition algorithms in order to create interaction. It is equipped with high-level interfaces and features for multimodal communication with humans surrounding Pepper <ref name=":9" />.
 
Pepper has a lot of capabilities, among which mapping and navigation, object detection, hearing, speech, and face detection <ref name=":10">Mishra, D., Romero, G., Pande, A., Bhuthegowda, B. N., Chaskopoulos, D., & Shrestha, B. (2023). An exploration of the Pepper robot’s capabilities: unveiling its potential. ''Applied Sciences'', ''14''(1), 110. <nowiki>https://doi.org/10.3390/app14010110</nowiki> </ref>. Pepper is a humanoid robot, meaning it is designed to have a physical human appearance. It's sound and speech recognition capabilities yield good results, even with several accents. However, it's built-in navigation system is unreliable, which makes it hard to get to destinations accurately. Sometimes, object and face detection of Pepper gives inconsistent results. So, Pepper can be improved in those fields <ref name=":10" />.
 
 
''Figure: Pepper the Humanoid and Programmable Robot'' <ref>''Pepper the humanoid and programmable robot |Aldebaran''. (n.d.). <nowiki>https://www.aldebaran.com/en/pepper</nowiki>
</ref>
 
 
Pepper uses facial recognition to pick up emotions on human faces, like sadness or hostility and it uses voice recognition to hear concern. It has age tools like age detection and basic emotions embedded intro its framework <ref name=":10" />. It bases the recognition mostly on eye contact, the central part of the face and distance. It can not only detect emotions, but also knows how to respond and react to them appropriately. For example, it will detect sadness based on a person’s expression and voice tone and by using sensors that are built-in and pre-programmed algorithms, the robot will react properly <ref>Mlot, S. (2014, June 5). “Pepper” robot reads, reacts to emotions. ''PCMAG''. <nowiki>https://www.pcmag.com/news/pepper-robot-reads-reacts-to-emotions</nowiki> </ref>. Several applications of this robot are answering questions, greeting guests and playing with kids in Japanese homes <ref>Glaser, A. (2016, June 7). Pepper, the emotional robot, learns how to feel like an American. ''WIRED''. <nowiki>https://www.wired.com/2016/06/pepper-emotional-robot-learns-feel-like-american/</nowiki> </ref>.  
 
Pepper can also use several gestures while responding to a human, like waving and nodding. It has 12 hours of battery life, and it can return to its charging station if necessary. It is 1.2 meter tall, has 3 omnidirectional wheels in order to move smoothly and 17 joints for body language <ref name=":9" />. Pepper is designed to make it appropriate and acceptable in daily life usage for interacting with human beings. Some design principles behind Pepper are; a pleasant appearance, safety, affordability, interactivity and good autonomy. The aim was to make it not too exact a human likeness robot, since the designers wanted to avoid the ‘uncanny valley’ <ref name=":9" />. 
 
== Research proposal ==
'''<big>Project team: PRE2023 3 Group 4</big>'''
 
'''Researchers:'''  
 
===== ''Liandra Disse – 1529641'' =====
 
===== ''Margit de Ruiter – 1627805'' =====
 
===== ''Danique Klomp – 1575740'' =====
 
===== ''Isha Rakhan – 1653997'' =====
 
===== ''Emma Pagen – 1889907'' =====
'''Study name'''
 
The influence of congruent emotion displayed with an emotional message on students’ acceptance of a social robot.
 
'''Study description'''
 
The study aims to research the reactions of students when interacting with robots. More specifically, the research question that will be studied is “To what extent does a match between the displayed emotion of a social robot and the content of the robot’s spoken message influence the acceptance of this robot?”. We expect that participants will prefer interacting with the robot while displaying the emotion that fits with the content of its message and be open to more future interactions like this with the robot. On the other hand, we expect that a mismatch between the emotion displayed by robot and the story it is telling will make participants feel less comfortable and therefore less accepting of the robot. The main focus of this research is thus on how accepting the students are of the robot after interacting with it, but we will also try to gain insights into potential underlying reasons, such as the amount of trust the students have in the robot and how comfortable they feel when interacting with them. The results could be used to provide insights into whether robots could be used on university campuses as assistant robots.  
 
'''Participants'''
 
As mentioned before, the study investigates the viewpoint of students, and therefore the participants will be gathered from the TU/e. We have chosen to target this specific group because of their generally higher openness to social robots and the increased likelihood that this group will deal a lot with social robots in the near future ​<ref name=":11">Manzi, F., Sorgente, A., Massaro, D., Villani, D., Di Lernia, D., Malighetti, C., Gaggioli, A., Rossignoli, D., Sandini, G., Sciutti, A., Rea, F., Maggioni, M. A., Marchetti, A., & Riva, G. (2021). Emerging Adults’ Expectations about the Next Generation of Robots: Exploring Robotic Needs through a Latent Profile Analysis. ''Cyberpsychology, Behavior, and Social Networking'', ''24''(5), 315–323. <nowiki>https://doi.org/10.1089/CYBER.2020.0161</nowiki> </ref>​. We therefore expect that this is an interesting user group during the early implementation stage of social robots that we are currently in ​<ref name=":11" />​. We aim to gather around 10 participants from our own circle of fellow students, preferably with an equal gender distribution. 
 
'''Method'''
 
The study design consists of a 2 (positive/negative emotional message) x 3 (happy/neutral/sad emotion displayed by robot) within-subject experiment. Each participant will thus be told two stories, three times each, as can be seen in Table 1. This results in six conditions that differ in terms of the match between the content of the story and the emotion of the robot.
{| class="wikitable"
|+Table 1: The six conditions in the experiment
!Story / displayed emotion
!Happy
!Neutral
!Sad
|-
|Positive
|Congruent
|Emotionless
|Incongruent
|-
|Negative
|Incongruent
|Emotionless
|Congruent
|}
For this experiment, the robot Pepper will be used. Pepper will be programmed to display happiness and sadness through pitch and body posture. Exactly how we will program Pepper will be determined through further literature research and by exploring which programming possibilities are feasible. Examples of what behavior Pepper could display for each emotion can be seen in Table 2 and Figure 1 ​<ref name=":12">Bishop, L., Van Maris, A., Dogramadzi, S., & Zook, N. (2019). Social robots: The influence of human and robot characteristics on acceptance. ''Paladyn'', ''10''(1), 346–358. <nowiki>https://doi.org/10.1515/PJBR-2019-0028/MACHINEREADABLECITATION/RIS</nowiki> </ref>​. Facial expressions cannot be used, since the morphology of Pepper does not allow for it.
{| class="wikitable"
|+Table 2: Pepper's behavior per emotional condition (Bishop et al., 2019)
!
!Happy
!Neutral
!Sad
|-
|Pitch of voice
|High pitch
|Average of happy and sad condition
|Low pitch
|-
|Body posture
|Raised chin, extreme movements, upwards arms
|Average of happy and sad condition
|Lowered chin, small movements, hanging arms
|}
 
 
''Figure 1: Example of Pepper's behavior for (a) happy, (b) neutral, and (c) sad condition of displayed emotion'' ​<ref name=":12" />''.''​
 
 
The aforementioned research question will be investigated by taking a qualitative approach. First of all, participants will be asked to read and fill in the consent form. Next, they have to complete a short survey about their demographic information, including age, gender, study program and general acceptance of robots. Then, they will start the experiment in small groups. After the participants have listened to all three versions of the same, either positive or negative, story, they will be asked to engage in an interview with one of the researchers. During the interviews, an audio recording will be made, which will later be transcribed and depersonalized. After the completion of this interview, the participants will listen to the three versions of the other story and once again take part in a similar interview with the researcher. Participants will be randomly assigned to hear either the positive or the negative story first, to prevent bias. The study will end with thanking the participants, answering potential questions and explaining the aim of the study to them. The experiment will take about an hour in total. 
 
The in-between interview will be structured. The interview questions have not yet been determined, but will be based on literature and careful consideration by the researchers. The questions will target the participants' opinion about how they felt during the interaction with Pepper, which version of the story they liked best and why, and how they feel about having a Pepper robot on campus. The answers to the interviews will be qualitatively evaluated by performing a thematic analysis.
 
'''Materials'''
 
In the study, the Pepper robot will be used. This robot was chosen because Pepper is a well-known robot on which multiple studies have been done and which is already being applied in different settings, such as hospitals and customer service. Based on young adults' preferences for robot design, Pepper would also be most useful in student settings, given its human-like shape and ability to engage emotionally with people ​<ref name=":8" />​. The experiment itself will be conducted in one of the robotics labs on the TU/e campus, where the robot Pepper is readily available. 
 
The positive and negative stories that the robot will tell are fictional stories about polar bears, inspired by the study of Bishop et al. (2019) ​<ref name=":12" />​. The content of the stories is based on non-fictional internet sources and rewritten to best fit our purpose. We have decided to keep the stories fictional and about animals rather than humans, because of the lower risk of doing emotional damage to participants through feelings elicited by personal circumstances. If possible, we would like to accompany the story with an image of polar bears displayed on Pepper's screen to make the story more visual ''(see Figures 2 and 3).''
 
Positive story: When Arctic gold miners were working on their base, they were greeted by a surprising guest, a young lost polar bear cub. It did not take long for her to melt the hearts of the miners. As the orphaned cub grew to trust the men, the furry guest soon felt like a friend to the workers on their remote working grounds. Even more surprising, the lovely cub loved to hand out bear hugs. Over the many months that followed, the miners and the cub would create a true friendship. The new furry friend was even named Archie after one of the researcher’s children. When the contract of the gold miners came to an end, the polar bear cub would not leave their side, so the miners decided to arrange a deal with a sanctuary in Moscow, where the polar bear cub would be able to live a happy life in a place where its new-found friends would come to visit every day. 
 
 
''Figure 2: Picture to accompany the positive story''
 
 
Negative story: While shooting a nature documentary on the Arctic Ocean island chain of Svalbard, researchers encountered a polar bear family of a mother and two cubs. During the mother's increasingly desperate search for scarce food, the starving family was forced to use precious energy swimming between rocky islands due to melting sea ice. This mother and her cubs should have been hunting on the ice, even broken ice. But they were in water that was open for as far as the eye could see. The weaker cub labored to keep up; it strained to pull itself ashore and then struggled up the rock face. The exhausted cub panicked after losing sight of its mother and its screaming could be heard from across the water. That's the reality of the world they live in today. To see this family with the cub, struggling due to no fault of their own, is extremely heartbreaking.    
 
 
''Figure 3: Picture to accompany the negative story''
 
'''Feasibility'''  
 
We believe the study is feasible in the time we have available. Since this is qualitative research, not many participants are needed to perform the study. Realistically, we want to have 10 participants, which comes down to each of the five researchers finding two participants. Moreover, the interaction with Pepper will be done in small groups of about five participants at a time to limit the total duration of the study. The research will be done with the Pepper robot in a laboratory, both of which are available at the TU/e. Our research team consists of somewhat experienced programmers who have worked with other robots, such as Misty, before, and if needed, experts are available to provide support with the programming of Pepper. We may also be able to receive the code for Pepper's emotional behavior that was used in another, similar study ​<ref>Van Otterdijk, M. I. (2021, July). ''Preferences of Seniors for Robots Delivering a Message With Congruent Approaching Behavior''. Retrieved from ResearchGate: <nowiki>https://www.researchgate.net/publication/354699157_Preferences_of_Seniors_for_Robots_Delivering_a_Message_With_Congruent_Approaching_Behavior</nowiki> </ref>.
 
'''Societal importance/application context'''  
 
Human-robot interaction (HRI) is a rapidly growing field. In many sectors, such as healthcare and education, social robots must be able to communicate with people in ways that are natural and easily understood. In order to make this human-robot interaction feel natural and enjoyable for humans, robots must make use of human social norms ​<ref name=":2" />​. Therefore, it is important to gain knowledge on how humans react to robots showing emotions, which is what will be studied in this project. 
 
The aim of this study is to provide insight into whether a robot such as Pepper could be used on campus as an assistant, either as a tutor or for more general questions about the campus. This research provides a first step by investigating how comfortable students feel around the Pepper robot, and whether they would want such a robot on campus in the first place. Based on the results, more research can be done into how exactly the robot can be used in student settings.  


== Sources ==
<references />

== Introduction ==


Altogether, the important role of emotions in human-robot interaction requires us to gather information about how robots can and should display emotions for them to be naturally recognized as the intended emotion by humans. A robot can display emotions by combining body posture, motion velocity, facial expressions and vocal signs (e.g. prosody, pitch, loudness), depending strongly on the possibilities offered by the robot's morphology and degree of anthropomorphism[7][8][9]. Social robots are often more humanoid, increasing anthropomorphism, and therefore a match is required between the robot's behavior and appearance to avoid falling into the uncanny valley, which elicits a feeling of uneasiness or disturbance[7][10]. Some research has already been done on testing the capability of certain social robots, including Pepper, Nao and Misty, to display emotions; this has resulted in robot-specific guidelines on how to program the display of certain emotions[11][12][13].

Based on these established guidelines for displaying emotions, we can look further into how humans are affected by a robot's emotional cues while interacting with it. We will research this in a context where we would also expect a human to display emotions, namely while telling an emotional story. Our research takes inspiration from the studies of Van Otterdijk et al. (2021)[14] and Bishop et al. (2019)[11], in which the robot Pepper was used to deliver either a positive or negative message accompanied by congruent or incongruent emotional behavior. We extend these studies by taking a different combination of application context and research method: we study interaction with students, as an application in an educational setting rather than healthcare, and we use interviews rather than surveys to gain a deeper understanding. We opted for this qualitative approach, as we had to work with a small participant pool of ten people due to feasibility constraints. This allowed us to dig deeper into the details of human-robot interaction by capturing the intricate nuances of participants’ experiences and perspectives, providing us with a deeper understanding of our topic. Moreover, students are an important target group for robots, because they represent the future workforce and future innovations. Understanding their needs can help developers design robots that are engaging, user-friendly and educational[15].

More specifically, the research question studied in this paper is: “To what extent does a match between the displayed emotion of a social robot and the content of the robot’s spoken message influence the acceptance of this robot?”. We expect that participants will prefer interacting with the robot when it displays the emotion that fits the content of its message, and that they will be open to more future interactions like this with the robot. On the other hand, we expect that a mismatch between the emotion displayed by the robot and the story it is telling will make participants feel less comfortable and therefore less accepting of the robot. Moreover, we expect that the influence of congruent emotion display will be more prominent with a negative than with a positive message. The main focus of this research is thus on how accepting the students are of the robot after interacting with it, but we also aim to gain insight into potential underlying reasons, such as the amount of trust the students have in the robot and how comfortable they feel when interacting with it. The results could be used to provide insights into the importance of congruent emotion display and whether robots could be used on university campuses as assistant robots.

== Method ==

=== Design ===

This research consisted of an exploratory study. The experiment had a within-subjects design, where all participants were exposed to all six conditions of the experiment. It consisted of a 2 (positive/negative story) x 3 (happy/neutral/sad emotion displayed by robot) experiment. These six conditions differ in terms of the match between the content of the story (either positive or negative) and the emotion (happy, neutral or sad) of the robot. An overview of the conditions can be seen in Table 1.

{| class="wikitable"
|+Table 1: The six conditions in the experiment
!Story / displayed emotion
!Happy
!Neutral
!Sad
|-
|Positive||Congruent||Emotionless||Incongruent
|-
|Negative||Incongruent||Emotionless||Congruent
|}

The independent variables in this experiment were the combination of displayed emotion and the kind of emotional story. The dependent variable was the acceptance of the robot. This was measured by qualitatively analyzing the interviews held with the participants during the experiment.
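To make the condition structure concrete, the sketch below enumerates the six cells of the design and labels each one as congruent, emotionless or incongruent, mirroring Table 1. This is an illustrative Python snippet, not part of the study materials.

<syntaxhighlight lang="python">
from itertools import product

STORIES = ["positive", "negative"]
EMOTIONS = ["happy", "neutral", "sad"]

def match_type(story, emotion):
    """Label a (story, emotion) cell as in Table 1."""
    if emotion == "neutral":
        return "emotionless"
    congruent = {("positive", "happy"), ("negative", "sad")}
    return "congruent" if (story, emotion) in congruent else "incongruent"

for story, emotion in product(STORIES, EMOTIONS):
    print(f"{story} story + {emotion} robot -> {match_type(story, emotion)}")
</syntaxhighlight>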

=== Participants ===

The study investigated the viewpoint of students, and therefore the participants were gathered from the TU/e. We have chosen to target this specific group because of their generally higher openness to social robots and the increased likelihood that this group will deal a lot with social robots in the near future[15].

Ten participants took part in this experiment, and all participants were allocated to all six conditions of the study. Five men and five women completed the study. Their age ranged between 19 and 26 years, with an average of 21.4 years (± 1.96). All were students at Eindhoven University of Technology and took part as volunteers, meaning they were not financially compensated for participating in this study. The participants were gathered from the researchers’ own networks, but multiple different study programs were included (see Table 2). The general attitude towards robots of all the participants was measured, and all of them had relatively positive attitudes towards robots. Most of the participants saw robots as a useful tool that would help to reduce the workload of humans; however, two participants commented that current robots would be unable to fully replace humans in their jobs. When asked whether they had been in contact with a robot before, six of the participants had seen or worked with a robot before. One participant was even familiar with the Pepper robot that was used during the experiment. Three participants responded that they had not been in contact with a robot before, but they had experience with AI or Large Language Models (LLMs). One participant had never been in contact with a robot before and did not comment on whether they had used AI or LLMs. All in all, the participants were all familiar with robots and the technology surrounding robots, which is to be expected, as robots are a large part of the curriculum at the technical university they are enrolled at.

{| class="wikitable"
|+Table 2: The distribution of current study programs of the participants
!Study
!Number of participants
|-
|Psychology and Technology||3
|-
|Electrical engineering||2
|-
|Mechanical engineering||1
|-
|Industrial Design||1
|-
|Biomedical technology||1
|-
|Applied physics||1
|-
|Applied mathematics||1
|}

=== Materials ===

==== Robot Pepper ====

For this experiment, the robot Pepper was used, which is manufactured by SoftBank Robotics.

''Figure 1: Pepper's behavior for the happy, neutral and sad conditions (from left to right)''

This robot was chosen because Pepper is a well-known robot on which multiple studies have been done and which is already being applied in different settings, such as hospitals and customer service. Based on young adults' preferences for robot design, Pepper would also be most useful in student settings, given its human-like shape and ability to engage emotionally with people[16]. The experiment itself was conducted in one of the robotics labs on the TU/e campus, where the robot Pepper is readily available.

When looking for a suitable robot for our project, the robots that were readily available at the TU/e and suggested by the supervisors of this research were considered, including Misty, SociBot and Pepper. With those robots in mind, the possibilities for conveying the desired emotions were compared. According to Cui et al. (2020)[17], posture is considered important for conveying emotions, and out of the three options, Pepper was the most suitable for that task. Next to that, Pepper was used in the aforementioned study by Van Otterdijk et al. (2021)[14] and Bishop et al. (2019)[11], which added to the convenience of using Pepper.

Pepper was programmed in Choregraphe to display happiness and sadness based on the voice, body posture and gestures, and LED eye colour. The behavior that Pepper displayed is shown in Table 3 and Figure 1, based on the research of Bishop et al. (2019)[11] and Van Otterdijk et al. (2021)[14]. Facial expressions cannot be used, since the morphology of Pepper does not allow for it.

{| class="wikitable"
|+Table 3: Robot behavior during each of the emotion displaying conditions
!
!Happy
!Neutral
!Sad
|-
|Pitch of voice
|High pitch, speed, volume and emphasis
|Average of happy and sad condition
|Low pitch, speed, volume and less emphasis
|-
|Body posture and gestures
|Raised chin, extreme movements, upwards arms, strong nodding
|Average of happy and sad condition
|Lowered chin, small movements, hanging arms, not looking around
|-
|LED eye colour
|Yellow
|White
|Light blue
|}

The voices for each story were created using Voicebooking.com, using their free AI voice-over generator. This program was chosen instead of the built-in Pepper voice because the built-in voice was quite inaudible for some parts of the stories the robot was supposed to tell. In Voicebooking, the preferred voice was picked, and moods were created by adjusting the speed, pitch, and emphasis of the storytelling. The highest values of each of these were assigned to the happy mood and the lowest to the sad one. For the neutral robot, the values were averaged between these two, as mentioned in Table 3. After uploading these audio files to Choregraphe, the volume levels were set to 80%, 90%, and 100% for the sad, neutral, and happy robot, respectively[18].
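In our setup the playback was handled inside Choregraphe, but the same effect could be scripted against the robot directly. Below is a minimal sketch using the NAOqi Python SDK's ALAudioPlayer module; the IP address and file paths are hypothetical, and the volume values follow the 80/90/100% scheme described above.

<syntaxhighlight lang="python">
from naoqi import ALProxy  # NAOqi Python SDK (Python 2)

PEPPER_IP, PEPPER_PORT = "192.168.1.10", 9559  # hypothetical connection details

# Pre-generated voice-over files (hypothetical paths) and their relative volumes.
STORY_FILES = {
    "sad":     ("/home/nao/stories/story_sad.wav", 0.8),
    "neutral": ("/home/nao/stories/story_neutral.wav", 0.9),
    "happy":   ("/home/nao/stories/story_happy.wav", 1.0),
}

def tell_story(mood):
    """Play the uploaded voice-over for the given mood at its volume."""
    audio = ALProxy("ALAudioPlayer", PEPPER_IP, PEPPER_PORT)
    path, volume = STORY_FILES[mood]
    audio.playFile(path, volume, 0.0)  # pan 0.0 = centered

tell_story("happy")
</syntaxhighlight>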

Pepper's posture and gestures were created with dialog boxes that were readily available in Choregraphe. Dialog boxes are graphical interfaces that contain pre-installed movements and behaviors for the robot. For each of the robot moods, a selection was made of suitable pre-installed movements. The happy robot had the most expressive movements, making great use of its arms and nodding strongly[17]. The neutral robot would gently sway its arms and make gestures with them every now and then, but those were not as strong as those of the happy robot. Next to that, the neutral robot was also programmed to look around, making eye contact with the participants[17]. Finally, for the sad robot, the objective was to minimize movement and give Pepper a sad posture. This was achieved by using the same built-in movement that was used for the gentle swaying of the neutral robot. This behavior included the eye-contact movement of the head, so the head movement had to be disabled in the settings of the dialog box. Next to shutting off the eye-contact behavior, Pepper was programmed to look down at all times within these same settings. The sad posture was finalized by adjusting the hinge at the hip and the shoulders[19].
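In the study this posture was configured through Choregraphe's dialog-box settings, but an equivalent sad posture could also be set programmatically via the ALMotion module. The sketch below is illustrative only: the joint angles are plausible values for a lowered chin, slight slump and hanging arms, not the values actually used.

<syntaxhighlight lang="python">
from naoqi import ALProxy  # NAOqi Python SDK (Python 2)

PEPPER_IP, PEPPER_PORT = "192.168.1.10", 9559  # hypothetical connection details

def set_sad_posture():
    """Approximate the sad condition: chin down, slight slump, hanging arms."""
    motion = ALProxy("ALMotion", PEPPER_IP, PEPPER_PORT)
    motion.setStiffnesses("Body", 1.0)
    names = ["HeadPitch", "HipPitch", "LShoulderPitch", "RShoulderPitch"]
    angles = [0.35, -0.15, 1.4, 1.4]  # radians; illustrative, not the study's values
    motion.setAngles(names, angles, 0.1)  # low speed fraction keeps movements small

set_sad_posture()
</syntaxhighlight>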

The robot's eye colors were changed using the eye LEDs to represent the three different moods: yellow was used for happy, white for neutral, and light blue for the sad robot[14].
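The eye colours were likewise set in Choregraphe, but the same change can be made with the ALLeds module. A minimal sketch, with RGB values that only approximate the colours named above:

<syntaxhighlight lang="python">
from naoqi import ALProxy  # NAOqi Python SDK (Python 2)

PEPPER_IP, PEPPER_PORT = "192.168.1.10", 9559  # hypothetical connection details

MOOD_COLOURS = {
    "happy":   (1.0, 1.0, 0.0),  # yellow
    "neutral": (1.0, 1.0, 1.0),  # white
    "sad":     (0.5, 0.8, 1.0),  # light blue (approximation)
}

def set_eye_colour(mood):
    """Fade Pepper's face LEDs to the colour for the given mood."""
    leds = ALProxy("ALLeds", PEPPER_IP, PEPPER_PORT)
    r, g, b = MOOD_COLOURS[mood]
    leds.fadeRGB("FaceLeds", r, g, b, 1.0)  # fade over one second

set_eye_colour("sad")
</syntaxhighlight>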

A video was made to visually show the three different emotional behaviors of Pepper.

==== Laptops ====

In the full study, four laptops were used. At the start of the experiment, three laptops were handed to the participants for filling in the demographics survey in LimeSurvey. During the experiment, one laptop was used to direct Pepper to tell the different stories with the different emotions; this was done from the control room. Two other researchers were present in the room with the participants and Pepper to assist if necessary and to take notes on their laptops, so two more laptops were used there. Moreover, during the interviews, each researcher could choose to keep their laptop with them for taking notes or recording. If they chose not to, they used a mobile phone to record.

==== LimeSurvey for demographics ====

The demographics survey that participants were asked to fill in consisted of the following questions:

# What is your age in years?
# What is your gender?
#* Male
#* Female
#* Non-binary
#* Other
#* Do not want to say
# What study program are you enrolled in currently?
# In general, what do you think about robots?
# Did you have contact with a robot before? Where and when?

These last two questions were based on an article by Horstmann & Krämer[20], which focused on the expectations people have of robots and their expectations when confronted with concrete social robot concepts.

==== Stories told by the robot ====

The positive and negative stories that the robot told are fictional stories about polar bears, inspired by the study of Bishop et al. (2019)[11]. The content of the stories is based on non-fictional internet sources and was rewritten to best fit our purpose. It was decided to keep the stories fictional and about animals rather than humans, because of the lower risk of doing emotional damage to participants through feelings elicited by personal circumstances.

The positive story is an adaptation of Cole (2021)[21] and is shown below:

“When Arctic gold miners were working on their base, they were greeted by a surprising guest, a young lost polar bear cub. It did not take long for her to melt the hearts of the miners. As the orphaned cub grew to trust the men, the furry guest soon felt like a friend to the workers on their remote working grounds. Even more surprising, the lovely cub loved to hand out bear hugs. Over the many months that followed, the miners and the cub would create a true friendship. The new furry friend was even named Archie after one of the researcher’s children. When the contract of the gold miners came to an end, the polar bear cub would not leave their side, so the miners decided to arrange a deal with a sanctuary in Moscow, where the polar bear cub would be able to live a happy life in a place where its new-found friends would come to visit every day.”

The negative story is an adaptation of Alexander (n.d.)[22] and is shown below:

"While shooting a nature documentary on the Arctic Ocean Island chain of Svalbard, researchers encountered a polar bear family of a mother and two cubs. During the mother's increasingly desperate search for scarce food, the starving family was forced to use precious energy swimming between rocky islands due to melting sea ice. This mother and her cubs should have been hunting on the ice, even broken ice. But they were in water that was open for as far as the eye could see. The weaker cub labours trying to keep up and the cub strained to pull itself ashore and then struggled up the rock face. The exhausted cub panicked after losing sight of its mother and its screaming could be heard from across the water. That's the reality of the world they live in today. To see this family with the cub, struggling due to no fault of their own is extremely heart breaking.”

==== Interview questions ====

Two semi-structured interviews were held per participant: one after the first three conditions, in which the story is the same, and one after the second three conditions. These interviews were practically the same, except for one extra question (question 7) in the second interview (see below). Interview questions 1-8 were mandatory, and questions a-q were optional probing questions. Researchers were free to use these probing questions or ask new questions to get a deeper understanding of the participant's opinion during the interview. The interviews also included a short explanation beforehand. The interview guide was printed for each interview, with additional space for taking notes.

The interview questions were largely based on literature research. They were divided into three categories: attitude, trust and comfort. Together, these three categories should give insight into the general acceptance of robots by students[23][24][25]. First, a manipulation check was done to make sure the participants had a correct impression of the story and the emotion the robot was supposed to convey. This was followed by questions about attitude, focusing on the general impression that the students had of the robot, their likes and dislikes towards the robot, and their general preference for a specific robot emotion. These questions were mainly based on the research of Wu et al. (2014)[26] and Del Valle-Canencia (2022)[27]. The questions about the trustworthiness of the different emotional states of the robot are based on Jung et al. (2021)[28] and Madsen and Gregor (2000)[29]. The comfort category focused mainly on how comfortable the participants felt with the robot. These questions were based on the research of Erken (2022)[30]. Lastly, the participants were asked whether they think Pepper would be suitable for use on campus and for which tasks. The complete interview, including the introduction, can be seen below.

“You have now watched three iterations of the robot telling a story. During each iteration the robot had a different emotional state. We will now ask you some questions about the experience you had with the robot. We would like to emphasize that there are no right or wrong answers. If there is a question that you would not like to answer, we will skip it.” 

# What was your impression of the story that you heard?
#* a. Briefly describe, in your own words, the emotions that you felt when listening to the three versions of the story.
#* b. Which emotion did you think would best describe the story?
# How did you perceive the feelings that were expressed by the robot?
#* c. How did the robot convey this feeling?
#* d. Did the robot do something unexpected?
# What did you like/dislike about the robot during each of the three emotional states?
#* e. What are concrete examples of this (dis)liking?
#* f. How did these examples influence your feelings about the robot?
#* g. What were the effects of the different emotional states of Pepper compared to each other?
#** i. What was the most noticeable difference?
# Which of the three robot interactions do you prefer?
#* h. Why do you prefer this emotional state of the robot?
#* i. If sad/happy chosen: did you think the emotion had added value compared to the other states?
#* j. If neutral chosen: why did you not prefer the expression of the matching emotion?
# Which emotional state did you find the most trustworthy? And which one the least trustworthy?
#* k. Why was this emotional state the most/least trustworthy?
#* l. What did the robot do to cause your level of trust?
#* m. What did the other emotional states do to be less trustworthy?
# Which of the three emotional states of the robot made you feel the most comfortable in the interaction?
#* n. Why did this emotional state make you feel comfortable?
#* o. What effect did the other emotional states have?
# Do you think Pepper would be suitable as a campus assistant robot and why (not)?
#* p. If not, in what setting would you think it would be suitable to use Pepper?
#* q. What tasks do you think Pepper could do on campus?
# Are there any other remarks that you would like to leave, that were not touched upon during the interview, but that you feel are important?

=== Procedure ===

''Figure 2: Setup of the experiment''

When participants entered the experiment room, they were instructed to sit down on one of the five chairs in front of the robot Pepper (see Figure 2). Each chair was placed at a similar distance of about 1 meter from the robot. The robot Pepper was already moving rather calmly, to get participants used to the robot's movements. This was especially important since some participants were not yet familiar with Pepper.

The participants started with a short introduction of the study, as can also be seen in the research protocol linked below, and were asked to read and sign the consent form and then fill in a demographic survey on LimeSurvey. There were two sessions with five participants each. First, the participants listened to Pepper, who told the group a story about a polar bear, either a positive or a negative one (see Tables 4 and 5 for the exact condition order per session). Pepper told this story three times, each time with a different emotion: ‘happy’, ‘sad’ or ‘neutral’. These three iterations took approximately 6 minutes. When Pepper was finished, the five participants were asked to each follow one of the researchers into an interview room. These one-on-one interviews were held simultaneously and lasted approximately 10 minutes. After completing the interview, the participants went back to the room where they started and listened again to a story about a polar bear. If they had already listened to the positive story, they now proceeded to the negative one and vice versa. After Pepper had finished this story, the same interview was held under the same circumstances. After completion of the interview, there was a short debriefing. In total, the experiment lasted about 45-60 minutes.

{| class="wikitable"
|+Table 4: Condition order for experiment session 1
!Round
!Story
!Emotion 1
!Emotion 2
!Emotion 3
|-
|1||Negative||Neutral||Sad||Happy
|-
|2||Positive||Neutral||Happy||Sad
|}
{| class="wikitable"
|+Table 5: Condition order for experiment session 2
!Round
!Story
!Emotion 1
!Emotion 2
!Emotion 3
|-
|1||Positive||Neutral||Happy||Sad
|-
|2||Negative||Neutral||Sad||Happy
|}

An elaborate research protocol was also made, which explains in more detail what should be done during each part of the experiment.

=== Data analysis ===

The data analysis done in this research is a thematic analysis of the interviews. The interviews were audio-recorded, and a transcript was made from the recordings using Descript. As a first step in the data analysis process, the raw transcripts were cleaned. This includes removing filler words, like “uhm” and “nou” (Dutch for “well”), and other stop words. The speakers in the transcript were then labelled “interviewer” and “participant X” to make the data analysis easier. After the raw data was cleaned, the experimenters were instructed to familiarize themselves with the transcripts, after which they could start the initial coding stage. This means highlighting important answers and phrases that could help answer the research question. These highlighted texts were then coded using a short label. All the above steps were done by the experimenters individually for their own interviews.

The next step was combining and refining the codes. This was done during a group meeting where all the codes were carefully examined and combined into one list of codes. After this, the experimenters recoded their own interviews with this list of codes, and another experimenter checked each recoded transcript. Any uncertainties or discussions about the coding were discussed in the next group meeting. In this meeting, some codes were added, removed or adjusted, and the final list of codes was completed. The codes in this final list are divided into themes that can be used to eventually answer the research question. After this meeting, the interviews were once more recoded using the final list of codes, and the results were compiled from this final coding. The fully coded transcripts were all combined in one file.
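To illustrate the compilation step, the sketch below tallies how often each code occurs across the coded transcripts. It assumes a hypothetical file format (one "code<TAB>quote" pair per line); the actual coding in this study was done by hand in the transcripts themselves.

<syntaxhighlight lang="python">
from collections import Counter
from pathlib import Path

def tally_codes(transcript_dir):
    """Count code frequencies over all coded transcripts in a directory."""
    counts = Counter()
    for path in Path(transcript_dir).glob("*.txt"):
        for line in path.read_text(encoding="utf-8").splitlines():
            if "\t" in line:  # lines are assumed to be "code<TAB>quote"
                code, _quote = line.split("\t", 1)
                counts[code.strip()] += 1
    return counts

for code, n in tally_codes("coded_transcripts").most_common():
    print(f"{n:3d}  {code}")
</syntaxhighlight>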

== Results ==

=== Results of thematic analysis ===

An overview of the final codes and themes that emerged from the thematic analysis is shown in Figure 3, and an explanation of each code and theme is provided in Table 6.

{| class="wikitable"
|+Table 6: Overview of themes and codes with explanation
!Overarching theme
!Theme
!Subtheme
!Code
!Explanation
|-
|Impression of the experimental conditions||Impression of story|| || ||This theme includes all the impressions from the participants of the stories told by the robot.
|-
| || || ||Negative story perceived as sad||Negative story is perceived as sad or described in a sad way. Sad undertones included.
|-
| || || ||Negative story perceived as documenting||Negative story is perceived as documenting. This is a neutral, factual way of talking about the story.
|-
| || || ||Not/other understanding of the story||Difficulty understanding/following the story, influence on later perceptions or preferences.
|-
| || || ||Positive story perceived as positive||Positive story is perceived as happy, funny, entertaining, enthusiastic, etc. The tone is positive.
|-
| ||Robot emotion perception|| || ||This theme includes all the perceptions and opinions on the robot's emotional behavior.
|-
| || ||Happy emotion perception||Happy robot behavior perceived as happy||All behavior of the happy robot that was perceived as happy, entertaining, funny, excited, energetic, etc.
|-
| || || ||Happy robot behavior perceived as chaotic or not natural||All behavior of the happy robot that was perceived as too chaotic or random in movements, sometimes leading to not being natural.
|-
| || ||Sad emotion perception||Sad robot behavior perceived as sad||The behavior of the sad robot was perceived as sad.
|-
| || || ||Sad robot behavior perceived as shy or not confident||All behavior of the sad robot that was perceived as the robot feeling shy, hesitant, uncomfortable, not confident, etc.
|-
| || || ||Sad robot behavior perceived as uninterested||All behavior of the sad robot that was perceived as the robot not being interested in what it was telling.
|-
| || || ||Sad robot behavior perceived as documenting||All behavior of the sad robot that was perceived as the robot being serious or telling a story in a documenting way.
|-
| || ||Neutral emotion perception||Neutral robot behavior perceived as neutral||All behavior of the neutral robot that is perceived as not having a specific emotion.
|-
| || || ||Neutral robot behavior perceived as natural||All behavior of the neutral robot that is perceived as natural, human-like behavior.
|-
| || || ||Neutral robot behavior perceived as documenting||All behavior of the neutral robot that was perceived as the robot being serious or telling a story in a documenting way.
|-
|Influence on acceptance||Influence of robot on participant|| || ||This theme includes all the influences that participants experienced with direct regard to the robot's emotional behavior.
|-
| || ||Influence happy robot||Engaged by happy robot||The participant felt inspired, engaged or encouraged by the happy robot to listen to the story and pay attention.
|-
| || || ||Distracted by happy robot||The participant felt distracted by the happy robot's movements and the noise of those movements.
|-
| || ||Influence sad robot||Bored/disengaged by sad robot||The sad robot was perceived as boring and participants were disengaged by the robot.
|-
| || || ||Less distracted by sad robot||The sad robot was perceived as less distracting or allowing for more focus on the message than the other robots.
|-
| || ||Influence neutral robot||Engaged by neutral robot||The participant felt inspired, engaged or encouraged by the neutral robot to listen to the story and pay attention.
|-
| || || ||Less distracted by neutral robot||The neutral robot was perceived as less distracting or allowing for more focus on the message than the other robots.
|-
| || || ||Influence of participant expectation and experience||Participant remarks on how their expectations of the experiment and experience with the robot Pepper influenced their perception.
|-
| || || ||Influence robot behavior on trustworthiness||The reasoning behind why a certain robot was/wasn’t trustworthy based on its behavior.
|-
| || || ||Influence robot behavior on comfortability||The reasoning behind why a certain robot did make the participant feel (un)comfortable based on its behavior.
|-
| || || ||Importance eye contact||Participant addresses the effect of the robot (not) making eye contact.
|-
| || || ||Importance of gesture timing with speech||Participant addresses that gestures did not match with speech and what effect this had on them.
|-
| ||Emotion-message match|| || ||These are the codes that look at the match between the emotional behaviors displayed by the robots and the message that is told by the robot, and how important emotions are in storytelling.
|-
| || ||Opinion congruent emotion||Congruent emotion did match||The robot behavior that was expected to match did match the story that the robot told.
|-
| || || ||Congruent emotion didn't match||The robot behavior that was expected to match the story did not match according to participants.
|-
| || ||Opinion non-congruent emotion||Non-congruent emotion didn't match||The robot behavior did not match the story that the robot told.
|-
| || || ||Non-congruent emotion did match||The robot behavior that was not expected to match the story did match according to participants, though this seemed due to numerous different reasons, such as not understanding the story.
|-
| || || ||Value of emotion to story||Discusses if the participant felt that the emotion had added value to the storytelling or not.
|-
| ||Emotional robot behavior preference|| || ||These are all the preferences that the participants expressed regarding the robot's emotional behavior.
|-
| || ||Robot preferences||Neutral robot preferred||Participant indicates a preference for the neutral robot.
|-
| || || ||Happy robot preferred||Participant indicates a preference for the happy robot.
|-
| || || ||Sad robot preferred||Participant indicates a preference for the sad robot.
|-
| || || ||Combination of robots preferred||The participant preferred the robot with a combination of, or switch between, multiple emotional states.
|-
| || || ||Voice preference||Discusses what the participant liked/disliked about the voice of the robot.
|-
| ||Application of Pepper|| || ||This theme looks at the real-life applications of Pepper and whether the robot would be suitable as an application.
|-
| || || ||Not suitable implementation||The Pepper robot is not suitable, or needs adjustments before it is suitable, in real life.
|-
| || || ||Suitable implementation||The Pepper robot would be suitable for specific roles (navigation, guidance, administration tasks, etc.) as is.
|}
''Figure 3: Overview of codes''

==== Impression of experimental conditions ====

The overarching theme ‘Impression of experimental conditions’ arose from questions regarding how the participants perceived the emotional story and the emotional robot behaviour. These answers were gathered to perform a manipulation check on the experimental conditions, that is, to check whether they were perceived as intended.

===== Impression of story =====

For the positive story that was told by Pepper, it was found that nearly all participants perceived it as a positive story and described it as happy, funny, entertaining or with another positive adjective, as shown in these quotes: “It was a very cute story. A nice story.” (Participant 013 – PS, [00:00:39]), “It was a happy story just because, it had a happy ending.” (Participant 014 – PS, [00:01:06]) and “I thought it was a funny story.” (Participant 023 – PS, [00:00:33]).

The negative story, on the other hand, was sometimes perceived as sad and sometimes as more documenting and factual, with no clear emotional message. The following quotes illustrate this: “The general tone of the story was a bit sad, a bit anxious.” (Participant 012 – NS, [00:00:45]) and “The story sounded like a documentary.” (Participant 022 – NS, [00:01:09]).

Moreover, not all participants understood the story as intended or at all and this seemed to have influenced their attitude towards the emotional states of the robot. Some participants indicated that they found the story hard to follow due to difficult words or that they did not know what to expect at the beginning of the experiment and were a bit distracted.

===== Robot emotion perception =====

Additionally, the participants were asked which emotion they would assign to each of the robot's three emotional states. For the robot behavior that was intended as happy, most participants identified it as happy, enthusiastic, or another similar positive adjective. However, the happy robot behavior was also often perceived as chaotic, random or programmed in its movements, sometimes leading to the behaviour being seen as not natural. This could be linked to the fact that many participants noticed that the timing of the gestures did not fit the context of the story at that point. This was most noticeable in the happy robot, but participants also made this comment about the other robot conditions. Examples are given in the following quotes: “It seemed just very excited and maybe happy to tell the story” (Participant 024 – PS, [00:02:02]), “The more energetic one is just too much for me. I think the gestures don't make any sense. It's just really random and really chaotic.” (Participant 014 – PS, [00:07:32]) and “I also felt that the arm movements again did just not at all correlate with what a person presenting the same story would do. Because it just was too much." (Participant 012 – PS, [00:02:01]).

Concerning the sad robot behaviour, participants' opinions on the undertone of the behaviour varied more, from sad to shy to uninterested to more seriously documenting the story. The following quotes again illustrate the different opinions of the participants: "The robot was a bit more sad and more serious in their voice” (Participant 021 – NS, [00:02:18]) and “It was more like it was a person presenting who was afraid to present." (Participant 012 – NS, [00:03:15]).

Finally, the neutral robot behavior was successfully interpreted as neutral in emotion by most participants, and additionally as more seriously documenting the story. The neutral robot was often also seen as the most natural and human-like behaving robot. Some examples of participants illustrating this: “The first one wasn't really an emotion, but more informative or something.” (Participant 022 – NS, [00:01:09]) and "It felt more human like.” (Participant 024 – PS, [00:01:28]).

==== Influence on acceptance ====

The next overarching theme, ‘Influence on acceptance’, covers various factors that participants experienced regarding the robot's behavior. This includes specific behavioral characteristics that influenced how engaged participants felt while listening to the story, such as eye contact and the timing of gestures, but also behavior that was noted as influencing comfort and trustworthiness. These factors were often mentioned in relation to how engaged participants felt and how natural the robot's behavior was perceived to be.

===== Influence of robot on participant =====

Continuing with the subtheme ‘Influence of robot on participant’: the different emotional robot behaviors had different effects on the participants in terms of their focus on the story. Almost all participants indicated multiple times that the happy robot engaged them to listen, but that the extreme movements and the associated noise of the actuators distracted them a lot: “It tried to actively engage listeners” (Participant 012 – PS, [00:02:01]) and “It became distracting, I thought. But mostly also because the movement of the robot makes noise, and I'm going to pay more attention to that.” (Participant 022 – PS, [00:01:47]).

The sad robot seemed to have the opposite effect: participants were less distracted by the robot, but sometimes to the extent that they felt disengaged from listening or even bored, as demonstrated here: “There was too little motion and hand gestures. And I think for a long period of time, that would be too boring to listen to. But, the benefit from that, is that you are only focused on the story itself, not on the robot movement” (Participant 014 – NS, [00:03:40]).

The neutral robot also distracted the participants less, but some participants still felt engaged to pay attention to the story: “It's not as distracting as the other ones and the most being focused on the story.” (Participant 021 – NS, [00:05:09]). These factors clearly influenced the participants' opinion of each of the robot's emotional states and may ultimately have determined their preference for a certain robot emotion for the story that was told.

===== Eye contact, timing of gestures, trust and comfortability =====

For example, for the low engagement level of the sad robot, many participants gave their need for eye contact as an explanation. They felt less connected and engaged with the robot since it only looked downwards. Many participants also noted the incongruence between the gestures and speech as a reason for their low engagement with the robot: “Maybe when the story is building up, you can try to see if one gesture fits better with that part of the story. So you can really connect to it because you have a beginning point and end point of it” (Participant 011 – NS, [00:03:32]).  

From time to time, these factors were linked to the robot's trustworthiness and the participants' comfort while interacting with the robot. There were also other reasons for differences in trusting or feeling comfortable around a certain robot behaviour. For example, participants mentioned that they felt more comfort and trust with the happy robot during the positive story, since the gestures fitted: “Now I would say the happy one the most trustworthy, because it fits the story well. [...] I noticed the gestures is less heavy than in the previous story. Maybe because it blended so well.” (Participant 011 – PS, [00:05:35]). However, when a participant felt that the emotion did not match or was unpredictable, they felt less comfortable with the happy robot: “The active [happy] robot makes it maybe a bit uncomfortable because it was too happy in combination with the story that was told” (Participant 021 – NS, [00:05:40]) and “I just felt like it was trying way too hard. [...] He basically just stood there, I know. But it felt a bit unpredictable or something.” (Participant 023 – PS, [00:08:23]).

The lack of eye contact from the sad robot was often mentioned as a reason for a low level of trust and/or comfort: “And the scary one looked only down at you with those eyes. So it lacked a bit of motion. So that just affected their comfort.” (Participant 011 – PS, [00:07:25]). Some participants also found the sad robot to be less trustworthy because they perceived it as shy. “Well, the third one [sad robot] [...] was not trustworthy as it was telling the story as if it didn't want to tell you the story because it was [...] shy and drawn back. The second one [happy robot], you could call it trustworthy [...]. It was actively trying to convince the listeners that that story was of a certain emotion and maybe it overdid that.” (Participant 012 – PS, [00:08:41]).

Most often, the participants saw the neutral robot as the most comfortable or trustworthy, for example because it was the most natural: “Because if it's natural, then it's trustworthy.” (Participant 013 – PS, [00:06:12]). Also, the neutral robot made more eye contact, which many participants found more trustworthy: “Because the robot was really looking at us. And I think you would do that when you're confident about your own story and want to be trusted.” (Participant 021 – PS, [00:05:42]).

Moreover, for some participants, their expectations of the experiment had a notable influence on what they noticed during the conditions, or their previous experience with the robot determined a strong overall attitude towards it. “Because this was totally new, I didn't know what to expect. That's also maybe an aspect that maybe influences my thoughts upon the robot itself. Just because when you get the story from the robot, then a lot of things pop into my mind, [...] so you're really distracted by many things. And then, the second and the third robots [happy and sad robot], you're more like at rest [...]. So that may cause a different outputs from people, when they encounter to those robots for the first time.” (Participant 014 – NS, [00:07:53]).

==== Emotion-message match ====

Participants also noted a congruence or incongruence between the emotional state of the robot and the story that it told. Most often, participants indicated a match between the intended congruent emotional state and the story, or a mismatch between the intended incongruent emotional state and the story. The following quotes exemplify these occasions: “Its emotions, [...] the sad eyes, matched better with the story that was being told.” (Participant 012 – NS, [00:05:35]) and “This story was of course happy and then, more happy emotions would also be more realistic for the audience” (Participant 014 – PS, [00:01:57]).

However, during multiple interviews, participants had another understanding of the emotional state or story and therefore did not find it fitting, or they understood what emotional state was congruent, but they did not feel that it fitted well due to other disliked prominent behavior of the robot. As a result, most of these participants preferred the neutral robot, though some of them felt that the intended non-congruent emotion matched better. This is mentioned in the next quotes: “I would say the energetic one, because that would match the story. But for some reason, if I would have to trust the robot itself, then I would say the medium one [neutral robot], just because the over energetic one is... then I would get the feeling that they want to convince me” (Participant 014 – PS, [00:05:59]) and “The [negative] story sounded more like a documentary. I thought the first one [neutral robot] was better for a documentary. The second one [sad robot] was too sad. And the third one [happy robot] was too enthusiastic.” (Participant 022 – NS, [00:03:36]).

Occasionally, it was also discussed whether participants felt that the emotional states of the robot added value to the storytelling. When the neutral or non-congruent emotion was preferred, participants frequently indicated that the emotion had no added value, while the opposite held for congruent emotion preferences. For example, one of the participants preferred the happy robot during the positive story, and they mentioned that this happy emotion did give the story more value: “I could feel this joy of the people there, that could kind of adopt a polar bear, and a polar bear that could see his friends.” (Participant 011 – PS, [00:05:09]). However, there were also participants who did not want any emotion in the storytelling: “I don't think so. I think when you're trying to convey a story, you just want to have the story told. And you want the story to be the main purpose of telling.” (Participant 015 – PS, [00:04:30]).

==== Emotional robot behavior preference ====

Taking all this into account, the theme ‘Emotional robot behavior preference’ covers which emotional state of the robot each participant named as overall preferred, most trustworthy and most comfortable for each story. The results can be seen in Table 7, which is discussed later.

Additionally, some participants also discussed specifically which version(s) of the voices they liked or disliked and why. For example, “Now the last one [voice of happy robot] was definitely more engaging, but sometimes it was a bit loud, even though it didn't really fit the story. The first one [voice of neutral robot] was too neutral, so the intonations were on one line, so maybe a combination? Just having more intonation, I would say.” (Participant 011 – NS, [00:07:32]). Some participants also found the voice of the sad robot difficult to hear: “For the second one [sad robot], it was [...] less audible than the other, than the first one [neutral robot].” (Participant 015 – NS, [00:00:42]).

Application of Pepper

Finally, the theme ‘Application of Pepper’ encompasses the participants' responses to the question of which applications Pepper would be suitable for. The interviews showed that most participants could see Pepper being used in real life, but mostly for short, simple tasks, such as pointing people in the right direction on campus or giving basic information: “I can already see Pepper standing in front of the university and leading the way around. Answering questions like where do you have to go, and then the road.” (Participant 013 – PS, [00:07:59]). However, they also noted possible limitations in Pepper’s implementation. For example, they did not believe that Pepper could have deeper conversations with people, so it would not be suitable for more difficult campus-related tasks such as teaching or delivering more serious or emotional messages. Some also noted that they did not see the usefulness of Pepper, since humans will usually be able to do a task better. For example, “For storytelling, making people aware of really serious things, then I would think maybe less suitable because I don't really see the seriousness or maybe I don't really feel like I should listen to the robot. I think a real human would make more impact.” (Participant 021 – NS, [00:06:51]) and “There is already this reception down below as well, where you can ask questions, where there's always a person, which will still be better than most robots.” (Participant 015 – PS, [00:06:25]). Overall, however, most participants were positive about Pepper, provided that improvements would first be made.

Participants' preferences for emotional states of the robot

As introduced before, Table 7 shows which emotional state the participants preferred overall, which they found most trustworthy and which they found most comfortable. The table includes the total scores that each robot received from all the participants. When a participant preferred a combination of robots, each robot in that combination received a fraction of a full point.
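To make this scoring rule concrete, the following is a minimal sketch of how such a tally can be computed; the preference lists shown are hypothetical examples, not the actual data.

  from fractions import Fraction
  from collections import defaultdict

  # Each participant names one robot or a combination; a combination
  # splits that participant's single point equally over the named robots.
  preferences = [
      ["neutral"],                  # one robot named: a full point
      ["happy", "neutral"],         # combination of two: half a point each
      ["happy", "sad", "neutral"],  # all three: a third of a point each
  ]

  scores = defaultdict(Fraction)
  for named in preferences:
      share = Fraction(1, len(named))
      for robot in named:
          scores[robot] += share

  for robot, score in sorted(scores.items()):
      print(robot, score)  # e.g. neutral: 1 + 1/2 + 1/3 = 11/6 points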

From Table 7 it can be seen that the neutral robot was scored highest by the participants in all three categories, for both the positive and the negative story. Of the two more expressive robots, the happy robot received the higher scores in all three categories, again for both stories, while the sad robot received only a small fraction of the points.

In addition to the stand-alone preferences of the participants, it was also examined whether participants preferred the same robot, or the same combination of robots, in all three categories. The results can be found in Tables 8 and 9. For the positive story, seven out of ten participants preferred the same robot in all three categories. Most of them chose the neutral robot in all three categories, but sometimes a combination of the neutral and happy robot was preferred. The negative story had more mixed results: only three participants preferred the same robot in all three categories. Again, most of these preferred the neutral robot, while some preferred the happy robot.

Table 7: Indicated preferences for emotional states of the robot during each of the stories

General          Positive story   Negative story
Happy robot      2                2⅔
Sad robot        0                2⅔
Neutral robot    8                4⅔

Trust            Positive story   Negative story
Happy robot      3                1½
Sad robot        0                ½
Neutral robot    7                8

Comfort          Positive story   Negative story
Happy robot      2½               2½
Sad robot        1                ½
Neutral robot    6½               7
Table 8: Indicated preference distribution for the positive story, showing if participants chose the same robot behavior for multiple preference categories

Participant   General                         Trust                           Comfort                         Conclusion
1             Neutral                         Neutral                         Neutral                         All three the same
2             Neutral                         Happy                           Happy                           Trust + Comfort the same
3             Combination happy and neutral   Combination happy and neutral   Neutral                         General + Trust the same
4             Happy                           Happy                           Happy                           All three the same
5             Neutral                         Neutral                         Sad                             General + Trust the same
6             Neutral                         Neutral                         Neutral                         All three the same
7             Neutral                         Neutral                         Neutral                         All three the same
8             Combination happy and neutral   Combination happy and neutral   Combination happy and neutral   All three the same
9             Neutral                         Neutral                         Neutral                         All three the same
10            Neutral                         Neutral                         Neutral                         All three the same


Table 9: Indicated preference distribution for the negative story, showing if participants chose the same robot behavior for multiple preference categories.

Participant   General                           Trust                           Comfort                         Conclusion
1             Combination happy and neutral     Neutral                         Combination happy and neutral   General + Comfort the same
2             Combination sad and neutral       Neutral                         Neutral                         Trust + Comfort the same
3             Neutral                           Neutral                         Neutral                         All three the same
4             Combination happy and neutral     Neutral                         Combination happy and neutral   General + Comfort the same
5             Combination of all three robots   Combination happy and neutral   Combination happy and neutral   Trust + Comfort the same
6             Sad                               Neutral                         Combination sad and neutral     All three different
7             Happy                             Happy                           Happy                           All three the same
8             Neutral                           Neutral                         Neutral                         All three the same
9             Combination sad and happy         Combination sad and neutral     Neutral                         All three different
10            Combination of all three robots   Neutral                         Neutral                         Trust + Comfort the same

Discussion

Main findings

Looking back at the research question, “To what extent does a match between the displayed emotion of a social robot and the content of the robot’s spoken message influence the acceptance of this robot?”, in combination with the results, it becomes clear that most participants were not strongly influenced by the match of emotion to story. As shown in Table 7, the neutral robot state was clearly the most preferred for each story and each aspect of acceptance. Therefore, our pre-research expectation that the congruent emotion would be most preferred was not supported. Taking into account the in-depth reasoning of participants during the interviews, this preference did not always depend on how suitable the robot's emotional state was for the specific story. Rather, in most cases, participants' preferences stemmed from disliking the happy and sad emotional robot states and finding the neutral behavior the most natural and human-like, or a good middle ground between the other, more extreme states.

Moreover, it was expected that the importance of congruent emotion displaying would be more apparent for the negative story than for the positive story, meaning a clearer preference for the congruent emotion and a weaker preference for the incongruent emotion compared to the positive story. However, Table 7 shows inconsistent results. Only for the overall preference is the congruent emotion more preferred for the negative story than for the positive story. In all other cases, the congruent emotion is more preferred and the incongruent emotion less preferred for the positive story than for the negative story. This may indicate that congruent emotion displaying is more important for positive messages. The higher level of trust in the happy robot could also be explained by positive emotional expressions, compared to negative ones, leading to higher anthropomorphic trust in social robots[31].

Comparing these results to similar research, such as the study of Van Otterdijk et al. (2021)[14], shows contrasting outcomes. Van Otterdijk and colleagues found no clear preference among different robot behaviors, whereas our study found a prominent preference for the neutral robot behavior. Moreover, they found that for delivering a negative message, the congruent sad robot behavior was significantly more preferred. This again contrasts with our results, where the importance of congruence was more apparent for the positive story. However, it is important to note a clear difference in robot behavior, namely that in the study of Van Otterdijk et al., Pepper was approaching participants, whilst our Pepper was standing still. As the authors also explain, the surprising results could be caused by the emotional robot behavior not being experienced as intended: in their study, the neutral and happy behavior were perceived as quite similar, whereas in our study participants indicated a clear difference. Moreover, they also discussed that in the context of older adults in a care center, bringing sad news may be more triggering. Appel et al. (2021)[32] conducted a similar study with the robot Reeti, with the differences that only facial expressions were manipulated and the robot told the stories from a first-person perspective. They found that congruent emotion displaying positively affected transportation into the story world and led to more positive evaluations of the robot. The differences in results can again be explained by differences in robot behavior or the robot’s anthropomorphism, but also by the extent to which the stories and emotions were interpreted as intended.

Though the results do not give a clear answer to the research question, the in-depth analysis of the interviews resulted in some theoretical implications and practical recommendations for programming emotional behavior with Pepper. The results showed varying opinions about suitable applications for the Pepper robot as campus assistant, such as guiding or providing general information, but not tasks that deal with emotions or responsibility, and occasionally still a preference for humans performing the task. This indicates that the targeted user group is not yet very accepting of Pepper, based on the interaction during the experiment and on previous attitudes, especially when it comes to its emotion displaying. The participants’ dislike of the emotional states probably influenced this. Consequently, improvements must be made to how Pepper displays emotions to change the users’ attitude towards Pepper engaging in tasks involving emotions. First of all, the happy robot behavior was very often perceived as far too extreme in its movements, while the sad robot was noted as moving too little, and the neutral robot was seen as the most natural. The emotional robot behavior used in this study, and the differences between states, were too extreme; participants clearly indicated that they prefer more subtle changes and natural movements. As a practical recommendation, we therefore suggest using the neutral robot behavior as a basis from which only small adjustments are made to create robot behavior for other emotions. These adjustments should be carefully determined by investigating how humans naturally convey these emotions while telling a story.
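To illustrate this recommendation in code, below is a minimal sketch of deriving a subtler emotional pose as a small adjustment of a neutral basis; the joint names follow Pepper's naming scheme, but the angle values and the 30% intensity are illustrative assumptions, not measured settings.

  # Hypothetical joint-angle presets in radians; the values are illustrative.
  NEUTRAL = {"LShoulderPitch": 1.40, "RShoulderPitch": 1.40, "HeadPitch": 0.10}
  HAPPY = {"LShoulderPitch": 0.60, "RShoulderPitch": 0.60, "HeadPitch": -0.15}

  def blend(base, target, intensity):
      # Move each joint only a fraction of the way from the neutral base
      # towards the full emotional pose (0.0 = neutral, 1.0 = full).
      return {joint: base[joint] + intensity * (target[joint] - base[joint])
              for joint in base}

  # A subtle happy expression: only 30% of the way towards fully happy.
  subtle_happy = blend(NEUTRAL, HAPPY, 0.3)
  print(subtle_happy)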

Limitations

In this study, despite careful planning and execution, several constraints emerged that warrant acknowledgment. Understanding these limitations is crucial for contextualizing the results and guiding future research. The following section outlines the key limitations that were encountered during the experimental process, shedding light on areas for refinement and further investigation.

There were a few cases in which a participant misunderstood a story. The stories are crucial components in determining participants' preferences for congruent versus incongruent scenarios; when participants misunderstand the narrative, their responses may not accurately reflect their true preferences or emotional reactions[33]. In addition, the negative story was sometimes perceived as a newspaper story. This was because it lacked personal emotional value, as the decision was made to lower the risk of participants experiencing emotional harm due to the storytelling. The study by Appel et al. (2021)[32] made use of a story that was personal to the robot itself. If this had been done in this experiment, it could have added the emotional value properly and possibly prevented the sense of detachment from the emotional essence of the story.

Due to the duration of the course and the availability of Pepper, there was limited time to get familiar with the robot in combination with the Choregraphe software. This resulted in the choice of using the pre-installed Choregraphe behaviors. Consequently, there was not enough time to explore methods to make the robot's expressions more nuanced and less exaggerated than some participants perceived them to be, particularly regarding the portrayal of happiness. Some of the participants also stated that they would have preferred a different emotion if it had not been as extreme.

For feasibility reasons, the participants were all exposed to the same order of emotions for each of the stories; whether they heard that order with the positive or the negative story first depended on which iteration of the experiment they participated in. Considering the peak-end rule, the predetermined mood sequence could have impacted the participants’ overall experience with Pepper. If a participant felt a positive or negative emotional peak near the end of the experiment, they might have attributed that feeling to the entire experiment, regardless of how they felt during, for example, the first mood iteration. Thus, the sequence of mood exposure could have influenced participants' hindsight evaluations[34].
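For future runs, such order effects could be reduced by counterbalancing the emotion order across participants. The following is a minimal sketch under the assumption that all six possible orders of the three states are assigned in rotation; the condition names match this study, everything else is illustrative.

  from itertools import permutations

  # All six possible orders of the three emotional states.
  EMOTIONS = ("neutral", "happy", "sad")
  ORDERS = list(permutations(EMOTIONS))

  def order_for(participant_number):
      # Rotate through the six orders so each occurs equally often;
      # with a multiple of six participants, the design is fully balanced.
      return ORDERS[participant_number % len(ORDERS)]

  for participant in range(6):
      print(participant + 1, order_for(participant))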

As the researchers on this project were not very experienced with interviewing, there were occasions where the interviewer could have asked for more explanation but did not. This resulted in missing possibly relevant information for the study and a less deep understanding of the results.

The last limitation was the limited screening that was done on participants before they were invited to the experiment. As the experiment relied on the ability of participants to pick up detailed emotional cues, some participants might have been better suited than others. Some conditions, such as autism, can affect the perception and interpretation of emotions. As the participants were not specifically asked whether they had autism or any other condition that could affect emotional perception, this might have contributed to misinterpretations of the displayed emotions[35].

Recommendations for future research

Although the outcomes of this study deviated from expectations, they still shed valuable light on the field of Human-Robot Interaction (HRI), which holds significant relevance in contemporary society. Future research endeavours can build upon these findings, refining them by addressing the identified limitations. Primarily, it became evident that participants did not always interpret the narrative as intended. To solve this, preliminary testing of the story and the robot's emotional cues on non-participants could refine the storylines before the main study. Additionally, participants noted difficulties comprehending the stories and maintaining focus during the robot's movements. Measures such as pre-reading the story, or initially having the robot deliver the story without movements, could mitigate this issue. Moreover, Pepper's chest screen could be utilized to visually complement the story, enhancing engagement. These strategies aim to increase the likelihood that the stories have the intended effect, and therefore provide more valuable insights into emotion-message congruence.

One of the limitations mentioned was the choice of pre-installed Choregraphe behaviours, which led to the emotions being perceived as too exaggerated by many participants. In future research, it is recommended to spend more time getting familiar with suitable programs and methods for programming Pepper. This could lead to more nuanced and subtle expressions that would be more appealing to listeners. For this, more research would also have to be done into emotion perception in human-robot interaction with Pepper, paying attention to which cues humans use to pick up on certain emotions of robots. Furthermore, by manually programming Pepper, more variation can be introduced in the intensities of the emotions during the storytelling, so that the gestures and emotions shown by Pepper correspond better to the context of the story that is told. Besides, a wider range of emotions could be used in future research, instead of only “happy”, “sad” and “neutral”, to gain more knowledge about the exact emotion that is preferred by participants, as they now often saw the “happy” and “sad” robot as too extreme. In real life, human emotions manifest across a nuanced spectrum; they are subtle and complex, since the perception of emotions is highly sensitive to context and personal factors. This implies the importance of considering this complexity in refining Pepper's emotional behavior[36].
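As a starting point for such manual programming, below is a minimal sketch using the NAOqi Python SDK (qi) instead of Choregraphe's pre-installed behaviors; the robot's IP address is a placeholder and the gesture name is one example from the standard animation library, which may differ per robot version.

  import qi

  # Connect to Pepper over the network (the IP address is a placeholder).
  session = qi.Session()
  session.connect("tcp://192.168.1.10:9559")
  animated_speech = session.service("ALAnimatedSpeech")

  # Annotate a single sentence with one mild gesture instead of running
  # an exaggerated whole-body behavior alongside the entire story.
  fragment = (
      "^start(animations/Stand/Gestures/Explain_1) "
      "The rescue team found the orphaned cub near the station. "
      "^wait(animations/Stand/Gestures/Explain_1)"
  )
  animated_speech.say(fragment)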

In future investigations, it would be compelling to conduct a study with a more extensive participant pool. As depicted in Tables 7, 8 and 9, while many participants exhibited consistent preferences for specific robots, outliers were observed: individuals with differing perceptions of or opinions on robot emotions. Replicating the experiment on a broader scale as confirmatory research, with appropriate improvements, allows for statistical significance testing and could enhance the generalizability of the results. Only then is it possible to validate whether the observed patterns hold true for a larger demographic.

In general, participants expressed a positive attitude toward having Pepper on campus for providing basic information like directions, although with some suggested adjustments. Further research is necessary to delve into the specifics of Pepper's role as a campus robot, including the type of information it can convey and the way it will do this (e.g., via voice, emotion display, movements). Since this study primarily examined students' general attitudes towards Pepper rather than its specific use as a campus robot, further research can aim for more targeted exploration in this regard.

Planning

Each week, there was a mentor meeting on Monday morning followed by a group meeting. Another group meeting was held on Thursday afternoon and by Sunday afternoon the wiki was updated for work done that week (weekly deliverable).

Week 1

  • Introduction to the course and team
  • Brainstorm to come up with ideas for the project and select one (inform course coordinator)
  • Conduct literature review
  • Specify problem statement, user group and requirements, objectives, approach, milestones, deliverables and planning for the project

Week 2

  • Get confirmation for using a robot lab, and which robot  
  • Ask/get approval for conducting this study
  • Create research proposal (methods section of research paper)
  • If approval is already given, start creating survey, programming the robot or creating video of robot

Week 3

  • If needed, discuss final study specifics, including planning the session for conducting the study
  • If possible, finalize creating survey, programming the robot or creating video of robot
  • Make consent form
  • Start finding and informing participants

Week 4

  • Final arrangements for study set-up (milestone 1)
  • Try to start with conducting the study  

Week 5

  • Finish conducting the study (milestone 2)

Week 6

  • Conduct data analysis
  • Finalize methods section, such as including participant demographics and incorporate feedback
  • If possible, start writing results, discussion and conclusion sections

Week 7

  • Finalize writing results, discussion and conclusion sections and incorporate feedback, all required research paper sections are written (milestone 3)
  • Prepare final presentation

Week 8

  • Give final presentation (milestone 4)
  • Finalize wiki (final deliverable)
  • Fill in peer review form (final deliverable)

Individual effort per week

Week 1

Name Total Hours Break-down
Danique Klomp 13.5 Intro lecture (2h), group meeting (2h), group meeting (2h), literature search (4h), writing summary LS (2h), writing problem statement first draft (1.5h)
Liandra Disse 13.5 Intro lecture (2h), group meeting (2h), searching and reading literature (4h), writing summary (2h), group meeting (2h), updating project and meeting planning (1.5h)
Emma Pagen 12 Intro lecture (2h), group meeting (2h), literature search (4h), writing a summary of the literature (2h), writing the approach for the project (1h), updating the wiki (1h)
Isha Rakhan 11 Intro lecture (2h), group meeting (2h), group meeting (2h), collecting literature and summarizing (5h)
Margit de Ruiter 13 Intro lecture (2h), group meeting (2h), literature research (4h), writing summary literature (3h), group meeting (2h)

Week 2

Name Total Hours Break-down
Danique Klomp 16.5 Tutor meeting (35min), group meeting 1 (2.5h), group meeting 2 (3h), send/respond to mail (1h), literature interview protocols and summarize (3h), literature on interview questions (6.5h)
Liandra Disse 12 Tutor meeting (35min), group meeting (3h), write research proposal (3h), group meeting (3h), finalize research proposal and create consent form (2.5h)
Emma Pagen 11.5 Tutor meeting (35min), group meeting (3h), write research proposal (2h), group meeting (3h), finalize research proposal and create consent form (1.5h), updating wiki (1.5h)
Isha Rakhan 10 Research on programming (7h), group meeting (3h)
Margit de Ruiter 11.5 Tutor meeting (35min), group meeting (3h), read literature Pepper and summarize (3h), group meeting (3h), research comfort question interview (2h)

Week 3

Name Total Hours Break-down
Danique Klomp 14 Tutormeeting (35min), groupmeeting 1(3h), meeting Task (3h), preparation Thematic analysis & protocol (2h), mail and contact (1,5h), meeting Zoe (1h), group meeting (3h)
Liandra Disse 12 Tutormeeting (35min), groupmeeting 1(3h), meeting Task (3h), update (meeting) planning (1h), prepare meeting (1h),  group meeting (3h), find participant (30min)
Emma Pagen 12 Tutormeeting (35min), groupmeeting 1(3h), finish ERB form (1h), create lime survey (1,5h), make an overview of the content sections of final wiki page (1h), group meeting 2 (3h), updating the wiki (2h)
Isha Rakhan 12,5 Tutormeeting (35min), groupmeeting 1(3h), meeting zoe (1h), group meeting (3h), programming (5h)
Margit de Ruiter *was not present this week, but told the group in advance and had a good reason*

Week 4

Name Total Hours Break-down
Danique Klomp 11.5 Tutor meeting (35min), group meeting (3h), review interview questions (1h), finding participants (0.5h), mail and contact (1h), reading and reviewing wiki (1.5h), group meeting (4h), lab preparations (1h)
Liandra Disse 11 Prepare and catch up after missed meeting due to being sick (1.5h), find participants (0.5h), planning (1h), group meeting (4h), set up final report and write introduction (4h)
Emma Pagen 13 Tutor meeting (35min), group meeting (3h), adding interview literature (2h), find participants (0.5h), group meeting (4h), reviewing interview questions (1h), going over introduction (1h), updating wiki (1h)
Isha Rakhan 11.5 Tutor meeting (35min), group meeting (3h), research and implement AI voices (2h), documentation choices Pepper behavior (2h), group meeting (4h)
Margit de Ruiter 11 Tutor meeting (35min), group meeting (1h), find participants (0.5h), testing interview questions (1h), group meeting (4h), start writing methods (4h)

Week 5

Name Total Hours Break-down
Danique Klomp 24.5 Tutor meeting (30min), group meeting (experiment) (3h), transcribe first round of interviews (2h), familiarize with interviews (5h), highlight interviews (2h), group meeting (experiment) (3h), transcribe second round of interviews (2h), first round of coding (3h), refine and summarize codes (2h), prepare next meeting (1h), adjust participants in methods section (1h)
Liandra Disse 21.5 Tutor meeting (30min), group meeting (experiment) (3h), transcribe, familiarize and code first interviews (6h), incorporate feedback introduction (1h), group meeting (experiment) (3h), transcribe, familiarize, code second interviews and refine codes (7h), extend methods section (1h)
Emma Pagen 20.5 Tutor meeting (30min), group meeting (experiment) (3h), transcribe first round of experiments (3h), familiarize with interviews and coding of first interviews (4h), group meeting (experiment) (3h), transcribe second round of interviews (3h), familiarize with interviews and coding of second interviews (4h)
Isha Rakhan 14.5 Tutor meeting (30min), group meeting (experiment) (3h), group meeting (experiment) (3h), transcribing all of the interviews (4h), coding all of the interviews (4h)
Margit de Ruiter 17 Tutor meeting (30min), group meeting (experiment) (3h), transcribing all the interviews (7h), group meeting (experiment) (3h), coding all the interviews (3.5h)

Week 6

Name Total Hours Break-down
Danique Klomp 18.5 Tutor meeting (30min), group meeting first round coding (3h), second round of recoding interviews (2h), check recoding of other interviewer (2h), look at comments/check recoding of other interviewer (1h), preparations meeting Thursday (2h), create preference count document for the group meeting (2.5h), group meeting (3h), recode and finalize transcripts (1.5h), work on thematic map and finalizing the themes/codes (1h)
Liandra Disse 17.5 Tutor meeting (30min), group meeting first round coding (3h), second round of recoding interviews (2h), check recoding of other interviewer, go over comments on own interview and make suggestions for adjusting codes (4h), planning (1h), group meeting (3h), finalize transcripts (1h), start on results section and give feedback on methods (3h)
Emma Pagen 16.5 Tutor meeting (30min), group meeting first round coding (3h), second round of coding (2h), check coding of other interviewer (2h), adjust own coding based on suggestions from other group member (1h), group meeting (3h), recoding after group meeting and finalize transcript (2h), finalize method (1h), update wiki (2h)
Isha Rakhan 15 Tutor meeting (30min), group meeting first round coding (3h), second round of coding (2.5h), check coding of other interviewer (2h), check codes feedback other interviewer (1h), group meeting (2h), adjusting codes after group meeting (1h), working on the "Pepper" part of the report (3h)
Margit de Ruiter 15 Tutor meeting (30min), group meeting first round coding (3h), second round of coding (2.5h), check coding of other interviewer (2.5h), check codes feedback other interviewer (2h), group meeting (2h), adjusting codes after group meeting (1h), check overview with themes and codes (1.5h)

Week 7

Name Total Hours Break-down
Danique Klomp 13 Tutor meeting (30min), group meeting (3h), group meeting (2h), write preference results (1.5h), read main findings (1h), write presentation text (2h), adjust presentation template (1h), prepare presentation (2h)
Liandra Disse 15 Tutor meeting (30min), group meeting (3h), clean document of all interviews (1h), outline discussion (0.5h), planning (0.5h), extend results section (3h), group meeting (3h), write discussion main findings (3.5h)
Emma Pagen 10.5 Tutor meeting (30min), group meeting (3h), working on results section (2h), group meeting (3h), working on discussion section (2h)
Isha Rakhan 10.5 Tutor meeting (30min), group meeting (3h), group meeting (3h), improving the introduction (1h), working on the methods section (3h)
Margit de Ruiter 11.5 Tutor meeting (30min), group meeting (3h), prepare presentation ppt and word document (3h), feedback intro & methods change (0.25h), group meeting (3h), practice presentation (1.5h)

Week 8

Name Total Hours Break-down
Danique Klomp 10.5 Group meeting (2h), adjust presentation text (0.5h), check sources of final report (0.5h), adjust preference results text (1h), practice presentation (2.5h), presentation session (3h), check the wiki (1h)
Liandra Disse 10.5 Finalize main findings (1h), give and incorporate feedback on whole report, especially discussion (3h), check presentation (0.5h), group meeting (2h), presentation session (3h), check the wiki (1h)
Emma Pagen 12 Finalize future research in discussion (2h), group meeting (2h), read through report (1.5h), presentation session (3h), updating wiki (2.5h), final wiki check (1h)
Isha Rakhan 10.5 Group meeting (2h), read through the report and give feedback (2h), writing the limitations (3h), presentation session (3h), final wiki check (1h)
Margit de Ruiter 11 Read through whole report and give feedback (2h), practice presentation (2h), group meeting (2h), practice presentation (1h), presentation session (3h), final wiki check (1h)

Total hours spent during the course

Name Total Hours Result
Danique Klomp 122 (avg. 15.25) +0.2
Liandra Disse 113 (avg. 14.1) +0.2
Emma Pagen 108 (avg. 13.5) 0
Isha Rakhan 95.5 (avg. 11.9) -0.2
Margit de Ruiter 90 (avg. 11.25) -0.2

Sources

  1. Biba, J. (2023, March 10). What is a social robot? Retrieved from Built In: https://www.nature.com/articles/s41598-020-66982-y#citeas
  2. Borghi, M., & Mariani, M. (2022, September). The role of emotions in the consumer meaning-making of interactions with social robots. Retrieved from Science Direct: https://www.sciencedirect.com/science/article/pii/S0040162522003687
  3. Kirby, R., Forlizzi, J., & Simmons, R. (2010). Affective social robots. Robotics and Autonomous Systems, 58(3), 322–332. https://doi.org/10.1016/J.ROBOT.2009.09.015
  4. Breazeal, C. (2004). Designing Sociable Robots. Designing Sociable Robots. https://doi.org/10.7551/MITPRESS/2376.001.0001
  5. Chuah, S. H. W., & Yu, J. (2021). The future of service: The power of emotion in human-robot interaction. Journal of Retailing and Consumer Services, 61, 102551. https://doi.org/10.1016/J.JRETCONSER.2021.102551
  6. Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3–4), 143–166. https://doi.org/10.1016/S0921-8890(02)00372-X
  7. Marcos-Pablos, S., & García‐Peñalvo, F. J. (2021). Emotional Intelligence in Robotics: A Scoping review. In Advances in Intelligent Systems and Computing (pp. 66–75). https://doi.org/10.1007/978-3-030-87687-6_7
  8. Miwa, H., Itoh, K., Matsumoto, M., Zecca, M., Takariobu, H., Roccella, S., Carrozza, M. C., Dario, P., & Takanishi, A. (n.d.). Effective emotional expressions with emotion expression humanoid robot WE-4RII. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), 3, 2203–2208. https://doi.org/10.1109/IROS.2004.1389736  
  9. Crumpton, J., & Bethel, C. L. (2015). A Survey of Using Vocal Prosody to Convey Emotion in Robot Speech. International Journal Of Social Robotics, 8(2), 271–285. https://doi.org/10.1007/s12369-015-0329-4  
  10. Wang, S., Lilienfeld, S. O., & Rochat, P. (2015). The Uncanny Valley: Existence and Explanations. Review Of General Psychology, 19(4), 393–407. https://doi.org/10.1037/gpr0000056  
  11. Bishop, L., Van Maris, A., Dogramadzi, S., & Zook, N. (2019). Social robots: The influence of human and robot characteristics on acceptance. Paladyn, 10(1), 346–358. https://doi.org/10.1515/pjbr-2019-0028
  12. Johnson, D. O., Cuijpers, R. H., & van der Pol, D. (2013). Imitating Human Emotions with Artificial Facial Expressions. International Journal of Social Robotics, 5(4), 503–513. https://doi.org/10.1007/S12369-013-0211-1/TABLES/8
  13. Zhao, F. O., White, N. T., Cagiltay, B., Niedenthal, P. M., Michaelis, J., & Mutlu, B. (2023, January). (PDF) Designing Emotional Expressions for a Reading Companion Robot. https://doi.org/10.31234/osf.io/7p2ns
  14. Van Otterdijk, M., Barakova, E. I., Torresen, J., & Neggers, M. E. M. (2021). Preferences of Seniors for Robots Delivering a Message With Congruent Approaching Behavior. https://doi.org/10.1109/ARSO51874.2021.9542833
  15. Manzi, F., Sorgente, A., Massaro, D., Villani, D., Di Lernia, D., Malighetti, C., Gaggioli, A., Rossignoli, D., Sandini, G., Sciutti, A., Rea, F., Maggioni, M. A., Marchetti, A., & Riva, G. (2021). Emerging Adults’ Expectations about the Next Generation of Robots: Exploring Robotic Needs through a Latent Profile Analysis. Cyberpsychology, Behavior, and Social Networking, 24(5), 315–323. https://doi.org/10.1089/CYBER.2020.0161
  16. Björling, E. A., Thomas, K. A., Rose, E., & Çakmak, M. (2020). Exploring teens as robot operators, users and witnesses in the wild. Frontiers in Robotics and AI, 7. https://doi.org/10.3389/frobt.2020.00005
  17. Cui, M., Fang, J., & Zhao, Y. (2020). Emotion recognition of human body’s posture in open environment. 2020 Chinese Control And Decision Conference (CCDC). https://doi.org/10.1109/ccdc49329.2020.9164551
  18. M. Rabiei and A. Gasparetto, "A system for feature classification of emotions based on speech analysis; applications to human-robot interaction," 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 2014, pp. 795-800, doi: 10.1109/ICRoM.2014.6991001.
  19. I. Cohen, R. Looije and M. A. Neerincx, "Child's recognition of emotions in robot's face and body," 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 2011, pp. 123-124, doi: 10.1145/1957656.1957692.
  20. Kramer, S. & Horstmann, W. (2019). ​Perceptions and beliefs of academic librarians in Germany and the USA: a comparative study. LIBER Quarterly, 29(1), ​1​-18​. ​doi: https://doi.org/10.18352/lq.10314
  21. Cole, J. (2021, April). Orphaned Polar Bear That Loved to Hug Arctic Workers Gets New Life. Good News Network. Retrieved from: https://www.goodnewsnetwork.org/orphaned-polar-bear-rescued-russian-arctic/
  22. Alexander, B. (n.d.). USA Today Entertainment. Retrieved from Polar bear cub's agonizing struggle in Netflix's 'Our Planet II' is telling 'heartbreaker': https://eu.usatoday.com/story/entertainment/tv/2023/06/15/netflix-our-planet-2-polar-bear/70296362007/
  23. Wagner Ladeira, M. G. P., & Santini, F. (2023). Acceptance of service robots: a meta-analysis in the hospitality and tourism industry. Journal of Hospitality Marketing \& Management, 32(6), 694–716. https://doi.org/10.1080/19368623.2023.2202168
  24. Heerink, M. (2011). Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults. Proceedings of the 6th International Conference on Human-Robot Interaction, 147–148. https://doi.org/10.1145/1957656.1957704
  25. Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing Acceptance of Assistive Social Agent Technology by Older Adults: the Almere Model. International Journal of Social Robotics, 2(4), 361–375. https://doi.org/10.1007/s12369-010-0068-5
  26. Wu, Y.-H., Wrobel, J., Cornuet, M., Kerhervé, H., Damnée, S., & Rigaud, A.-S. (2014). Acceptance of an assistive robot in older adults: a mixed-method study of human–robot interaction over a 1-month period in the Living Lab setting. Clinical Interventions in Aging, 9(null), 801–811. https://doi.org/10.2147/CIA.S56435
  27. Valle-Canencia, Marta & Moreno, Carlos & Rodríguez-Jiménez, Rosa-María & Corrales-Paredes, Ana. (2022). The emotions effect on a virtual characters design–A student perspective analysis. Frontiers in Computer Science. 4. 10.3389/fcomp.2022.892597.
  28. Jung, M., Lazaro, M. J. S., & Yun, M. H. (2021). Evaluation of Methodologies and Measures on the Usability of Social Robots: A Systematic Review. Applied Sciences, 11(4). https://doi.org/10.3390/app11041388
  29. Madsen, Maria & Gregor, Shirley. (2000). Measuring human-computer trust.
  30. Erken, E. (2022). Chatbot vs. Social Robot: A Qualitative Study Exploring Students’ Expectations and Experiences Interacting with a Therapeutic Conversational Agent. Master’s thesis, Tilburg University, Tilburg.
  31. Song, Y., Tao, D., & Luximon, Y. (2023). In robot we trust? The effect of emotional expressions and contextual cues on anthropomorphic trustworthiness. Applied Ergonomics, 109, 103967. https://doi.org/10.1016/J.APERGO.2023.103967  
  32. Appel, M., Lugrin, B., Kühle, M., & Heindl, C. (2021). The emotional robotic storyteller: On the influence of affect congruency on narrative transportation, robot perception, and persuasion. Computers in Human Behavior, 120. https://doi.org/10.1016/J.CHB.2021.106749
  33. J. Xu, J. Broekens, K. Hindriks and M. A. Neerincx, "Effects of a robotic storyteller's moody gestures on storytelling perception," 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi'an, China, 2015, pp. 449-455, doi: 10.1109/ACII.2015.7344609.
  34. Kyeong-Soo Han, Han-kyu Lee, Youngho Jeong and Youn-Seon Jang, "Improvement of viewers' preference inference by applying the peak-end rule," 2013 International Conference on ICT Convergence (ICTC), Jeju, Korea (South), 2013, pp. 792-797, doi: 10.1109/ICTC.2013.6675481.
  35. C. Tsangouri, W. Li, Z. Zhu, F. Abtahi and T. Ro, "An interactive facial-expression training platform for individuals with autism spectrum disorder," 2016 IEEE MIT Undergraduate Research Technology Conference (URTC), Cambridge, MA, USA, 2016, pp. 1-3, doi: 10.1109/URTC.2016.8284067.
  36. Ben-Ze'ev, Aaron. (2001). The subtlety of emotions. Psycoloquy. 12.