PRE2023 3 Group4

This study was approved by the ERB on Sunday 03/03/2024 (number ERB2024IEIS22).

Listed below are links to important documents that were used for this study:

  • ERB form [1]
  • Research proposal [2]
  • Consent form [3]
  • Research protocol [4]

Group members

Name Student number Current study program Role or responsibility
Margit de Ruiter 1627805 BPT Note-taker
Danique Klomp 1575740 BPT Contact person
Emma Pagen 1889907 BAP End responsible Wiki update
Liandra Disse 1529641 BPT Planner
Isha Rakhan 1653997 BPT Programming responsible

Planning

Each week, there will be a tutor meeting on Monday morning followed by a group meeting. Another group meeting will be held on Thursday afternoon, and by Sunday afternoon the wiki will be updated with the work done that week (weekly deliverable).

Week 1

  • Introduction to the course and team
  • Brainstorm to come up with ideas for the project and select one (inform course coordinator)
  • Conduct literature review
  • Specify problem statement, user group and requirements, objectives, approach, milestones, deliverables and planning for the project

Week 2

  • Get confirmation for using a robot lab, and determine which robot to use
  • Ask/get approval for conducting this study
  • Create research proposal (methods section of research paper)
  • If approval is already given, start creating survey, programming the robot or creating video of robot

Week 3

  • If needed, discuss final study specifics, including planning the session for conducting the study
  • If possible, finalize creating survey, programming the robot or creating video of robot
  • Make consent form
  • Start finding and informing participants

Week 4

  • Final arrangements for study set-up (milestone 1)
  • Try to start conducting the study

Week 5

  • Finish conducting the study (milestone 2)

Week 6

  • Conduct data analysis
  • Finalize the methods section (e.g., including participant demographics) and incorporate feedback
  • If possible, start writing results, discussion and conclusion sections

Week 7

  • Finalize the results, discussion and conclusion sections and incorporate feedback, so that all required research paper sections are written (milestone 3)
  • Prepare final presentation

Week 8

  • Give final presentation (milestone 4)
  • Finalize wiki (final deliverable)
  • Fill in peer review form (final deliverable)

Individual effort per week

Week 1

Name Total Hours Break-down
Danique Klomp 13.5 Intro lecture (2h), group meeting (2h), group meeting (2h), literature search (4h), writing summary LS (2h), writing problem statement first draft (1.5h)
Liandra Disse 13.5 Intro lecture (2h), group meeting (2h), searching and reading literature (4h), writing summary (2h), group meeting (2h), updating project and meeting planning (1.5h)
Emma Pagen 12 Intro lecture (2h), group meeting (2h), literature search (4h), writing a summary of the literature (2h), writing the approach for the project (1h), updating the wiki (1h)
Isha Rakhan 11 Intro lecture (2h), group meeting (2h), group meeting (2h), collecting literature and summarizing (5h)
Margit de Ruiter 13 Intro lecture (2h), group meeting (2h), literature research (4h), writing summary literature (3h), group meeting (2h)

Week 2

Name Total Hours Break-down
Danique Klomp 16.5 Tutor meeting (35min), group meeting 1 (2.5h), group meeting 2 (3h), send/respond to mail (1h), literature on interview protocols and summarizing (3h), literature on interview questions (6.5h)
Liandra Disse 12 Tutor meeting (35min), group meeting (3h), write research proposal (3h), group meeting (3h), finalize research proposal and create consent form (2.5h)
Emma Pagen 11.5 Tutor meeting (35min), group meeting (3h), write research proposal (2h), group meeting (3h), finalize research proposal and create consent form (1.5h), updating wiki (1.5h)
Isha Rakhan 10 Research on programming (7h), group meeting (3h)
Margit de Ruiter 11.5 Tutor meeting (35min), group meeting (3h), read literature Pepper and summarize (3h), group meeting (3h), research comfort question interview (2h)

Week 3

Name Total Hours Break-down
Danique Klomp 14 Tutor meeting (35min), group meeting 1 (3h), meeting Task (3h), preparation thematic analysis & protocol (2h), mail and contact (1.5h), meeting Zoe (1h), group meeting (3h)
Liandra Disse 12 Tutor meeting (35min), group meeting 1 (3h), meeting Task (3h), update (meeting) planning (1h), prepare meeting (1h), group meeting (3h), find participant (30min)
Emma Pagen 12 Tutor meeting (35min), group meeting 1 (3h), finish ERB form (1h), create LimeSurvey (1.5h), make an overview of the content sections of final wiki page (1h), group meeting 2 (3h), updating the wiki (2h)
Isha Rakhan 12.5 Tutor meeting (35min), group meeting 1 (3h), meeting Zoe (1h), group meeting (3h), programming (5h)
Margit de Ruiter *was not present this week, but told the group in advance and had a good reason*

Week 4

Name Total Hours Break-down
Danique Klomp 11.5 Tutor meeting (35min), group meeting (3h), review interview questions (1h), finding participants (0.5h), mail and contact (1h), reading and reviewing wiki (1.5h), group meeting (4h), lab preparations (1h)
Liandra Disse 11 Prepare and catch up after missed meeting due to being sick (1.5h), find participants (0.5h), planning (1h), group meeting (4h), set up final report and write introduction (4h)
Emma Pagen 13 Tutor meeting (35min), group meeting (3h), adding interview literature (2h), find participants (0.5h), group meeting (4h), reviewing interview questions (1h), going over introduction (1h), updating wiki (1h)
Isha Rakhan 11.5 Tutor meeting (35min), group meeting (3h), research and implement AI voices (2h), documentation choices Pepper behavior (2h), group meeting (4h)
Margit de Ruiter 11 Tutor meeting (35min), group meeting (1h), find participants (0.5h), testing interview questions (1h), group meeting (4h), start writing methods (4h)
Week 5

Name Total Hours Break-down
Danique Klomp 24.5 Tutor meeting (30min), group meeting (experiment) (3h), transcribe first round of interviews (2h), familiarize with interviews (5h), highlight interviews (2h), group meeting (experiment) (3h), transcribe second round of interviews (2h), first round of coding (3h), refine and summarize codes (2h), prepare next meeting (1h), adjust participants in methods section (1h)
Liandra Disse 21.5 Tutor meeting (30min), group meeting (experiment) (3h), transcribe, familiarize and code first interviews (6h), incorporate feedback introduction (1h), group meeting (experiment) (3h), transcribe, familiarize, code second interviews and refine codes (7h), extend methods section (1h)
Emma Pagen 20.5 Tutor meeting (30min), group meeting (experiment) (3h), transcribe first round of experiments (3h), familiarize with interviews and coding of first interviews (4h), group meeting (experiment) (3h), transcribe second round of interviews (3h), familiarize with interviews and coding of second interviews (4h)
Isha Rakhan 14.5 Tutor meeting (30min), group meeting (experiment) (3h), group meeting (experiment) (3h), transcribing all of the interviews (4h), coding all of the interviews (4h)
Margit de Ruiter Tutor meeting (30min), group meeting (experiment) (3h)

Pepper

Pepper, the Humanoid and Programmable Robot[1]

Pepper was launched in June 2014 and was initially designed for business-to-business applications. It then attracted interest all over the world for multiple other applications, for example in the business-to-consumer, business-to-academics and business-to-developers fields, and was eventually adapted for business-to-consumer purposes [2]. Currently, Pepper is deployed in thousands of homes and schools.

Pepper is capable of exhibiting body language, perceiving and interacting with its environment, and moving itself around. The robot can also analyse other people's expressions and voice tones, using emotion and voice recognition algorithms to create interaction. It is equipped with high-level interfaces and features for multimodal communication with the humans around it [2].

Pepper has many capabilities, among which mapping and navigation, object detection, hearing, speech, and face detection [3]. Pepper is a humanoid robot, meaning it is designed to have a physical human appearance. Its sound and speech recognition capabilities yield good results, even with several accents. However, its built-in navigation system is unreliable, which makes it hard to reach destinations accurately. Object and face detection also give inconsistent results at times, so Pepper can still be improved in those areas [3].

Pepper uses facial recognition to pick up emotions on human faces, like sadness or hostility, and it uses voice recognition to hear concern. It has tools like age detection and basic emotion recognition embedded into its framework [3]. It bases the recognition mostly on eye contact, the central part of the face and distance. It can not only detect emotions, but also knows how to respond and react to them appropriately. For example, it will detect sadness based on a person's expression and voice tone and, using its built-in sensors and pre-programmed algorithms, react properly [1]. Several applications of this robot are answering questions, greeting guests and playing with kids in Japanese homes [4].
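To give an impression of how such capabilities are exposed to developers, the sketch below polls Pepper's face detection through the NAOqi Python SDK and has the robot react when a face is seen. This is a minimal illustration only, not code used in this study; the IP address is a placeholder.

  # Minimal sketch: reacting to a detected face via the NAOqi Python SDK (Python 2.7).
  # The IP address is a placeholder and the spoken reaction is purely illustrative.
  import time
  from naoqi import ALProxy

  PEPPER_IP, PORT = "192.168.1.10", 9559

  face = ALProxy("ALFaceDetection", PEPPER_IP, PORT)
  memory = ALProxy("ALMemory", PEPPER_IP, PORT)
  tts = ALProxy("ALTextToSpeech", PEPPER_IP, PORT)

  face.subscribe("FaceDemo", 500, 0.0)        # start the extractor, updating every 500 ms
  try:
      for _ in range(20):                     # watch for roughly ten seconds
          if memory.getData("FaceDetected"):  # empty while no face is seen
              tts.say("Hello, I can see you!")
              break
          time.sleep(0.5)
  finally:
      face.unsubscribe("FaceDemo")            # always release the extractor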

Pepper can also use several gestures while responding to a human, like waving and nodding. It has 12 hours of battery life and can return to its charging station if necessary. It is 1.2 meters tall, has 3 omnidirectional wheels to move smoothly, and has 17 joints for body language [2]. Pepper is designed to be appropriate and acceptable for daily-life interaction with human beings. Some design principles behind Pepper are: a pleasant appearance, safety, affordability, interactivity and good autonomy. The aim was not to make it too exact a human likeness, since the designers wanted to avoid the ‘uncanny valley’ [2].

Introduction

The use of social robots, robots specifically designed for interacting with humans and other robots, has been rising for the past several years. These robots differ from the robots we have grown used to over the past decades, which often perform only specific, dedicated tasks. Social robots are now mostly used in service settings, as companions and as support tools [5][6]. In many promising sectors of application, such as healthcare and education, social robots must be able to communicate with people in ways that are natural and easily understood. To make this human-robot interaction (HRI) feel natural and enjoyable for humans, robots must make use of human social norms [7]. This requirement originates from humans anthropomorphizing robots, meaning that we attribute human characteristics to robots and engage and form relationships with them as if they were human. We do this to make the robot’s behavior familiar, understandable and predictable to us, and to infer the robot’s mental state. However, for this inference to be correct as well as intuitive, the robot’s behavior must be aligned with our social expectations and interpretations of mental states [8].

One very important integrated element of human communication is the use of nonverbal expressions of emotions, such as facial expressions, gaze, body posture, gestures, and actions [7][8]. In human-to-human interaction as well as human-robot interaction, these nonverbal cues support and add meaning to verbal communication, and expressions of emotions specifically help build deeper and more meaningful relations, facilitate engagement and co-create experiences [9]. Besides adding conversational content, it has also been shown that humans can unconsciously mimic the emotional expression of a conversational partner, known as emotional contagion, which helps us empathize with others by simulating their feelings [9][7]. Due to our tendency to anthropomorphize robots, it is possible that emotional contagion also occurs during HRI and can help users feel positive affect while interacting with a social robot [9]. Artificial emotions can be used in social robots to facilitate believable HRI, but also to provide feedback to the user about the robot’s internal state, goals and intentions [10]. Moreover, they can act as a control system through which we learn what drives the robot’s behavior and how it is affected by, and adapts to, different factors over time [10]. Finally, the ability of social robots to display emotions is crucial in forming long-term social relationships, which people will naturally seek due to the anthropomorphic nature of social robots [7].

Altogether, the important role of emotions in human-robot interaction requires us to gather information about how robots can and should display emotions so that humans naturally recognize the intended emotion. A robot can display emotions by combining body posture, motion velocity, facial expressions and vocal signs (e.g. prosody, pitch, loudness), depending heavily on the possibilities offered by the robot’s morphology and degree of anthropomorphism [11][12][13]. Social robots are often more humanoid, increasing anthropomorphism and therefore requiring the robot’s behavior to match its appearance to avoid falling into the uncanny valley, which elicits a feeling of uneasiness or disturbance [11][14]. Some research has already been done on testing the capability of certain social robots to display emotions, resulting in robot-specific guidelines on how to program the display of certain emotions [12][15][16].

Based on these established guidelines for displaying emotions, we can look further into how humans are affected by a robot’s emotional cues during interaction, in a context where we would also expect a human to display emotions, namely while telling an emotional story. Our research takes inspiration from the studies of Van Otterdijk et al. [17] and Bishop et al. [16], in which the robot Pepper was used to deliver either a positive or a negative message accompanied by congruent or incongruent emotional behavior. We extend these studies by taking a different combination of application context and research method: interaction with students in an educational setting rather than healthcare, and interviews to gain a deep understanding rather than surveys. Moreover, students are an important target group for robots, because they represent the future workforce and future innovations. Understanding their needs can help developers design robots that are engaging, user-friendly and educational [18].

More specifically, the research question studied in this paper is: “To what extent does a match between the displayed emotion of a social robot and the content of the robot’s spoken message influence the acceptance of this robot?”. We expect that participants will prefer interacting with the robot when it displays the emotion that fits the content of its message, and that they will be open to more future interactions of this kind with the robot. On the other hand, we expect that a mismatch between the emotion displayed by the robot and the story it is telling will make participants feel less comfortable and therefore less accepting of the robot. Moreover, we expect that the influence of congruent emotion display will be more prominent with a negative message than with a positive one. The main focus of this research is thus on how accepting the students are of the robot after interacting with it, while also gaining insight into potential underlying reasons, such as the amount of trust the students have in the robot and how comfortable they feel when interacting with it. The results could provide insights into the importance of congruent emotion display and into whether robots could be used on university campuses as assistant robots.

Survey and interview questions

Survey

At the start of the experiment, the participants are asked to fill in a survey on LimeSurvey. The questions are listed below.

  1. What is your age in years?
  2. What is your gender?
    • Male
    • Female
    • Non-binary
    • Other
    • Do not want to say
  3. What study program are you enrolled in currently?
  4. In general, what do you think about robots?
  5. Did you have contact with a robot before? Where and when?

Interview

In the study that we propose, the participants take part in the experiment, which is followed by an interview. This interview is done face-to-face with the participant in a quiet room. The interview will be semi-structured, as this allows us to prepare questions in advance while also allowing follow-up questions that are not scripted. This gives a slightly more complete way of answering questions; however, it also makes the interviews slightly harder to code, as some participants may be asked follow-up questions that others are not. The semi-structured design allows for further conversation where necessary, seeking the fine line between a too structured interview that is devoid of free conversation, which makes it difficult to collect enough information, and an unstructured interview with only free conversation, which makes it hard to compare the answers given by the participants [19].

In general, there are several important steps in preparing and conducting an interview. These are the steps that are normally taken:

  1. Design interview questions.
    • Think about who you will interview: in this case, we will interview fellow students. These students are similar to us, but not all of them may understand the same jargon, as not all students are familiar with robots and psychological terms. This is something to keep in mind when deciding upon questions. In addition, the student population is, on average, more intelligent than the general public, allowing for more advanced questions. Furthermore, the students are all from a technical university, creating the expectation that they will have a positive attitude towards the robot, as they are often familiar with how such systems work and the ideas behind them.
    • Think about what kind of information you want to obtain from the interviews: the research question aims to look at three constructs. First, we want to look at the acceptance of the robot by the students. Second, we want to focus on whether students trust the robot and what it says. Third, we would like to focus on how comfortable students are in interacting with the robot. These three constructs need to be included in the interview.
    • Think about why you want to pursue in-depth information around your research topic: the results could be used in robot design for different purposes. As the sample population consists of students, the results will only be applicable to them. This leads to applications like a student counselor, assistive robots on campus and classroom bots. The study does not focus on the generalization of results, which might be work for future research, as elderly people and children might react to the robot in different ways than students would.
  2. Develop an interview guide (what you do during the actual interview, the protocol).
  3. Plan and manage logistics
    • The interviews will be audio-recorded and transcribed using name program. The recordings will be destroyed July 7th 2024 for privacy reasons.
    • The interviews are done one-on-one, where the interviewer has a printed page of the questions with space to make notes.
    • Each interview will be about 10 minutes.


The interview questions are divided into three subjects: attitude, trust and comfort.

Attitude

There are few existing interview questions that regard the attitude towards robots. However, attitude can be divided into several different aspects. The first is general attitude, which covers the general ideas that participants have about robots. The second is the more specific attitude towards one application, as people can have different experiences with different robots and thus different attitudes.

Concerning general attitude, one article we found focused on the expectations that people have of robots and their expectations when confronted with other social robot concepts (Horstmann & Krämer, 2019). The interview included some great baseline questions, which are well suited as the general attitude questions in the demographics survey:

  • In general, what is your attitude towards new technologies?
  • Did you have contact with a robot before? Where and when?

Specific attitude tries to measure the attitude of students towards the robot application tested in the experiment. This is often measured by how accepting they are of the specific robot. Acceptance, like general attitude, is usually measured with survey data, not interviews. Still, there is research that tests the user experience and the attitude of the participant using interview questions. One of the studies we found applied a semi-structured interview to explore the interaction with the robot more deeply. This was part of a larger study, and only 5 questions were included in the interview (Wu et al., 2014). That research focused on assistive technologies for the home, which does not align with the context of school and education. However, the questions are general enough that only a few have to be adjusted to fit the context. The questions included in the research were:

  • What do you think about this experiment?
  • What do you think about the appearance of the robot?
  • What do you think about the interaction with the robot?
  • What do you think about having this type of robot one day?
  • Would you use this kind of robot one day?

The only question that falls a bit out of tone is the first one. In our research the robot performs one general task, which is telling a story. In the study of Wu et al. the robot performed multiple tasks and interacted with the participant more, which would result in more attitudes towards the experiment itself. In addition, question 1 asks about the general attitude towards the experiment; as our study is under some time constraints, it might be a good idea to leave this question out. Question 2 is also not as relevant to our research, although it could give some idea of whether the manipulation worked (people need to perceive the robot as happy/positive, sad/negative or neutral), so it could be a good question to ask; given the time constraints, however, it might be another question to remove. Last but not least, question 4 needs to be adjusted, as the robot application we intend is not about domestic use: the participants will not own the robot, only interact with it. For this reason, and because of the time constraints, question 4 will be removed. When applying these questions, we could phrase them as comparative questions, since we want to compare the different conditions (happy/sad/neutral).

Attitude is a big part of our research, and it often includes aspects of the other concepts (trust and comfort) that we would like to measure. There are multiple questionnaires that used attitude as a basis for their research. One of these studies researched the effect of emotions on virtual character design (del Valle-Canencia et al., 2022). As that research was directed at students, it is applicable to our target group as well. The study used a mix of user-experience-centered questions and more attitude-based questions. A selection of these questions can be found below; the full interview list can be found in the article itself.

Questions from del Valle-Canencia et al.:

  • Briefly describe, in your own words, your emotion (regarding the robot).
  • Do you think the character seen above would be suitable to be ... ?
  • In a virtual assistant, would you prefer ... ?
  • What did you like about your character?


Trust

Trust is a common measure in the evaluation of robot designs. It is divided into trust in the robot itself and trust in the message that the robot tries to convey (Jung et al., 2021). This division can be used here as well. In contrast to attitude, the same paper suggests that trust is best measured through in-depth interviews instead of biometric measurements (Jung et al., 2021). However, these in-depth interviews are never standardized and differ a lot between studies. There are also many studies that measure trust through scales and standardized principles. One of the most well-known scales for measuring human-computer trust was developed by Madsen and Gregor, who found that affective measures are the most reliable measurements of trust (Madsen & Gregor, 2000). Their scale covers nine different factors, each classified as an affective measure, a cognitive measure, or a combination of both. The affective measures are faith, personal attachment and perceived reliability. As the scale includes relatively many items, a selection was made for our research. After selection, four items were left; these were rephrased as open questions for the interview and are listed below:

  • Which robot character did you find the most reliable? And which one the least? Why?
  • Which robot character do you think told the story in the most trustworthy way? Why?
  • Which robot character did you feel the most attracted to?


Comfort


The interview questions that will be asked in between the interactions with Pepper are the following.

Manipulation check (to see whether the story came across as positive)

  1. What was your impression of the story that you heard?
    • Briefly describe, in your own words, the emotions that you felt when listening to the three versions of the story.
    • Which emotion did you think would best describe the story?

Attitude towards the robot:

  1. How did you perceive the feelings that were expressed by the robot?
    • How did the robot convey this feeling?
    • Did the robot do something unexpected?
  2. What did you like/dislike about the robot during each of the three emotional states?
    • What are concrete examples of this (dis)liking?
    • How did these examples influence your feelings about the robot?
    • What were the effects of the different emotional states of Pepper compared to each other?  
      • What was the most noticeable difference?
  3. Which of the three robot interactions do you prefer?
    • Why do you prefer this emotional state of the robot?
    • If sad/happy was chosen, did you think the emotion had added value compared to the neutral state?
    • If neutral chosen, why did you not prefer the expression of the matching emotion?

Trust:

  1. Which emotional state did you find the most trustworthy? And which one the least trustworthy?  
    • Why was this emotional state the most/least trustworthy?
      • What did the robot do to cause your level of trust?
    • What did the other emotional states do to be less trustworthy?

Comfort:

  1. Which of the three emotional states of the robot made you feel the most comfortable in the interaction?  
    • Why did this emotional state make you feel comfortable?
    • What effect did the other emotional states have?

General:

  1. Do you think Pepper would be suitable as a campus assistant robot and why (not)?
    • If not, in what setting do you think it would be suitable to use Pepper?
    • What tasks do you think Pepper could do on campus?
  2. Are there any other remarks that you would like to leave, that were not touched upon during the interview, but that you feel are important?

Method

Design

This research is an exploratory study with a within-subject design: all participants are exposed to all 6 conditions of a 2 (positive/negative emotional message) x 3 (happy/neutral/sad emotion displayed by the robot) experiment. These six conditions differ in whether the content of the story (positive or negative) matches the emotion (happy, neutral or sad) displayed by the robot. This scheme can be seen in the table below.

Table x: The six conditions in the experiment
Story / displayed emotion Happy Neutral Sad
Positive Congruent Emotionless Incongruent
Negative Incongruent Emotionless Congruent

The independent variables in this experiment are the displayed emotion and the kind of story (positive or negative). The dependent variable is the acceptance of the robot, measured through qualitative analysis of the interviews held with the participants during the experiment.
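To make the congruence scheme in the table above concrete, the short sketch below enumerates the six conditions and derives their labels. It is purely illustrative and not part of the study's analysis code.

  # Illustrative only: enumerate the six conditions of the 2 x 3 within-subject design
  # and derive the congruence label shown in the table above.
  from itertools import product

  STORIES = ["positive", "negative"]
  EMOTIONS = ["happy", "neutral", "sad"]
  MATCH = {"positive": "happy", "negative": "sad"}  # which emotion fits which story

  def label(story, emotion):
      if emotion == "neutral":
          return "emotionless"
      return "congruent" if MATCH[story] == emotion else "incongruent"

  for story, emotion in product(STORIES, EMOTIONS):
      print("%-8s story, %-7s robot -> %s" % (story, emotion, label(story, emotion)))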

Participants

The study investigates the viewpoint of students; therefore, the participants were gathered from the TU/e. We chose this specific group because of their generally higher openness towards social robots and the increased likelihood that this group will deal with social robots a lot in the near future (Manzi et al., 2021).

10 participants took part in this experiment. There were .. men and .. women who completed the study. Their age ranged between .. and .. years. They are all students at the Eindhoven University of Technology and volunteers, meaning they were not compensated financially for participating in this study. They were gathered from the researchers’ own circle of fellow students, across a variety of study programs. All participants were allocated to all six conditions of the study, so in total there were .. participants per condition.

Materials

For this experiment, the robot Pepper, manufactured by SoftBank Robotics, will be used.

This robot was chosen because Pepper is a well-known robot on which multiple studies have been done and which is already being applied in different settings, such as hospitals and customer service. Based on young adults' preferences for robot design, Pepper would also be most useful in student settings, given its human-like shape and ability to engage emotionally with people (Björling et al., 2020). The experiment itself will be conducted in one of the robotics labs on the TU/e campus, where the robot Pepper is readily available.

Figure X: Example of Pepper's behavior for (a) happy, (b) neutral, and (c) sad condition of displayed emotion  (Bishop et al., 2019).

Pepper will be programmed to display happiness and sadness through pitch, body posture and LED eye color. The behavior that Pepper will display is shown in the table below and the figure above, based on the research of Bishop et al. (2019) and Van Otterdijk et al. (2021). Facial expressions cannot be used, since Pepper's morphology does not allow for them.

Table x: The different behaviors of Pepper
Happy Neutral Sad
Pitch of voice High pitch Average of happy and sad condition Low pitch
Body posture Raised chin, extreme movements, upwards arms Average of happy and sad condition Lowered chin, small movements, hanging arms
LED eye color Yellow White Light-blue
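As an illustration of how these three behaviors could be driven in code, the sketch below combines voice pitch, eye color and head posture through the NAOqi Python SDK. The IP address, pitch percentages and head angles are assumptions made for this sketch, not the exact values used on Pepper in this study.

  # Minimal sketch of the three displayed-emotion conditions (NAOqi Python SDK, Python 2.7).
  # IP address, pitch percentages and head angles are illustrative assumptions.
  from naoqi import ALProxy

  PEPPER_IP, PORT = "192.168.1.10", 9559

  #             voice pitch (%)  eye color (0x00RRGGBB)  HeadPitch (rad, + = chin down)
  EMOTIONS = {
      "happy":   (120,           0x00FFFF00,             -0.20),  # high pitch, yellow, raised chin
      "neutral": (100,           0x00FFFFFF,              0.00),  # average pitch, white
      "sad":     ( 80,           0x00ADD8E6,              0.25),  # low pitch, light blue, lowered chin
  }

  def tell_story(emotion, story_text):
      tts = ALProxy("ALTextToSpeech", PEPPER_IP, PORT)
      leds = ALProxy("ALLeds", PEPPER_IP, PORT)
      motion = ALProxy("ALMotion", PEPPER_IP, PORT)

      pitch, rgb, head = EMOTIONS[emotion]
      motion.wakeUp()                            # stiffen the motors before moving
      leds.fadeRGB("FaceLeds", rgb, 1.0)         # fade the eye LEDs over one second
      motion.setAngles("HeadPitch", head, 0.1)   # tilt the head for the body posture
      tts.say("\\vct=%d\\ %s" % (pitch, story_text))  # \vct sets voice pitch in percent

  tell_story("sad", "While shooting a nature documentary on Svalbard ...")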

During the experiment, one laptop will be used, from the control room, to direct Pepper to tell the different stories with the different emotions. Two other researchers will be present in the room with the participants and Pepper to assist if necessary and to take notes on their laptops, so two other laptops will be used there. During the simultaneous interviews, each with one researcher and one participant, mobile phones will be used as recording devices.

Two interviews will be held: one after the first 3 conditions, in which the story is the same, and one after the second 3 conditions. These interviews are practically identical, except for one extra question (question 7) in the second interview. The interview questions include a short explanation. The final interview questions can be found in the interview section, and the (de)briefing can be found in the research protocol.

Procedure

The participants will start the experiment by listening to a researcher explaining the experiment to them. There will be two rounds with five participants each, leading to 10 participants in total. First, the participants will listen to Pepper, who tells the group a story about a polar bear, either a positive or a negative one. Pepper tells this story three times, each time with a different emotion: ‘happy’, ‘sad’ or ‘neutral’. These three tellings take approximately 6 minutes in total. When Pepper is finished, each of the five participants is interviewed individually by one of the researchers. These interviews are held simultaneously and last approximately 10 minutes. After completing the interview, the participants go back to the room where they started and listen to another polar bear story: those who heard the positive story now hear the negative one and vice versa. After Pepper has finished this story, the same interview is held under the same circumstances. After completion of the interview, there is a short debriefing. In total, the experiment will last 50-60 minutes.


The positive and negative stories that the robot will tell are fictional stories about polar bears, inspired by the study of Bishop et al. (2019). The content of the stories is based on non-fictional internet sources and rewritten to best fit our purpose. We have decided to keep the stories fictional and about animals rather than humans, because of the lower risk of doing emotional damage to participants through feelings elicited by personal circumstances. The story will be accompanied by an image of polar bears displayed on Pepper's screen to make the story more visual (see the figures below).

Figure X: Picture to accompany the positive story (Tribune, 2018)

The positive story is shown below. It is an adaptation of Cole (2021).

“When Arctic gold miners were working on their base, they were greeted by a surprising guest, a young lost polar bear cub. It did not take long for her to melt the hearts of the miners. As the orphaned cub grew to trust the men, the furry guest soon felt like a friend to the workers on their remote working grounds. Even more surprising, the lovely cub loved to hand out bear hugs. Over the many months that followed, the miners and the cub would create a true friendship. The new furry friend was even named Archie after one of the researcher’s children. When the contract of the gold miners came to an end, the polar bear cub would not leave their side, so the miners decided to arrange a deal with a sanctuary in Moscow, where the polar bear cub would be able to live a happy life in a place where its new-found friends would come to visit every day.”

Figure X: Picture to accompany the negative story (The Telegraph, 2012)

The negative story is shown below. This story is an adaptation of Alexander (n.d.).

"While shooting a nature documentary on the Arctic Ocean island chain of Svalbard, researchers encountered a polar bear family of a mother and two cubs. During the mother's increasingly desperate search for scarce food, the starving family was forced to use precious energy swimming between rocky islands due to melting sea ice. This mother and her cubs should have been hunting on the ice, even broken ice. But they were in water that was open for as far as the eye could see. The weaker cub labors trying to keep up and the cub strained to pull itself ashore and then struggled up the rock face. The exhausted cub panicked after losing sight of its mother and its screaming could be heard from across the water. That's the reality of the world they live in today. To see this family with the cub, struggling due to no fault of their own is extremely heart breaking.”

Sources

  1. Mlot, S. (2014, June 5). “Pepper” robot reads, reacts to emotions. PCMAG. https://www.pcmag.com/news/pepper-robot-reads-reacts-to-emotions
  2. Pandey, A. K., & Gelin, R. (2018). A mass-produced sociable humanoid robot: Pepper: the first machine of its kind. IEEE Robotics & Automation Magazine, 25(3), 40–48. https://doi.org/10.1109/mra.2018.2833157
  3. Mishra, D., Romero, G., Pande, A., Bhuthegowda, B. N., Chaskopoulos, D., & Shrestha, B. (2023). An exploration of the Pepper robot’s capabilities: unveiling its potential. Applied Sciences, 14(1), 110. https://doi.org/10.3390/app14010110
  4. Glaser, A. (2016, June 7). Pepper, the emotional robot, learns how to feel like an American. WIRED. https://www.wired.com/2016/06/pepper-emotional-robot-learns-feel-like-american/
  5. Biba, J. (2023, March 10). What is a social robot? Built In. https://www.nature.com/articles/s41598-020-66982-y#citeas
  6. Borghi, M., & Mariani, M. (2022, September). The role of emotions in the consumer meaning-making of interactions with social robots. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S0040162522003687
  7. Kirby, R., Forlizzi, J., & Simmons, R. (2010). Affective social robots. Robotics and Autonomous Systems, 58(3), 322–332. https://doi.org/10.1016/j.robot.2009.09.015
  8. Breazeal, C. (2004). Designing Sociable Robots. MIT Press. https://doi.org/10.7551/mitpress/2376.001.0001
  9. Chuah, S. H. W., & Yu, J. (2021). The future of service: The power of emotion in human-robot interaction. Journal of Retailing and Consumer Services, 61, 102551. https://doi.org/10.1016/j.jretconser.2021.102551
  10. Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3–4), 143–166. https://doi.org/10.1016/S0921-8890(02)00372-X
  11. Marcos-Pablos, S., & García-Peñalvo, F. J. (2021). Emotional intelligence in robotics: A scoping review. In Advances in Intelligent Systems and Computing (pp. 66–75). https://doi.org/10.1007/978-3-030-87687-6_7
  12. Miwa, H., Itoh, K., Matsumoto, M., Zecca, M., Takanobu, H., Roccella, S., Carrozza, M. C., Dario, P., & Takanishi, A. (2004). Effective emotional expressions with emotion expression humanoid robot WE-4RII. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3, 2203–2208. https://doi.org/10.1109/IROS.2004.1389736
  13. Crumpton, J., & Bethel, C. L. (2015). A survey of using vocal prosody to convey emotion in robot speech. International Journal of Social Robotics, 8(2), 271–285. https://doi.org/10.1007/s12369-015-0329-4
  14. Wang, S., Lilienfeld, S. O., & Rochat, P. (2015). The uncanny valley: Existence and explanations. Review of General Psychology, 19(4), 393–407. https://doi.org/10.1037/gpr0000056
  15. Johnson, D. O., Cuijpers, R. H., & van der Pol, D. (2013). Imitating human emotions with artificial facial expressions. International Journal of Social Robotics, 5(4), 503–513. https://doi.org/10.1007/s12369-013-0211-1
  16. Bishop, L., Van Maris, A., Dogramadzi, S., & Zook, N. (2019). Social robots: The influence of human and robot characteristics on acceptance. Paladyn, Journal of Behavioral Robotics, 10(1), 346–358. https://doi.org/10.1515/pjbr-2019-0028
  17. Van Otterdijk, M., Barakova, E. I., Torresen, J., & Neggers, M. E. M. (2021). Preferences of seniors for robots delivering a message with congruent approaching behavior. https://doi.org/10.1109/ARSO51874.2021.9542833
  18. Manzi, F., Sorgente, A., Massaro, D., Villani, D., Di Lernia, D., Malighetti, C., Gaggioli, A., Rossignoli, D., Sandini, G., Sciutti, A., Rea, F., Maggioni, M. A., Marchetti, A., & Riva, G. (2021). Emerging adults’ expectations about the next generation of robots: Exploring robotic needs through a latent profile analysis. Cyberpsychology, Behavior, and Social Networking, 24(5), 315–323. https://doi.org/10.1089/cyber.2020.0161
  19. Pollock, T. (2022, June 14). The difference between structured, unstructured & semi-structured interviews. Oliver Parks. https://www.oliverparks.com/blog-news/the-difference-between-structured-unstructured-amp-semi-structured-interviews

Appendix

The complete research protocol can be found via the following link: