H. Ishiguro, T. Ono, M. Imai, T. Maeda, T. Kanda, R. Nakatsu. (2001). Robovie: An Interactive Humanoid Robot.

Summary

The paper describes a robot, Robovie, built by the authors, which they use to investigate which aspects facilitate improved communication between robots and humans. Two key observations are (1) the importance of physical expressions using the body and (2) the effectiveness of the robot’s autonomy for human recognition of the robot’s utterances. The experiment focuses on interactions in which the robot teaches a route direction to a subject, and investigates the subject’s gestures and their level of utterance understanding while the robot uses several different gestures during the teaching process. The authors summarize the experimental results as follows:


- Many and varied robot behaviors induce a variety of human communicative gestures; the subject’s gestures increase through entrainment and synchronization with the robot.

- The mutual gestures help the subject to understand the robot’s utterance.

- The joint viewpoint represented by the robot’s gestures allows the subject to understand the utterance.


The key insight to take away is the importance of a shared joint viewpoint in human-robot communication. This is also captured by the phenomenon of “mutual manifestness”, which denotes a mental state in which two or more humans recognize the same situation or recall similar experiences. Further experiments were performed that, alongside mutual manifestness, tested the impact of eye contact. From these psychological experiments, the authors obtained the following four ideas (a small illustrative sketch follows the list):

- Rich robot behaviors induce various human communicative gestures that help utterance understanding.

- Attention expression by the robot guides the human’s focus to the robot’s point of attention.

- Eye contact by the robot indicates the robot’s intention to communicate with the human.

- Sharing a joint viewpoint and a proper positional relationship establishes a situation in which the human can easily understand the robot’s utterance.
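
To make these ideas concrete, the following is a minimal, hypothetical Python sketch of how they might be combined in a single route-teaching step. All class, method, and behavior names are illustrative assumptions and do not come from the paper or from Robovie’s actual software.

from dataclasses import dataclass
import random


@dataclass
class Subject:
    """Hypothetical stand-in for the human interaction partner."""
    position: tuple          # (x, y) location in the room
    gaze_target: str = "elsewhere"


class InteractionSketch:
    """Illustrative interaction loop encoding the four ideas above:
    rich behaviors, attention expression, eye contact, and a shared
    joint viewpoint with a proper positional relationship."""

    # A pool of varied communicative behaviors (idea 1: rich behaviors
    # induce human gestures through entrainment and synchronization).
    GESTURES = ["nod", "point_left", "point_right", "wave", "tilt_head"]

    def make_eye_contact(self, subject: Subject) -> None:
        # Idea 3: eye contact signals the intention to communicate.
        subject.gaze_target = "robot"
        print("[eye contact] looking at the subject's face")

    def express_attention(self, target: str) -> None:
        # Idea 2: show where the robot is attending, guiding the human's focus.
        print(f"[attention] turning head and eyes toward {target}")

    def share_viewpoint(self, subject: Subject, landmark: str) -> None:
        # Idea 4: stand beside the subject and face the same landmark so both
        # share a joint viewpoint before the utterance is produced.
        print(f"[positioning] moving next to {subject.position}, facing {landmark}")

    def teach_route_step(self, subject: Subject, landmark: str, instruction: str) -> None:
        # One step of route-direction teaching, combining all four ideas.
        self.make_eye_contact(subject)
        self.express_attention(landmark)
        self.share_viewpoint(subject, landmark)
        gesture = random.choice(self.GESTURES)
        print(f"[utterance] '{instruction}' while performing gesture: {gesture}")


if __name__ == "__main__":
    robot = InteractionSketch()
    human = Subject(position=(1.0, 2.0))
    robot.teach_route_step(human, landmark="the corridor entrance",
                           instruction="Turn right at the end of this corridor")

The ordering used in the sketch (eye contact, attention expression, shared viewpoint, then gesture plus utterance) simply mirrors the list above; the paper does not prescribe a particular ordering.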