Nakazawa, A., Nakaoka, S., Ikeuchi, K., & Yokoi, K. (2002). Imitating Human Dance Motions through Motion Structure Analysis

From Control Systems Technology Group
Latest revision as of 12:11, 25 March 2021

Importing human motions into a robot through visual observation is one of the major open problems in humanoid robot research. Solving it would enable robots to imitate human motions easily.

Human motions can be decomposed into variations of simple motions. These are called “motion primitives”, and combined in some structure they make up more complex motions. The goal of the project is to import human dance motions into humanoid robots.

Detecting motion primitives and the structure of the dance moves

The human dance motion is acquired with a motion capture system consisting of 8 cameras and a PC cluster. The cameras are arranged to surround the performer, and lighting markers are attached to the desired positions on the human body. While the actor performs the dance motion, only the images are recorded; depth measurement is done afterwards by matching the markers between the camera images.

To detect the motion primitives, attention is paid to the velocity of the end effectors (hands and feet), because it marks the human motion segments: the start and end points of the motion primitives.
The following 15 measurement points are used: hands (L, R), elbows (L, R), shoulders (L, R), head, hip, body center, waists (L, R) and feet (L, R).
The analysis consists of the following steps:

  1. Define the body center coordinate system.
  2. Coordinate conversion of target points.
    The target points (both hands and feet) are converted into the body center coordinate system.
  3. Preliminary segmentation. Calculate the velocities of the target points and detect the local minimum.
  4. Evaluate the correlation between the segments.
  5. Cluster and label the segments.
    Segments in which the target points trace a similar locus receive the same label.
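The preliminary segmentation in step 3 can be sketched as follows — a minimal Python sketch, assuming marker positions sampled at a fixed rate; `segment_by_velocity` and the toy trajectory are illustrative, not code from the paper:

```python
import numpy as np

def segment_by_velocity(positions, dt):
    """Preliminary segmentation: split a trajectory at strict local
    minima of the end-effector speed (hypothetical helper)."""
    # positions: (T, 3) array of one target point in body-centre coordinates
    velocity = np.diff(positions, axis=0) / dt          # (T-1, 3)
    speed = np.linalg.norm(velocity, axis=1)            # scalar speed per frame
    # A frame is a segment boundary when its speed is a strict local minimum.
    boundaries = [i for i in range(1, len(speed) - 1)
                  if speed[i] < speed[i - 1] and speed[i] < speed[i + 1]]
    # Split the sequence at the boundaries into candidate motion segments.
    cuts = [0] + boundaries + [len(positions)]
    return [positions[a:b] for a, b in zip(cuts[:-1], cuts[1:])]

# Toy trajectory: the hand swings out, back through zero, and out the other way,
# so the speed has two local minima (at the turning points).
t = np.linspace(0, 2, 200)
hand = np.stack([np.sin(np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
segments = segment_by_velocity(hand, dt=t[1] - t[0])
print(len(segments))   # → 3
```

The correlation and clustering of the resulting segments (steps 4 and 5) would then operate on these candidate segments.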

So the whole motion sequence is segmented and clustered into segments in which a target point draws the same trajectory. These segments are called the ‘minimum motion segments’.
Furthermore, dance motions contain many longer motion units. They are found using the following steps:

  1. Find sequences of minimum motion segments that appear frequently.
    These are registered as ‘higher level motion segments’.
  2. Find the motion primitives among the different target points.
  3. The 3D trajectories of motion segment sequences that carry the same label are equalized.

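Step 1 above — finding sequences of labelled minimum motion segments that appear frequently — can be sketched as a simple n-gram count over the label stream; the helper `frequent_subsequences` and the toy labels are hypothetical, and the paper's actual matching procedure may differ:

```python
from collections import Counter

def frequent_subsequences(labels, length=2, min_count=2):
    """Count label sequences of a given length and keep those that
    repeat; repeating sequences become candidate 'higher level
    motion segments' (illustrative sketch)."""
    grams = Counter(tuple(labels[i:i + length])
                    for i in range(len(labels) - length + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

# Toy labelled segment stream: the pair ('A', 'B') occurs three times.
stream = ['A', 'B', 'C', 'A', 'B', 'D', 'A', 'B']
print(frequent_subsequences(stream))   # → {('A', 'B'): 3}
```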
Generating new motions from motion primitives

Assuming two motion primitives are selected to be concatenated, the following steps generate the transition between them:

  1. Set up a support leg during the transition. The latter primitive is translated so that this leg stays in the same position.
  2. Calculate the positions of the unsupported foot, waist, body and neck during transitions.
  3. Linear interpolation is applied to the waist and neck coordinate systems.
  4. Use the minimum joint angle jerk model for the movements of the arms and feet.
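Steps 3 and 4 can be sketched with the standard minimum-jerk polynomial 10τ³ − 15τ⁴ + 6τ⁵, which gives zero velocity and acceleration at both ends of the transition; applying it to a single scalar angle is an illustrative simplification, not the paper's full joint-space formulation:

```python
import numpy as np

def minimum_jerk(x0, x1, n):
    """Minimum-jerk profile between two values: position follows
    10*tau**3 - 15*tau**4 + 6*tau**5, so velocity and acceleration
    vanish at both endpoints (standard formula)."""
    tau = np.linspace(0.0, 1.0, n)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (x1 - x0) * s

def linear_blend(x0, x1, n):
    """Plain linear interpolation, as used for the waist and neck."""
    tau = np.linspace(0.0, 1.0, n)
    return x0 + (x1 - x0) * tau

arm = minimum_jerk(0.0, 1.0, 11)      # e.g. a shoulder angle in radians
waist = linear_blend(0.2, 0.4, 11)    # e.g. a waist coordinate in metres
print(round(arm[5], 3), round(waist[5], 3))   # → 0.5 0.3
```

Both profiles reach the midpoint value halfway through, but the minimum-jerk curve starts and ends at rest, which is why it suits the arms and feet.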

Presenting dance motions to a humanoid robot

While performing the moves, the robot could not keep its balance. To keep the robot standing, its ZMP (Zero Moment Point), which indicates the balanced force point, must stay within the support area enclosed by its soles. Mostly because of the arm movements, the ZMP moved outside this area. To compensate, the movements had to be modified so that the ZMP stayed within the support area.
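The support-area condition can be sketched as a point-in-convex-polygon test on the ZMP; the rectangle dimensions and the function `zmp_inside_support` are illustrative assumptions, not the paper's compensation method:

```python
def zmp_inside_support(zmp, polygon):
    """Check whether the ZMP lies inside a convex support area given
    as a counter-clockwise polygon, using the cross-product test
    (illustrative sketch)."""
    x, y = zmp
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # For a CCW polygon, the point must lie left of (or on) every edge.
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

# Support area enclosed by the soles, as an assumed CCW rectangle (metres).
sole = [(-0.05, -0.10), (0.05, -0.10), (0.05, 0.10), (-0.05, 0.10)]
print(zmp_inside_support((0.0, 0.02), sole))   # → True: balanced
print(zmp_inside_support((0.12, 0.02), sole))  # → False: ZMP left the area
```

A motion modification scheme would adjust the trajectory whenever this test fails along the planned ZMP trajectory.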

Conclusion

There were two problems when importing human motions into the robot: the limits on joint angles and velocities, and keeping balance while moving. For the latter problem, a new method using ZMP trajectories was applied. Final simulation results showed that this method preserves both the robot’s balance and the shape of the original motion.