RoPod/Tutorials/Localization


To localize the robot, the localization plugin of the Environment Descriptor (ED) is currently used; see the ED tutorials for more details. Make sure you have completed ED tutorials 1-5 to understand the basics.

To localize the robot, use the following configuration file on the robot:

world:
- type: robotics_testlabs
  pose: { x: 0, y: 0, z: 0 }

plugins:
  - name: gui_server
    lib: libed_gui_server_plugin.so
  - name: localization
    lib: libed_localization_plugin.so
    parameters:
      robot_name: robot    # the robot will also be in the world model. This is the
                           # id the robot entity will get in ED
      initial_pose_topic: /initialpose
      num_particles: 500   # maximum number of particles to use
      initial_pose:        # where does the robot start (in map frame)?
        x: 0
        y: 0
        rz: 0              # rotation
      laser_model:
        topic: /pico/laser   # Laser topic
        num_beams: 100         # Max number of beams used per particle (evenly spread)
        z_hit: 0.95            # \
        sigma_hit: 0.2         # |
        z_short: 0.1           # |-- These are all parameters of the probabilistic laser
        z_max: 0.05            # |    model. See 'Probabilistic Robotics' for more info.
        z_rand: 0.05           # |
        lambda_short: 0.1      # /
        range_max: 10
        min_particle_distance: 0.01            # Particles that are too close together will
        min_particle_rotation_distance: 0.02   # be combined (resulting in less particles)
      odom_model:
        map_frame: map
        odom_frame: /pico/odom
        base_link_frame: /pico/base_link
        alpha1: 0.2   # rot -> trans + strafe   # \
        alpha2: 0.2   # trans -> rot            # |-- These are all parameters of the
        alpha3: 0.2   # trans -> trans          # |   probabilistic odom model. See
        alpha4: 0.2   # rot -> rot              # |   'Probabilistic Robotics' for more info.
        alpha5: 0.2   # trans -> strafe         # /
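
For reference, the z_* weights in the laser_model section above mix the four components of the beam-based range finder model described in 'Probabilistic Robotics' (this is the model as given in the book; the plugin implementation may differ slightly):

 p(z | x, m) = z_hit * p_hit(z) + z_short * p_short(z) + z_max * p_max(z) + z_rand * p_rand(z)

Here sigma_hit is the standard deviation of the Gaussian measurement noise in p_hit, and lambda_short is the decay rate of the exponential distribution p_short that models unexpected obstacles between the robot and the mapped environment.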

If necessary, replace the world type with the proper name of your own map created using gmapping; a sketch is given below. Now start the communication with the sensors, run ED using this configuration file, make sure you can visualize the topics on your laptop, and load rviz using the configuration file placed on GitHub.
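
For example, if the map you built with gmapping is available to ED under the model name my_map (a hypothetical name, substitute your own), the world section would become:

world:
- type: my_map
  pose: { x: 0, y: 0, z: 0 }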

So, on the robot execute the following:

pstart
rosrun ed ed <name of config file>
rosrun ed_gui_server ed_rviz_publisher

On your own laptop:

pico-core
rosrun rviz rviz -d <path-to-config-file>/ED-config-localization.rviz

Now click on the "2D pose estimate" button in rviz (at the top of the rviz window) and click on the position in your map where pico is approximately located. This publishes the position in the map on the /initialpose topic. Note: at the time of writing this did not always come through, possibly due to network problems.
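
If the rviz button does not get through to the robot, a possible workaround is to publish the initial pose directly on the robot. This is a minimal sketch, assuming the localization plugin accepts the standard geometry_msgs/PoseWithCovarianceStamped message that rviz publishes on /initialpose; fill in the approximate x, y and orientation (as a quaternion) of pico in the map frame:

rostopic pub --once /initialpose geometry_msgs/PoseWithCovarianceStamped \
  '{header: {frame_id: "map"}, pose: {pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {z: 0.0, w: 1.0}}}}'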

Start driving around (for example using the pico-teleop) and you will see the position and orientation of the robot being tracked in rviz. If the given position was not accurate enough, or the localization turns out to be bad, provide the initial position of the robot again using the "2D pose estimate" button.
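
To check the localization result without rviz, you can also inspect the transform from the map frame to the robot frame on the robot itself. This assumes the frame names from the configuration above (the leading slash may or may not be needed, depending on your tf setup):

rosrun tf tf_echo map pico/base_link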