Embedded Motion Control 2015 Group 7


Revision as of 20:05, 9 May 2015

About the group

Student information

#   Name                  Student id   E-mail
1   Camiel Beckers        0766317      c.j.j.beckers@student.tue.nl
2   Brandon Caasenbrood   0772479      b.j.caasenbrood@student.tue.nl
3   Chaoyu Chen           0752396      c.chen.1@student.tue.nl
4   Falco Creemers        0773413      f.m.g.creemers@student.tue.nl
5   Diem van Dongen       0782044      d.v.dongen@student.tue.nl
6   Mathijs van Dongen    0768644      m.c.p.v.dongen@student.tue.nl
7   Bram van de Schoot    0739157      b.a.c.v.d.schoot@student.tue.nl
8   Rik van der Struijk   0739222      r.j.m.v.d.struijk@student.tue.nl
9   Ward Wiechert         0775592      w.j.a.c.wiechert@student.tue.nl

Updates Log

28-04: Updated the page with planning and initial design.

29-04: Updated page with initial design PDF (not yet "wikistyle")

07-05: Small changes in wiki page

09-05: Changes in design (incl. composition pattern), primary ideas and evaluating first experiment

10-05: Changes in design (incl. composition pattern), primary ideas and evaluating first experiment

Planning

Week 1: 20 April - 26 April

  • Introduction
  • Brainstorming initial design
  • 27-04 12:00: Deadline initial design

Week 2: 27 April - 3 May

  • Download and Install Ubuntu
  • Start and Finish C/C++ tutorials
  • Additional Brainstorming Design

Week 3: 4 May - 10 May

  • Evaluating initial design
  • Composition pattern
  • Initial Presentation
  • 6 May: First presentation design
  • Implement point-to-point movement
  • 8 May: First experiment with Pico
  • Implement edge detection, collision avoidance
  • Initial Git structure

Week 4: 11 May - 17 May

  • Implement edge detection
  • Implement obstacle avoidance
  • Finish initial software for Pico, with primary functions for corridor competition
  • 12 May: Second experiment Pico
  • 13 May: Corridor Competition

Introduction

The goal of the "A-MAZE-ING PICO" challenge is to design and implement a robotic software system that lets the robot Pico or Taco autonomously solve a maze in the robotics lab.

[Image: Robots Pico and Taco]


Initial Design

Proper software design requires the following points to be taken into account: requirements, functions, components, specifications, and interfaces. These initial points are discussed below in a top-down structure.

Requirements

Requirements specify and restrict the possible ways to reach a certain goal. In order for the robot to autonomously navigate through and solve a maze, the software of the robot should meet the following requirements:

  • Move from A to B
  • No collisions
  • Detection
  • Compatibility with both robots
  • Fully autonomous

Functions

To satisfy the said requirements, one or multiple functions are designed and assigned to the requirements. Figure 1.1 describes which functions are considered necessary to meet a specific requirement.

[Image will follow soon]

The function ’Build world model’ describes the visualisation of the robot’s environment. ’Path memory’ corresponds to memorizing the path the robot has travelled. ’Sensor calibration’ can be applied to ensure similar settings on both robots, so that the same software can be used for both.
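As an illustration of the ’Path memory’ function described above, the following sketch shows one possible way to store decision locations so that an alternative branch can be tried later. All names and types are assumptions made for illustration, not the group's actual code.

```cpp
#include <vector>

// Illustrative sketch of a path memory: it records the locations where
// the robot made a decision (e.g. at a junction), so that an alternative
// branch can be tried if the maze is not solved with the first choice.
struct DecisionPoint {
    double x, y;      // position in the world model [m]
    int chosenBranch; // which exit was taken at this junction
};

class PathMemory {
public:
    void record(double x, double y, int branch) {
        points_.push_back({x, y, branch});
    }
    // When a dead end is reached, backtrack to the most recent
    // decision point and forget it, so another branch can be tried.
    DecisionPoint popLastDecision() {
        DecisionPoint p = points_.back();
        points_.pop_back();
        return p;
    }
    bool empty() const { return points_.empty(); }
private:
    std::vector<DecisionPoint> points_;
};
```

Backtracking then amounts to popping decision points until an untried branch is found, which matches the idea of saving data only at decision locations.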

Components and Specifications

Components, both hardware and software, are used to realise the functions. Figure 1.2 shows which components, hardware as well as software, are assigned to each function.

[Image will follow soon]

If the requirements are to be satisfied within certain boundaries, desired specifications should be assigned to the components realising the functions. The specifications describe what the components must at least be able to do. The initial hardware specifications correspond with a certain accuracy of a part of the robot. The initial software specifications mainly correspond with the amount of data that is stored and applied, which affects the speed of the calculations.

Components and Specifications

Component                                    Specification
Motor left / Motor right                     Accuracy of 1 cm
Laser sensor / Motor encoder right           Accuracy of 1 cm after conversion to coordinates
Motor encoder left / Motor encoder right     Accuracy of 5 counts
Kinect encoder                               Accuracy of 5 counts
Software: voltage to 2D location             Data point separation 0.25 times the width of the robot
Software: 2D location to wall coordinates    Data point separation 0.25 times the width of the robot
Software: Odometry                           Data point separation 0.25 times the width of the robot
Software: Convert to world model             Data point separation 0.25 times the width of the robot
Software: Path memory                        Save data points at decision locations
Software: Database of situations             Not specified
Software: Dijkstra / Tremaux                 Not specified
Software: Wall follower                      Not specified
Software: Calibration of sensors             Accuracy of 1 cm of sensors after calibration

Note that the accuracy of the left and right motor in centimeters is based on the accuracy of the encoders. Because the dimensions of the maze are unknown, an initial accuracy of 1 cm for the motors is applied. The data point separation of 0.25 times the width of the robot is initially chosen such that a sufficient amount of data is available to, for instance, recognize corners or a door that has recently opened. A larger data point separation would result in a loss of accuracy, whilst a smaller separation would increase accuracy at the cost of more processing time (more data has to be taken into account for certain decisions and processes).
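The spacing rule above can be sketched with a small helper. The 0.4 m robot width used in the example below is a placeholder value, not a measured dimension of Pico.

```cpp
#include <cmath>

// Sketch of the spacing rule described above: data points are kept
// 0.25 robot-widths apart.
double dataPointSeparation(double robotWidth) {
    return 0.25 * robotWidth;
}

// Number of data points needed to cover a wall segment of a given
// length at that separation (end points included).
int pointsForSegment(double segmentLength, double robotWidth) {
    double sep = dataPointSeparation(robotWidth);
    return static_cast<int>(std::ceil(segmentLength / sep)) + 1;
}
```

For a hypothetical robot width of 0.4 m the separation is 0.1 m, so representing a wall segment of about one metre takes on the order of ten data points, which gives a feel for the accuracy-versus-processing trade-off mentioned above.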

Interface

The interface describes the communication between the specified contexts and functions necessary to perform the task. The contexts and functions are represented by blocks in Figure 1.3 and correspond with the requirements, functions, components, and specifications previously discussed.

The robot context describes the abilities the robots Pico and Taco have in common, which in this case is purely hardware. The environment context describes the means of the robot to visualize the surroundings, which is done via the required world model. The task context describes the control necessary for the robot to deal with certain situations (such as a corner, intersection or a dead end). The skill context contains the functions which are considered necessary to realise the task.

[Image will follow soon]

Note that the block ’Decision making’ is considered feedback control: it acts based on the situation recognized by the robot. ’Strategic algorithm’, however, corresponds with feedforward control: based on the algorithm, certain future decisions can for instance be ruled out or given a higher priority.

To further illustrate the interface of Figure 1.3, consider the following example in which the robot approaches a T-junction. The robot navigates through a corridor, where information from both the laser sensors and the Kinect is used to avoid collisions with the walls. At the end of the corridor, the robot approaches a T-junction. Using the data gathered from the laser sensors and the Kinect, the robot is able to recognize this T-junction. The strategic algorithm determines the best decision in this situation based on the lowest cost, and the resulting decision controls the hardware of the robot to perform the correct action. The T-junction is considered a ’decision location’, which is saved in case the robot cannot solve the maze with the initial decision at this location. During the movement of the robot, the walls are mapped to visualise the surroundings.
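The decision step in this example could be sketched as follows. The situation labels and the left-preferring fallback rule are illustrative assumptions; the actual strategic algorithm (Dijkstra / Tremaux) would assign real costs to each option rather than use a fixed preference.

```cpp
// Hedged illustration of the decision step in the T-junction example.
enum class Situation { Corridor, TJunction, DeadEnd };
enum class Action { Forward, TurnLeft, TurnRight, TurnAround };

// Feedback part: choose an action for the situation the robot has
// recognized. At a T-junction, prefer left unless that branch was
// already tried at this decision location.
Action decide(Situation s, bool leftAlreadyTried) {
    switch (s) {
        case Situation::Corridor:  return Action::Forward;
        case Situation::TJunction: return leftAlreadyTried ? Action::TurnRight
                                                           : Action::TurnLeft;
        case Situation::DeadEnd:   return Action::TurnAround;
    }
    return Action::Forward; // unreachable, keeps the compiler happy
}
```

Combined with the saved decision locations from the path memory, this kind of rule lets the robot revisit a T-junction and take the other branch if the first choice led to a dead end.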


Corridor Assignment