System Architecture Robotic Drone Referee
[[File:SystArch roboticdronereferee.png|1000px|thumb|center|Figure 1: Proposed System Architecture]]
Latest revision as of 09:04, 1 April 2016
Any ambitious long-term project starts with a vision of what the end product should do. For the robotic drone referee, this has taken the form of the System Architecture presented in this section. The goal is to provide a possible road map and create a framework to start development, such as the proof of concept described later on in this document. First, the four key drives behind the architecture are discussed and explained. In the second part, a detailed description and overview of the proposed system is given.
System Architecture - Design Choices
- Key drive I: Optimally utilize the limited communication resources:
- Choose a central task control solution: configure only what is necessary for a specific task, i.e. share tasks (advanced skills) and resources (communication) as much as possible.
- Key drive II: Develop a flexible system:
- Choose a system of cooperative agents: a multiple-drone solution amongst which tasks/skills and resources (communication) are shared.
- Key drive III: Develop a scalable system:
- Choose a multiple-drone solution (adding more drone agents is easily feasible).
- Choose an Ultra-Wide Band System for drone localization (adding more drone tags is easily feasible).
- Key drive IV: Develop a system with adaptable accuracy:
- Choose a multiple sensor/drone agent solution: the system becomes fault tolerant due to redundancy of agents.
- Choose an Ultra-Wide Band System (UWBS): the system is accurate, especially in static cases, and becomes more accurate when the number of UWBS tags is increased.
- Choose computer vision, especially for ball/line detection.
Detailed System Architecture
The system architecture consists of six main layers (see Figure 1 below): a world model, a hardware layer, elementary skills, advanced skills, task control and a task layer. The layers are all interconnected bilaterally; communication takes the form of data flow, synchronization, discrete events or configurations. The working principles of, and relations between, the layers are explained through example use cases.
Hardware:
- Drones: The referee system is chosen to be a system of cooperating systems, i.e. a multiple-drone solution is selected. Instead of one drone, multiple drones are used together for refereeing purposes. In this sense, a drone is considered a smart sensor: with a camera on board, it is a smart 6-DOF mobile camera that, on request, is able to perceive a part of the world within its vision.
- Ultra-Wide Band System: Next to the drones, auxiliary sensing equipment is used, e.g. an Ultra-Wide Band System to measure drone position in an external coordinate frame.
- Top camera: External cameras can be used as well, e.g. a top camera to measure drone orientation from a top view.
World Model:
- WM-UWBS: From the world model, drone state can be obtained through measurement processing.
- WM-Sensor Fusion: In the world model, data from the smart sensors and auxiliary equipment are processed and fused for improved accuracy on both the drone state and the game state.
- WM-Field Line Estimator: The world model is also used for prediction of the drone's vision, based on the drone state, in this case its position and orientation. A detected line is expressed in terms of rho (the perpendicular distance from the origin to the line) and theta (the angle of that perpendicular); see figure. With a camera facing downwards, the field of vision depends on the drone height; the number of visible objects (e.g. lines) depends on the x and y position of the drone; and the configuration of these objects in the image frame depends on the drone orientation (yaw angle).
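As a minimal sketch of the field line estimator idea, the snippet below predicts where a known field line (given in rho/theta form in the world frame) appears in the image of a downward-facing pinhole camera, from the drone's x, y, yaw and height. The function name, parameters and the pinhole-camera assumption are illustrative, not taken from the source.

```python
import math

def predict_line_in_image(rho_w, theta_w, x, y, yaw, height, focal_len):
    """Predict a known field line's (rho, theta) in the image frame of a
    downward-facing camera, given the drone pose (hypothetical sketch).

    rho_w, theta_w : line in the world frame (X*cos(theta) + Y*sin(theta) = rho)
    x, y, yaw      : drone pose in the world frame
    height         : camera height above the pitch [m]
    focal_len      : pinhole focal length [px]
    """
    # Translating the frame to the drone shrinks rho by the projection of
    # the drone position onto the line normal.
    rho_c = rho_w - (x * math.cos(theta_w) + y * math.sin(theta_w))
    # Rotating the camera by the yaw angle rotates the line normal back.
    theta_c = theta_w - yaw
    # Keep rho non-negative by flipping the normal direction if needed.
    if rho_c < 0:
        rho_c, theta_c = -rho_c, theta_c + math.pi
    # A downward pinhole camera scales ground distances by focal_len/height,
    # which is why the field of vision depends on the drone height.
    rho_img = rho_c * focal_len / height
    return rho_img, theta_c % (2 * math.pi)
```

This captures the dependencies stated above: rho in the image shrinks as the drone climbs, and theta shifts with the yaw angle.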
Task: Depending on the game state (ball scored, offence, ball out of pitch, ball inside pitch), a high-level refereeing task is selected for the refereeing system, e.g.:
- Penalty ref.: This task is assigned when the game state is: offence against attacking team in penalty region.
- Corner kick ref.: Is activated when the game state is: ball out of pitch through goal line by defending team.
- Goal kick ref.: Is activated when the game state is: ball out of pitch through goal line by attacking team.
- Game monitoring: This task is selected for the drone agents in the "normal" game state. For illustration, the referee system is assigned the task of game monitoring.
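The task layer above is essentially a lookup from game state to refereeing task, with game monitoring as the default. A minimal sketch (the state and task identifiers are assumptions, not from the source):

```python
# Hypothetical mapping from game state to high-level refereeing task,
# following the cases listed above.
TASK_TABLE = {
    "offence_in_penalty_region": "penalty_ref",
    "ball_out_via_goal_line_by_defender": "corner_kick_ref",
    "ball_out_via_goal_line_by_attacker": "goal_kick_ref",
}

def select_task(game_state):
    """Fall back to game monitoring in the 'normal' game state."""
    return TASK_TABLE.get(game_state, "game_monitoring")
```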
Task control: After a high-level task has been selected for the complete refereeing system, an advanced skill is selected for each drone agent. Supervisory control is used to assign the appropriate task to the right drone agent, e.g. after checking whether the drone state meets specific conditions.
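One way to sketch this supervisory check, under the assumption that each drone's state is a simple record and the precondition is a caller-supplied test (all names here are hypothetical):

```python
def assign_skill(drones, skill, precondition):
    """Assign `skill` to the first drone whose state passes `precondition`,
    mimicking a supervisor that checks drone-state conditions first.

    drones       : list of dicts holding each drone's state (mutated in place)
    precondition : callable(drone_state) -> bool
    Returns the id of the chosen drone, or None if no drone qualifies.
    """
    for drone in drones:
        if precondition(drone):
            drone["skill"] = skill
            return drone["id"]
    return None
```

For example, the supervisor might require a sufficient battery level before handing a drone the rule evaluation skill.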
Advanced Skills:
- Position above ball: For example, a drone agent is assigned the task of positioning itself above the ball only if, compared to its neighbors, it is already closest to the ball. For this, the drone only needs to know the state of its neighbors. Intelligence is local, making the system scalable.
- Rule evaluation: Another task that can be assigned to a drone agent is that of rule evaluation. For example, a drone agent is assigned this task if its position is directly above the ball, making it the best candidate for e.g. evaluating whether the ball crosses the side line or goal line.
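The local closest-to-ball rule can be sketched as follows; positions are 2D tuples and the function name is an assumption for illustration:

```python
import math

def should_position_above_ball(my_pos, neighbor_positions, ball_pos):
    """A drone claims the 'position above ball' skill only if it is closer
    to the ball than every neighbor it knows about. The decision uses only
    neighbor states, so the intelligence stays local and scalable."""
    def dist(p):
        return math.hypot(p[0] - ball_pos[0], p[1] - ball_pos[1])
    return all(dist(my_pos) < dist(n) for n in neighbor_positions)
```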
Elementary Skills: Depending on the selected advanced skill, the elementary skills are configured.
- Positioning: In the case the agent is given the advanced skill to position above the ball, a suitable/optimal position will be generated. Positioning is done through e.g. potential field algorithms.
- Trajectory Planning: After positioning, trajectory planning is executed; e.g. via a repulsive-effect algorithm, a reference signal is generated for the agent.
- Motion Control: For the specific reference, the controller generates inputs for the drone agent motors to steer it to the desired position.
- Detection: When a drone agent is given the task to conduct the advanced skill of rule evaluation, the detection elementary skill is activated. The drone onboard camera captures frames. The ball and the field lines are detected through computer vision algorithms.
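To illustrate the positioning and trajectory planning steps, here is a minimal potential field sketch: an attractive pull towards the goal point (e.g. above the ball) plus a repulsive push away from nearby obstacles such as neighboring drones. The gains and the specific potential shape are assumptions, not from the source.

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=2.0, step=0.1):
    """One gradient step on a potential field, returning the next
    reference position for the agent (illustrative sketch).

    pos, goal : 2D tuples; obstacles : list of 2D tuples
    influence : radius within which obstacles exert a repulsive force
    """
    # Attractive force: proportional to the vector towards the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force: only obstacles inside the influence radius push back,
    # with a magnitude that grows sharply as the distance shrinks.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < influence:
            gain = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += gain * dx / d
            fy += gain * dy / d
    return pos[0] + step * fx, pos[1] + step * fy
```

Iterating this step yields a reference trajectory that approaches the goal while being deflected around obstacles, matching the repulsive-effect idea described above.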