AutoRef system architecture
The system architecture for the AutoRef autonomous referee for RoboCup Middle Size League (MSL) robot soccer is a proposed conceptual model which describes the structure, behavior, and other views of the AutoRef system.
The system architecture is based on the specification of functions as derived from the MSL rulebook (v21.4). In short, this functional specification (as provided by AutoRef MSD 2020) is a breakdown of MSL rulebook laws into robot skills through robot tasks: tasks are statements describing what the AutoRef must do to enforce the rules, written in plain language so as to fully explain referee actions without describing the means by which to achieve them; skills are fundamental abilities which are needed to accomplish a specific task. A systems thinking approach underpins the system architecture.
Recommendations for future work emphasize an updated functional decomposition to synchronize the textual breakdown of law-task-skill and the corresponding game state flow visualization.
System architectures proposed by teams prior to MSD 2020 are available within their respective AutoRef team contribution pages.
Background
A systems thinking analysis by MSD 2020 initiated the development of the system architecture with respect to the AutoRef goal as an autonomous referee system for RoboCup MSL. This process identified two primary stakeholder concerns:
- Fairness, a concern for the RoboCup committee, soccer teams, and spectators.
- Project continuity, a concern for AutoRef stakeholders and teams.
Systems thinking also identified that one of the most important and challenging refereeing duties in ensuring fairness is the ability to enforce all the laws of the MSL rulebook.
An archiving of past work, carried out in parallel with the systems thinking analysis, revealed continuity issues based on the patterns observed in past generations' work. Two main issues identified by this archiving were:
- the lack of an overarching structure and goal for all generations; and
- the lack of an easy and quick overview of what past generations have done and what is yet to be done.
The combined results of systems thinking and archiving led to the idea of creating a global structure which translates laws from the MSL rulebook into enforcement tasks, specifying what a referee must do to enforce the laws. Further work revealed that a structure with a single layer was not sufficient for specifying the referee's functions; thus, a second layer of skills was added. The primary purpose of the skill layer is to describe the kind of information which needs to be collected at MSL matches to perform the enforcement tasks.
A puzzle analogy to understand the system architecture approach
To aid understanding of the problem and the proposed solution, the AutoRef project can be seen as a puzzle with numerous pieces. Team contributions prior to MSD 2020 introduced new puzzle pieces to the collection, but in a very unstructured manner, making it difficult for subsequent teams to integrate their pieces with previously developed ones. The lack of a grid or puzzle layout also made it very difficult for new generations to gain a global understanding of the whole system, so considerable effort was spent trying to understand and analyze the big picture.
The system architecture approach established by MSD 2020 is to introduce this grid and identify all the needed puzzle pieces within the scope of the MSL rulebook. In reality, AutoRef is more complex than a puzzle, and having a grid with all pieces identified is not enough to streamline the development process for all teams, so a visualization showing the connections between different areas of the puzzle was also added. Functional specification is the term given to the development of this grid.
Functional specification
A functional specification works like a blueprint that helps development teams understand how an application will function; it essentially tells developers what features they need to build and why. For AutoRef, the functional specification defines the functionalities needed to enforce the laws in the RoboCup MSL rulebook.
How the functional specification for AutoRef is derived
Using the MSL rulebook's chapters and laws as the starting point, each chapter of the rulebook was analyzed and the laws involved in that chapter were extracted. Each law was then translated into a set of task groups and tasks, and finally each task was translated into a set of skills. The skills represent the most basic functionality within this structure.
One of the most challenging problems was defining the number of layers involved in this transformation and drawing the boundaries between what is considered a task group, a task, or a skill. Another challenge was keeping the skills at the right abstraction level to avoid getting into technical specification, thereby keeping the functional specification as flexible as possible with respect to technical design.
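To illustrate this layered decomposition, the following is a minimal sketch of how the chapter-law-task group-task-skill hierarchy could be represented as a data model. It is not part of the original functional specification; all class and field names are assumptions chosen for readability.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of the law-task-skill hierarchy described above.
# All names are illustrative assumptions, not entries from the MSL
# rulebook or the AutoRef functional specification itself.

@dataclass
class Skill:
    description: str                      # e.g. "know the position of the ball"

@dataclass
class Task:
    description: str                      # human-readable enforcement instruction
    skills: List[Skill] = field(default_factory=list)

@dataclass
class TaskGroup:
    description: str                      # broad task that splits into smaller tasks
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Law:
    number: str                           # e.g. "10.1.1", the smallest subsection in the rulebook
    task_groups: List[TaskGroup] = field(default_factory=list)

@dataclass
class Chapter:
    number: int                           # 1 to 17, following the rulebook's chapter split
    title: str
    laws: List[Law] = field(default_factory=list)
```

Keeping skills as plain descriptions, rather than sensor-specific commands, mirrors the guideline in the next section that skills should stay above the level of technical specification.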
Guidelines to create the database document
Law chapters: The RoboCup MSL rulebook is already split into 17 chapters, each denoted with a number (1 to 17). Each chapter tackles a certain area of the game (fouls, offside, the duration of the match, etc.). The chapters follow the FIFA laws, first stating the FIFA law and then stating and explaining the corresponding RoboCup law. We focused on the RoboCup laws.
Laws: Each chapter contains one or more laws, and a law may include sub-laws as well. For example, Law 10.1 has two sub-laws, 10.1.1 and 10.1.2. The decision on what is considered a law was based on the structure used in the rulebook: 10.1.1 is considered a law as it has no further sub-laws, and 14.1 is likewise considered a law as it has no further sub-laws.
Task groups: As the smallest subsection within the rulebook was sometimes still too broad, an additional layer between laws and tasks was introduced to bridge the gap between them. The general guideline for what is considered a task group is that it is a statement describing a broad task that can be further split into smaller tasks.
Tasks: A task is a level more specific than a task group. The general guideline for what is considered a task is that it is an explanation of what should be done to enforce a law, written in such a way that it is understandable by a human. For example, a task would be to decide whether a goal is scored or not.
Skills: A skill is one level lower than a task. The general guideline for a skill is that it is a command given to a robot without getting into any technical details. The main difference between commands given to a robot and instructions given to a human is that human instructions usually do not include details about what kind of information needs to be extracted from the surroundings, while a robot needs this information. For example, "know the position of the ball" is a skill, but "use a camera to know the position of the ball" is too detailed for our definition of a skill, as it gets into the territory of technical specification. A worked example of this breakdown is sketched below the table.
Task guideline examples

| Task    | Header text |
| ------- | ----------- |
| Example | Example     |
| Example | Example     |
| Example | Example     |
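As a worked example of the guidelines above, the sketch below populates one entry of the law-task-skill hierarchy with the examples mentioned in this section (deciding if a goal is scored as a task, knowing the position of the ball as a skill). The association with Law 10.1.1, the chapter number, and the task group wording are illustrative assumptions only and are not taken from the actual database document.

```python
# Hypothetical breakdown entry using the task and skill examples given above.
# The law number, chapter number, and task group wording are assumptions for
# illustration, not entries copied from the database document.
breakdown_entry = {
    "chapter": 10,
    "law": "10.1.1",
    "task_groups": [
        {
            "description": "Handle goal scoring situations",
            "tasks": [
                {
                    "description": "Decide if a goal is scored or not",
                    "skills": [
                        "Know the position of the ball",
                    ],
                },
            ],
        },
    ],
}

# A quick consistency check: every task should bottom out in at least one
# skill, i.e. the lowest layer of the decomposition is filled in.
for group in breakdown_entry["task_groups"]:
    for task in group["tasks"]:
        assert task["skills"], f"Task without skills: {task['description']}"
```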