User interface and communication model

From Control Systems Technology Group

Revision as of 13:16, 11 June 2018

Introduction

Now armed with the best mechanism for seeding from the Designing the robot section and the lessons learned from the Case studies, we can talk about real robot operation. However, before the robots can be deployed, we still need to specify how they will know what to do. It is already clear how they will go about their job, as it was concluded that a drill mechanism would be best for planting the seeds. This page lays the groundwork for a dedicated interface in which park rangers can specify the details of the reforestation mission to the robots, and for the way a fleet of robots would communicate progress among each other.
General information about the project can be found at PRE2017 4 Groep6.

Envisioned working principles of the interface

After a forest fire has raged through a national park, it will have left an a priori known area devastated which requires reforestation. The parameters defining this area, primarily its location, shape and edges, can be obtained from the observations of the fire fighters or from satellite images of the area taken during the forest fire. Park rangers will know the history of the national park and hence also the composition of the vegetation species in the burnt areas. The goal of the robot is not only to reforest the area but also to restore the biodiversity which was previously present in the burnt area, with the preference that the newly reforested area resembles the lost one as closely as possible.

In order for the robot, or fleet of robots, to operate in such a manner that the above goal can be accomplished, the park rangers will have to communicate the appropriate parameters such as area size and shape, type of seeds, desired species composition, etc. The easiest way to implement the biodiversity requirement would be to employ a fleet or swarm of robots, each with a reservoir filled with only one specific type of seed. The park ranger would then only have to specify to each robot which type of seed it carries, so that the robot can infer from an internal database the properties belonging to those seeds (e.g. diameter, seeding depth, etc.) and thus know exactly how to plant them.
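A minimal sketch of what such an internal seed database could look like is given below. The species names, property values and function names are our own illustrative assumptions, not a finished specification.

```python
# Hypothetical on-board seed database: the ranger only specifies the seed
# type; the robot looks up how to plant it. All values are placeholders.
from dataclasses import dataclass

@dataclass
class SeedProperties:
    diameter_mm: float       # typical seed diameter
    seeding_depth_mm: float  # how deep the drill should plant it
    min_spacing_m: float     # minimum spacing between plants

SEED_DATABASE = {
    "scots_pine": SeedProperties(diameter_mm=4.0, seeding_depth_mm=10.0, min_spacing_m=2.0),
    "oak":        SeedProperties(diameter_mm=25.0, seeding_depth_mm=50.0, min_spacing_m=5.0),
}

def configure_robot(seed_type: str) -> SeedProperties:
    """Return the planting parameters implied by the ranger-specified seed type."""
    return SEED_DATABASE[seed_type]

props = configure_robot("oak")
```

A robot loaded with oak seeds would then use `props.seeding_depth_mm` to set its drill depth and `props.min_spacing_m` in its planting plan.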

One non-trivial question still remains: the robots need to know where to plant the seeds for the reforestation operation to be a success. We cannot just let the robots run amok and spread the seeds completely at random, as this will most likely result in segments of the new forest which are underseeded and only partly recover, or segments which are overseeded, leading to a high degree of competition between the tree species and thereby reducing total turnover. These two effects do not yet include the mutual interference between the robots: it is quite possible that a robot will run over a previously seeded area, disturbing the already sown seeds, or worse, drill through a previously planted seed and destroy it completely. The robots could also damage each other by crashing into one another if no path planning is taken into account, further delaying the operation. This short error analysis reveals two crucial components required to solve the "where" question: each robot needs to be given precise orders as to where it must plant its seeds, and each robot needs to be able to communicate with the rest of the collective, to prevent mutual obstruction.

The problem of knowing which seeds to sow where requires human intervention, as the target area, the relative occupation of the tree species, and the formation of tree colonies need to be specified. Luckily, two of these parameters can be obtained from the park rangers: using thermal satellite images or the mapping of the fire done by fire departments, the park rangers can outline the area in need of reforestation on a map. The most user-friendly way of providing this information to the fleet of robots would be a graphical user interface (GUI) in which a park ranger can click on points of the map to define the edges of the area in need of reforestation. Next, the relative seeding percentages per area have to be defined, which have to mimic the previously existing biodiversity.

If the new area has to be seeded non-uniformly or with a variable tree occupation, a next step in the GUI would be to offer the park ranger the option of partitioning the target area into sub-areas, or grid elements as we will call them from now on. These elements will be polygon-shaped, so as to fit the edges of the perimeter of the total area as closely as possible. For each element, the park ranger can then specify the vegetation composition in more detail, by deselecting plants present in the overall area selection. Once the desired level of refinement of the map has been reached, an optimisation algorithm can calculate the geometrical distribution of tree species within each element and create a point-cloud representation of the elements, colour-coded per tree species, so that the park ranger can confirm that the computed distribution is indeed the desired one.
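As an illustration of this last step, the sketch below generates such a point cloud for a single polygon-shaped grid element by simple rejection sampling. The uniform-density assumption, the function names and the example composition are our own illustrative choices, not the optimisation algorithm itself.

```python
# Sketch: scatter labelled planting points inside one polygon grid element,
# matching the ranger-specified species percentages. Illustrative only.
import random

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon given as [(x, y), ...]?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def seeding_plan(poly, composition, n_points, seed=0):
    """Return n_points of (x, y, species) inside poly, with species drawn
    according to the given composition percentages."""
    rng = random.Random(seed)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    species, weights = zip(*composition.items())
    points = []
    while len(points) < n_points:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            points.append((x, y, rng.choices(species, weights=weights)[0]))
    return points

# Example: a square 10 m x 10 m element, 70% pine / 30% oak.
plan = seeding_plan([(0, 0), (10, 0), (10, 10), (0, 10)],
                    {"pine": 0.7, "oak": 0.3}, n_points=100)
```

In the GUI, each returned point would be drawn on the map in the colour of its species, and the ranger would accept or adjust the result.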
However, for this method to work, the robots need to know their exact position on the map. Seeds will in general not be larger than a few centimetres, so high planting densities could arise for some species (e.g. grasses or flowers), which requires a high degree of precision and control. Intuitively one would use GPS. However, standard civilian GPS equipment, that is, GPS equipment within the price range of the budget national parks have for acquiring a fleet of reforestation robots, is only accurate up to a few metres. This is mostly due to the blocking and backscattering of GPS signals, which cannot be filtered out by most GPS software. Some GPS software exists which can filter out these effects, but it requires a sophisticated antenna costing a few thousand USD (Patel, 2015) [1]. Especially in a forest area, a lot of backscattering of the signal will occur because of the vegetation, which usually does not influence precision, but accuracy.

An alternative solution, which would also immediately solve the problem of how communication between robots of the collective should occur, is to deploy beacons. A set of special beacons can be deployed at the edges of the target area to effectively tell the robots not to venture beyond them, so that the operation is contained within the area. If another set of beacons is then placed, at fixed distances between them, inside the perimeter defined by the special beacons, they can be used to accurately triangulate the position of each robot, provided the beacon density (#beacons/m^2) is adequate. The installation of beacons is, however, cost- and time-intensive. The most promising GPS alternative is BeiDou, which has a public accuracy of 10 cm in the Asia-Pacific region. It is currently in its last developmental stage and will be released from 2020 onwards. As it will most likely take some years beyond the release, say until 2025, before the technology is widely available and accurate all over the world, and given that our technology would still need some time to develop, the beacons can be used during the development and testing stage, after which the system can be updated to the new GPS technology.
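To show what the position fix from the beacons amounts to, the sketch below performs plain 2-D trilateration from three beacons with known positions and measured ranges. Subtracting pairs of circle equations leaves a linear system in the robot's coordinates; the coordinates and distances are illustrative assumptions.

```python
# Sketch: 2-D trilateration of a robot from three range-measuring beacons.
def trilaterate(b1, b2, b3):
    """Each argument is ((x, y), distance). Returns the robot position (x, y).
    Subtracting the circle equations pairwise cancels the quadratic terms,
    leaving a 2x2 linear system in x and y."""
    (x1, y1), d1 = b1
    (x2, y2), d2 = b2
    (x3, y3), d3 = b3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the three beacons are collinear
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

# Example: beacons at (0,0), (10,0), (0,10); ranges measured to a robot at (3,4).
pos = trilaterate(((0, 0), 5.0),
                  ((10, 0), 65 ** 0.5),
                  ((0, 10), 45 ** 0.5))
```

In practice the measured ranges are noisy, so a real implementation would use more than three beacons and a least-squares fit, but the geometry is the same.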


Next, if these beacons are not only equipped with a radio receiver/transmitter for position triangulation, but also provided with the appropriate communication protocols and antennas, they can be used by the robot collective to provide updates on the progress of individual robots. Suppose robot A has finished its job in element E4; it will then send a message to the beacon to inform the other robots in the vicinity, which still have to do their seeding operation in element E4, that the element is currently vacant. Robot B, which has been idle and waiting to plant its seeds, can then enter element E4 and proceed with its seeding plan, whilst avoiding the area previously seeded by robot A so as not to disturb the seeds which were only just planted. Thanks to the position triangulation with the beacons, this can be done by means of a path-planning algorithm within element E4. Of course this example is a simplification, in which it is assumed that only one robot operates in one element at a time to prevent the robots from interfering with each other's tasks; this will most likely not hold in a real planting situation, but it will suffice for building a prototype of the GUI.
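The message relay in this example can be sketched as follows; the message format and class names are our own hypothetical choices, as the real protocol is still to be designed.

```python
# Sketch: a beacon relays an "element vacant" message from robot A to all
# robots in radio range. Message fields are illustrative assumptions.
class Beacon:
    def __init__(self):
        self.subscribers = []  # robots currently in radio range

    def broadcast(self, message):
        for robot in self.subscribers:
            robot.receive(message)

class Robot:
    def __init__(self, name):
        self.name = name
        self.vacant_elements = set()  # elements known to be free to enter

    def receive(self, message):
        if message["type"] == "element_vacant":
            self.vacant_elements.add(message["element"])

    def finish_element(self, beacon, element):
        # Robot A finishing E4 announces through the beacon that it is free.
        beacon.broadcast({"type": "element_vacant",
                          "element": element, "sender": self.name})

beacon = Beacon()
a, b = Robot("A"), Robot("B")
beacon.subscribers = [a, b]
a.finish_element(beacon, "E4")  # robot B now knows E4 is vacant
```

Robot B would consult its `vacant_elements` set before moving, and the same broadcast mechanism can carry other status updates through the beacon network.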

The above example describes a fairly sophisticated level of autonomy, in which the robots intercommunicate to decide which element to seed next. It is, however, likely that unanticipated obstacles will be present during the planting operation. For example, a burnt tree could have fallen down and prevent the robot from planting its seeds. If such a situation occurs, the robot has to obtain new orders about what to do: in the simplest case it could either continue its current planting operation in the element to the best extent possible by avoiding the obstacle, or completely abandon the element and cease the planting operation there. In such a case it is probably best if the park ranger makes this decision, as they will have to arrange for the obstacle to be removed anyway, so it is best if they know about the obstacle's existence as soon as possible and can start the appropriate countermeasures. Therefore, if such a situation were to arise, the robot should send a message to the park ranger, which will pop up in the GUI, asking the park ranger to select from a list of options (again, in the simplest case, the options are to abort or continue the seeding operation in the particular element). If the obstacle does not require immediate action, the robot can be allowed to continue; but if the obstacle does require immediate action (e.g. in the case of a small remaining fire seat, which might still be smouldering after the forest fire), the instruction could be given to abort the operation and let the robot leave the current element. In the latter case, once the option to abandon has been chosen, other robots which still have to seed in the same element will most likely also come across this obstacle, creating a potential clutter of incoming requests from the robots to the park rangers. To prevent this, the beacon system can again be used to propagate this information and tell the robots that the particular element with the obstacle is now off-limits.
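This escalation flow can be sketched as a small piece of mission-state bookkeeping: duplicate obstacle reports for the same element are suppressed, and an "abort" decision marks the element off-limits for the whole fleet. The class and field names are hypothetical.

```python
# Sketch: obstacle reports funnel to the ranger's GUI; an "abort" decision
# closes the element fleet-wide (propagated via the beacons in practice).
class MissionState:
    def __init__(self):
        self.off_limits = set()     # elements closed by the ranger
        self.pending_requests = []  # (robot_id, element) awaiting a decision

    def report_obstacle(self, robot_id, element):
        # Suppress duplicates: an element already closed or already reported
        # should not generate another pop-up for the ranger.
        if element in self.off_limits:
            return
        if any(elem == element for _, elem in self.pending_requests):
            return
        self.pending_requests.append((robot_id, element))

    def ranger_decides(self, element, decision):
        """decision is 'continue' or 'abort', chosen in the GUI pop-up."""
        self.pending_requests = [r for r in self.pending_requests
                                 if r[1] != element]
        if decision == "abort":
            self.off_limits.add(element)

state = MissionState()
state.report_obstacle("A", "E4")
state.report_obstacle("B", "E4")   # duplicate: filtered, no GUI clutter
state.ranger_decides("E4", "abort")
```

After the decision, robots would check `off_limits` (kept in sync through the beacon broadcasts) before entering an element.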

Development of the interface

From the envisioned workings of the interface, a list of design requirements and preferences for the interface is made. This is done taking into account the limited time left in the project course (two weeks) and the desire to leave behind something which can be picked up and developed further by someone else.


Bibliography

  1. Patel, P. (2015). "Cheap Centimeter-Precision GPS For Cars, Drones, Virtual Reality". IEEE Spectrum. Retrieved from: https://spectrum.ieee.org/tech-talk/transportation/self-driving/cheap-centimeterprecision-gps-for-cars-and-drones. Accessed 11 June 2018.