PRE2019 3 Group17
=== Time Sheets ===
[[File:group17final.png | 1000px]]
=== Peer Review ===
===Interview===
[[File:IntExmpl.png|thumb|right|350 px|Example UIs we showed during the interview for feedback. Retrieved from https://www.kickstarter.com/projects/openrov/openrov-trident-an-underwater-drone-for-everyone]]
* Triclosan
==== General Coral Reef Information ====
Finally, some general information on where the coral reefs are is useful, as it tells us in what environments our robot might be used. Most of our information comes from the [https://www.unep-wcmc.org/resources-and-data/world-atlas-of-coral-reefs-2001 World Atlas of Coral Reefs (1st edition)], the first chapter of which provides a lot of general explanation of what the coral reefs are. There are many types of corals, but for the coral reefs it is the hermatypic (reef-building) corals that are important. They flourish in warm, shallow water. Hermatypic corals grow extremely slowly, by just a few millimetres each year. Branching corals grow much faster, but even they only grow about 150 millimetres each year. Hermatypic corals lay down a skeleton of calcium carbonate, and these structures form the basis of the coral reefs, since other corals and organisms can then grow on them. This source provides much more information regarding the different types of reefs, their spread over the Earth and the organisms found in reefs, but this more specific information is not relevant at this stage of our project: if it turns out that the exact nature of the corals in a reef has a strong influence on the requirements of the robot, this is a topic worth revisiting, but otherwise this general information is enough.
According to the Scripps Institution of Oceanography at the University of California San Diego: “Coral reefs can be found at depths exceeding 91 m (300 ft), but reef-building corals generally grow best at depths shallower than 70 m (230 ft). The most prolific reefs occupy depths of 18–27 m (60–90 ft), though many of these shallow reefs have been degraded.” [https://scripps.ucsd.edu/projects/coralreefsystems/about-coral-reefs/biology-of-corals/ (Scripps, n.d.)] Studying these degrading coral reefs will likely be one of the main applications of a research-assisting robot, so the information that those coral reefs are largely close to the surface is useful.
=== General Underwater Robotics ===
Operating in water, particularly salt water, has a great impact on the design of a robot. It needs to be well adapted to this environment: it should resist the corrosion caused by the salt water and be waterproof so that the electronics are not damaged. It should also be able to move freely through the water.
==== Movement ====
Many ROVs [Remotely Operated Vehicles] and AUVs [Autonomous Underwater Vehicles] use a wide variety of methods and techniques to navigate underwater. Some employ biomimicry: these robots move through the water in ways inspired by nature. However, most robots oriented at professional users, such as marine researchers, use a number of propellers to move around in all directions underwater, which provides multidirectional movement. It is important that these thrusters are powerful enough to move through the current in the ocean.
Moving along the x- and y-axes is not a great problem: thrusters can be used to push the robot in a direction, or some kind of steering surface could be manipulated to allow for turning. Moving up and down is a bit more complicated. Thrusters could also be used for this, but an underwater robot has an alternative option as well: using buoyancy. If the robot is naturally slightly buoyant, it will start floating up when there is no downward force acting on it. With this construction, thrusters are used for downward movement, while going up just means turning the thrusters off. Alternatively, the density of the robot could be made adjustable by having it suck water into a tank to increase its density and move down, and push the water out again to return to its natural, slightly buoyant state and move up. If this movement system is chosen, it will affect the choice of materials, since the final robot will need to be less dense than salt water.
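To make the buoyancy option more concrete, below is a minimal sketch (with assumed example numbers for hull volume and mass) of the underlying physics: the net upward force on a slightly buoyant hull and the amount of ballast water that would have to be taken in to reach neutral buoyancy. It is an illustration of the principle, not a design calculation for any specific robot.

<syntaxhighlight lang="python">
# Illustrative buoyancy/ballast estimate for a slightly buoyant hull.
# Hull volume and dry mass are assumed example values.

RHO_SALT_WATER = 1025.0  # kg/m^3, typical seawater density
G = 9.81                 # m/s^2, gravitational acceleration

hull_volume = 0.030      # m^3, displaced volume (assumed)
dry_mass = 29.0          # kg, robot mass with empty ballast tank (assumed)

# Net upward force with an empty ballast tank: buoyancy minus weight.
net_force_empty = (RHO_SALT_WATER * hull_volume - dry_mass) * G
print(f"Net upward force, tank empty: {net_force_empty:.1f} N")

# Ballast water needed for neutral buoyancy (weight equal to buoyancy),
# assuming the tank floods without changing the displaced volume.
ballast_for_neutral = RHO_SALT_WATER * hull_volume - dry_mass
print(f"Ballast water for neutral buoyancy: {ballast_for_neutral:.2f} kg")
</syntaxhighlight>

Taking in slightly more than this amount of water makes the robot sink; pumping it out again restores the natural, slightly buoyant state.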
==== Sensors ====
It is likely we will want some form of obstacle avoidance, regardless of whether the movement of the robot is fully automated or controlled by the user. This system will make sure that no collision happens if the user makes a wrong judgment, which is important since no harm should come to the corals.
Sonar is frequently used for collision detection, but the resolution and reliability of sonar sensors degrade close to objects. Since the robot will have to manoeuvre a lot in coral reefs, sonar alone will not work sufficiently (Dunbabin, Dayoub, Lamont & Martin, 2018).
There are real-time vision-based perception approaches that make it possible to provide a robot in coral reefs with collision avoidance. To make this possible, the challenges of real-time vision processing in coral reef environments need to be overcome, which can be done using image enhancement, obstacle detection and visual odometry. Cameras are not used frequently in underwater robots, but they work well in coral reefs since the reefs are quite close to the surface (Spalding, Green & Ravilious, 2001) and the water in these areas is very clear.
Image enhancement makes the processed images more valuable; to achieve this, different types of colour correction are applied. For detection, semantic monocular obstacle detection can be used. Dunbabin et al. (2018) explain that, for shallow-water position estimation, visual odometry combined with operational methods to limit odometry drift was explored and evaluated in early work using a vision-only AUV. This already showed navigation performance errors of less than 8% of distance travelled (Dunbabin et al., 2018). Therefore, visual odometry can be used for the robot.
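As a concrete illustration of the colour correction step, the sketch below applies a simple grey-world white balance to an underwater image using OpenCV and NumPy. This is only one of many possible enhancement methods and is not the specific pipeline used by Dunbabin et al. (2018); the file names are placeholders.

<syntaxhighlight lang="python">
import cv2          # OpenCV, assumed to be available
import numpy as np

def gray_world_balance(image_bgr: np.ndarray) -> np.ndarray:
    """Grey-world white balance: scale each colour channel so that its mean
    matches the overall mean, which reduces the blue/green cast that is
    typical of underwater footage."""
    img = image_bgr.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean of B, G and R
    target = channel_means.mean()                     # overall grey level
    gains = target / np.maximum(channel_means, 1e-6)  # per-channel gain
    balanced = np.clip(img * gains, 0, 255)
    return balanced.astype(np.uint8)

if __name__ == "__main__":
    frame = cv2.imread("reef_frame.png")  # hypothetical input image
    if frame is not None:
        cv2.imwrite("reef_frame_balanced.png", gray_world_balance(frame))
</syntaxhighlight>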
==== Localization ====
[[File:Localization image.png|thumb|right|350 px|'''fig. 1''' Illustrating how an AUV can be located by means of transducers sending out signals to transponders located on the seafloor. ''Note.'' Reprinted from “Absolute Positioning of an Autonomous Underwater Vehicle Using GPS and Acoustic Measurements”, by Kussat, N. H., Chadwell, C. D., & Zimmerman, R., 2005, IEEE Journal of Oceanic Engineering, 30(1), 153–164.]]
As a user, you want to know where your robot is, particularly if you are controlling it from a distance. For this, localization is needed.
GPS is known not to work below the surface and thus cannot on its own be used to determine the location of our underwater robot. It is, however, possible for GPS positions to be transferred underwater. As Kussat, Chadwell, & Zimmerman (2005, p. 156) state, the location of autonomous underwater vehicles (AUVs) can be determined by acquiring GPS ties before and after the dive and integrating the acceleration, velocity, and rotation of the vehicle while it is submerged. However, they go on to state that this method causes an error of 1% of the distance traveled, which means a 10 m error over an AUV track of 1 km. According to them, this error occurs due to the quality of the inertial measurement unit (IMU).
Having such an error in a teleoperated underwater robot can create many issues with relocating where the robot has been. The ocean is still a largely unknown and vast space with constant movement that can disturb the position estimate. This could be problematic if one wanted to do research in the same location again. For this reason, it is highly important to be able to determine an object's location as accurately as possible, both above and under water. Kussat et al. (2005, p. 156) continue by explaining how a much more precise localization for AUVs can be achieved by combining precise underwater acoustic ranging with kinematic GPS positioning.
To use such a method, precise measurement of travel time is required. Travel time data with a resolution of only a couple of microseconds can be acquired by improving the correlator system (Spiess, 1997). This can be done with fixed delay lines and cross-correlation of a coded signal in the transponders (Kussat et al., 2005, p. 156). Kussat et al. (2005, p. 156) continue to explain that the method starts by determining the location of the transducers, usually aboard a ship, by means of kinematic GPS. In their method, transponders were placed on the seafloor, receiving signals from the transducers; this was done so that a coordinate frame could be globally referenced. As a second step, the autonomous underwater vehicles were located relative to these transponders by means of acoustic signals (see fig. 1).
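The geometry behind this second step can be illustrated with a short sketch: given one-way acoustic travel times to transponders at known seafloor positions, the ranges follow from the speed of sound, and the vehicle position can be estimated with non-linear least squares. The transponder positions and travel times below are made-up example values, and this is not the actual processing chain of Kussat et al.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import least_squares  # SciPy assumed available

SOUND_SPEED = 1500.0  # m/s, nominal speed of sound in seawater

# Known transponder positions on the seafloor (x, y, z in metres) - assumed values.
transponders = np.array([
    [  0.0,    0.0, -50.0],
    [200.0,    0.0, -52.0],
    [  0.0,  200.0, -48.0],
    [200.0,  200.0, -51.0],
])

# Measured one-way travel times from the vehicle to each transponder (seconds).
travel_times = np.array([0.0712, 0.0901, 0.0885, 0.1043])
ranges = SOUND_SPEED * travel_times  # travel time converted to distance

def residuals(position):
    """Difference between predicted and measured ranges for a candidate position."""
    return np.linalg.norm(transponders - position, axis=1) - ranges

initial_guess = np.array([100.0, 100.0, -10.0])
solution = least_squares(residuals, initial_guess)
print("Estimated vehicle position (m):", solution.x)
</syntaxhighlight>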
==== User Interaction and Communication ====
If the robot is sent out along a preprogrammed path, it can collect and store all its data while traveling and does not need to transfer it during its movement underwater. It would be good to have the robot send a signal indicating roughly where it is, so that if something goes wrong the users know where to pick it up, but this information is quite simple and does not need to be updated constantly: regular updates of its current position would be enough. On the other hand, if we want the user to be able to control the robot in real time, the user should have access to real-time information about where the robot is and what its surroundings are. This will likely involve video footage and potentially the data from some other sensors as well, for instance close-range sonar data to check whether the robot is about to bump into something in any of the directions the camera is not pointing. This is a lot of information that needs to be transmitted very quickly to the user, and the user should also be able to give commands to the robot that it responds to very quickly. "high-frequency EM signals cannot penetrate and propagate deep in underwater environments. The EM properties of water tend to resist their propagation and cause severe attenuation." (Qureshi et al., 2016). Due to this, a tether must be used for most communication with the robot in the live-control use case. Systems for underwater wireless communication do exist, but they are rare, pricey, and reserved for military and large-scale project use.
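To give a feeling for how little bandwidth the periodic position updates of the pre-programmed mode would need, the sketch below packs a timestamped position fix into a fixed-size binary message. The field layout is our own assumption for illustration, not an existing protocol, and the coordinates are example values.

<syntaxhighlight lang="python">
import struct
import time

# Hypothetical message layout: timestamp, latitude, longitude, depth, battery level.
POSITION_UPDATE_FORMAT = "<dddff"  # little-endian: 3 doubles + 2 floats = 32 bytes

def pack_position_update(lat_deg, lon_deg, depth_m, battery_frac):
    """Encode a single position/status fix as a compact binary message."""
    return struct.pack(POSITION_UPDATE_FORMAT,
                       time.time(), lat_deg, lon_deg, depth_m, battery_frac)

def unpack_position_update(message):
    timestamp, lat, lon, depth, battery = struct.unpack(POSITION_UPDATE_FORMAT, message)
    return {"timestamp": timestamp, "lat": lat, "lon": lon,
            "depth_m": depth, "battery": battery}

msg = pack_position_update(12.1784, -68.2385, 14.5, 0.83)  # example fix
print(len(msg), "bytes per update:", unpack_position_update(msg))
</syntaxhighlight>

Even one such update per second is a negligible amount of data compared to a live video stream, which is what forces the tether in the live-control use case.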
== Robot Design Requirements ==
==== Movement ====
Through interviewing the researchers, we gained insights into the range and methods of movement the robot will need. First and foremost, the robot should have two operation modes: tethered live control (teleoperation) and tetherless pre-programmed routes. These two modes ensure that the robot can be used comfortably and for a wide range of uses; a robot that can be used for many different tasks is, after all, a more cost-effective and sustainable solution. The teleoperated mode would ideally be used to explore new sectors and find new sites to scan underwater. The tetherless operation is intended for scanning and photographing known sites, which eliminates the need for operation by an expert in order to study those sites. The researchers we interviewed were asked about biomimicry, i.e. robots that are inspired by nature in their design and operation; examples are the Aqua2 and the JenniFish. Our conclusion is that biomimicry is not the answer for most types of research done on reefs, because most forms of movement that these robots use are not nearly as stable and precise as a thruster system can be. The way underwater animals move is inherently different from what this robot needs to accomplish in order to provide stability for underwater pictures and precise positioning for measurements, so we decided it is not a suitable movement method for our robot. A multi-propeller system, such as that of the Scubo 2.0, is much preferred, due to its agility and ability to move fluently in all directions. Furthermore, omnidirectional thrusters can be used to stabilize the robot while taking photographs or scans underwater, and steady and clear scans and photographs are, according to the researchers, one of the single most important features this robot can have. The thrusters must be powerful enough to counter some strong currents (up to 0.5 knots) so that the robot can remain stable in the water.
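As a rough sanity check on the 0.5-knot requirement, the sketch below uses the standard drag equation to estimate the thrust needed to hold position against such a current. The frontal area and drag coefficient are assumed example values, not measurements of any particular hull.

<syntaxhighlight lang="python">
# Rough estimate of the thrust needed to hold station in a 0.5-knot current,
# using the drag equation F = 0.5 * rho * Cd * A * v^2.

RHO_SALT_WATER = 1025.0   # kg/m^3, seawater density
KNOT_TO_MS = 0.514444     # 1 knot in m/s

current_speed = 0.5 * KNOT_TO_MS  # m/s
drag_coefficient = 0.9            # assumed, bluff-body-like hull
frontal_area = 0.12               # m^2, assumed frontal cross-section

required_thrust = 0.5 * RHO_SALT_WATER * drag_coefficient * frontal_area * current_speed ** 2
print(f"Thrust needed to hold position in a 0.5-knot current: {required_thrust:.1f} N")
</syntaxhighlight>

Under these assumptions only a few newtons are needed to hold station, so the real sizing constraint is the margin required for manoeuvring and for stronger local currents.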
For controlling upward and downward movement, the method preferred by dr. Meesters and dr. Bos was having the downward movement done by thrusters, while the upward movement is facilitated by building the robot to be slightly buoyant, so that it floats towards the surface when the thrusters are turned off. This will affect the materials of the robot. This method means that the robot can float away from the ocean floor without kicking up sand, which could damage the thrusters and spoil the images taken.
==== UI and Control ====
[[File:Sigma7.jpg|thumb|right|350 px|The Sigma7 haptic feedback control device. Image retrieved from https://www.forcedimension.com/products/sigma-7/gallery/55]]
[[File:User_interface.jpg|350px|thumb|right|An exploration of a preferred user interface]]
User Interface [UI] and control of an ROV are crucial: without an easy method to send instructions to the robot, the operator cannot efficiently carry out the required task, which in research use cases could mean bad data collection and wasted time. Currently, most robots use a few different kinds of input methods to control the robot via a computer interface. Most amateur-oriented robots use on-screen controls on a tablet or smartphone, while most professionally oriented robots use either standard gamepads, joysticks, or more specialised systems for more unique robot operation (such as the sigma.7 haptic controller and foot pedals used to control the Ocean One robot). The use of a standard joystick or gamepad seems to be the most common control input method, likely because this hardware is widely available, easy to operate, and very cost effective (a standard Xbox One controller costs 60 euros (Xbox)). On-screen controls seem to be missing from most serious professional ROVs. The user interface of most of these robots seems to follow a general feature list, while the organization on screen varies slightly from system to system. Features that can be found on the user interface of these robots include: a live camera feed, speedometer, depth meter, signal strength, battery life, orientation of the robot, compass, current robot location, and other less relevant features on some systems.
The system used by the researchers we interviewed uses an Xbox One controller to navigate the robot; they had little to say on this matter, as neither of them are avid gamers or have used this robot yet. This leads us to believe that the specific control input method is less relevant in a general sense; the key point is that it should be standardized in such a way that any user can plug in their controller of choice. This ensures not only that users can use their preferred hardware, but also that they can reuse existing hardware and avoid spending resources on buying another item. Each person has their preferred way to control the robot (which is made increasingly difficult when operating from a moving vessel on the water) and will therefore be more comfortable controlling it.
Furthermore, we discussed the UI with the researchers, and the conclusion was that the less busy the display is, the easier the system is to use. The most important features, we concluded, are: the depth meter, the compass and orientation display, and the current location of the robot. According to the researchers, the most important thing when controlling a robot underwater is knowing where you are at all times (this is why underwater location systems are so complex, expensive, and crucial to these operations). Finally, there should be some way of indicating the scale of a picture and how far away from the corals it was taken, so that their size can be assessed.
In the movement section, two operation methods were discussed: tethered live control and tetherless pre-programmed operation. For the tetherless operation, the UI can be just as crucial, for two distinct reasons. The first is that you want to be able to easily program the path and instructions you want the robot to follow. Most marine researchers will not also be programmers, so the software to program the paths needs to be very intuitive and feature-rich, primarily offering the ability to control the area of operation and the type and size of photographs (or other types of readings) the robot will take. The second use of this software would be similar to the tethered operation but without the live control: it will be for monitoring purposes and ideally has the exact same features as the tethered operation setting, with the small addition of a progress meter to indicate how the route is progressing.
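To make the idea of programming the area of operation and photo spacing concrete, here is a minimal sketch of a lawnmower-pattern route generator. The function name, parameters and waypoint format are our own assumptions for illustration, not an existing planning API.

<syntaxhighlight lang="python">
def lawnmower_route(width_m, length_m, line_spacing_m, photo_interval_m):
    """Generate (x, y, action) waypoints covering a rectangular survey area
    in a back-and-forth ("lawnmower") pattern, with a photo command at a
    fixed spacing along each survey line."""
    waypoints = []
    x = 0.0
    heading_positive = True
    while x <= width_m:
        n_photos = int(length_m // photo_interval_m) + 1
        ys = [i * photo_interval_m for i in range(n_photos)]
        if not heading_positive:
            ys = list(reversed(ys))
        for y in ys:
            waypoints.append((x, y, "photo"))
        x += line_spacing_m
        heading_positive = not heading_positive
    return waypoints

# Example: a 20 m x 30 m plot, survey lines 2 m apart, one photo every metre.
route = lawnmower_route(20.0, 30.0, 2.0, 1.0)
print(len(route), "waypoints, first three:", route[:3])
</syntaxhighlight>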
===Power Source===
In the robots we researched, power was supplied either by batteries, by a tether, or by a combination of both. There are different kinds of tethers: some only provide power, others transfer data, and some do both (Kohanbash, 2016). The most commonly used batteries are lithium based. As mentioned in the online guides from OpenROV, one should be cautious when using lithium batteries as they can be dangerous (OpenROV, n.d.). During the interview, the researchers mentioned that something once went wrong with the charging of a battery and caused a fire on a ship.
As described in the movement chapter, the robot we will be designing operates in two settings. Firstly, the user can program a path via software on a laptop, upload the path, and the robot will autonomously move along it. In this case, batteries alone will suffice to provide power. Without a tether, the robot can move closer to the reefs without the risk of getting entangled. In the second setting, the user controls the robot via a controller with real-time video feedback. In that case, a tether is needed for the data transfer; power can still be supplied by the batteries only.
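For a rough sense of what battery-only operation implies, the sketch below estimates endurance from an assumed battery capacity and average power draw. All numbers are placeholders, not measurements of any specific robot.

<syntaxhighlight lang="python">
# Back-of-the-envelope endurance estimate for battery-only operation.
# Capacity and power figures are assumed placeholder values.

battery_capacity_wh = 500.0   # Wh, assumed lithium battery pack
thruster_power_w = 150.0      # W, assumed average draw while surveying
electronics_power_w = 30.0    # W, camera, lights and computer (assumed)

total_draw_w = thruster_power_w + electronics_power_w
endurance_h = battery_capacity_wh / total_draw_w
print(f"Estimated endurance: {endurance_h:.1f} hours on battery power only")
</syntaxhighlight>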
During the interview, the possibility of incorporating both settings was discussed. The researchers got very excited about this idea and the opportunities it provides for research. As mentioned in the movement chapter, having both functions would make the robot unique.
===Additional Features===
Collision detection is an important feature to keep the robot safe around the coral reefs. As described in the general underwater robotics research section and shown by the RangerBot, obstacle avoidance for obstacles in the direction the camera is facing can be done with visual odometry. Even with a separate camera for movement control next to the one taking the pictures, cameras still cannot cover all sides of the robot. Particularly in the tethered, exploratory application of the robot this could be a problem. In the interview it was suggested that sonar could be used for simple obstacle detection when it is only needed to avoid getting stuck directly under something, so sonar could be used for detecting obstacles directly above the robot.
The M-AUEs use a method for localization that is very similar to the one described in the general underwater robotics research section. The difference is that Kussat et al. (2005, p. 156) use transponders located on the seafloor, whereas the M-AUEs receive the acoustic signal with their built-in hydrophones and respond to floating moorings. The moorings sent out 8-15 kHz signals at an interval of 2 seconds, localizing the M-AUEs with an accuracy of ± 0.5 km (Jaffe et al., 2017), whereas the method with transponders on the seafloor achieves an accuracy of ± 1 m (2-𝜎) for the horizontal positioning of the AUVs (Kussat et al., 2005, p. 156). Which method is best depends on the type of research for which the robot is going to be used. When, for example, globally analyzing the state of the coral reefs, it is less important to have the exact location of where images have been taken than when repeated research in one area is being done, since in that scenario you need to know exactly where each picture was taken.
A feature that was not mentioned for any of the robots, and that also did not come up in our general research into underwater robots, is a sensor that can be used to measure the topographic complexity of the coral reefs. In our interview, it became clear that this is a much-desired feature. Because the importance of this feature was only discovered later on in the project, we could not research it in detail, and the implementation of such a sensor would need to be investigated further. The chain-and-tape method that dr. Meesters described in the interview (where a chain is laid over the corals so that it follows the grooves between them, and the total distance is then measured with a tape) would likely not work for the basic robot we have outlined so far, since the robot would need ways of manipulating the chain, marking where the measurement starts and ends, and retrieving the chain in such a way that the measurement could be completed. This might be a feasible module to add onto the robot, but it could not be part of the basic design. An alternative option is giving the robot a way to detect the highest and lowest point within an area, for which it would only need to measure the distance to a single point; this could be accomplished with sonar or laser distance measurement techniques. A final option is to use a technique similar to the multibeam echosounder that ships use to map the topography of the ocean floor. The validity of these options, and how they are affected by having a robot that can move close to the reefs, would need to be investigated further.
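If the robot can measure the distance to the reef surface along a transect (with a laser or sonar, as suggested above), those samples can be turned into a topographic complexity figure in the same spirit as the chain-and-tape method. Below is a sketch that computes a rugosity index (contour length divided by straight-line length) from a series of height samples; the sample spacing and values are assumptions for illustration.

<syntaxhighlight lang="python">
import math

def rugosity_index(heights_m, sample_spacing_m):
    """Approximate the chain-and-tape rugosity index from evenly spaced
    height samples along a transect: contour length over straight-line length.
    A perfectly flat bottom gives 1.0; rougher reefs give larger values."""
    contour_length = 0.0
    for h_prev, h_next in zip(heights_m, heights_m[1:]):
        contour_length += math.hypot(sample_spacing_m, h_next - h_prev)
    straight_length = sample_spacing_m * (len(heights_m) - 1)
    return contour_length / straight_length

# Example: heights (m) sampled every 0.1 m along a 1 m transect (assumed values).
profile = [0.00, 0.12, 0.30, 0.25, 0.05, 0.18, 0.40, 0.33, 0.10, 0.02, 0.00]
print(f"Rugosity index: {rugosity_index(profile, 0.1):.2f}")
</syntaxhighlight>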
=== Overview Table ===
[[File:16.jpg|550 px|]]
== Final Results ==
=== Guidelines ===
The main result of our project is an understanding of what coral reef researchers need from a robot that helps in their research, and of where those needs are not yet being met by the robots currently on the market. The results of the work we have done here could be used by coral researchers who want to know where the robots they might invest in are lacking, but also by the people developing these robots, to see which features and attributes are worth investing in to make such a robot more appealing and useful for users. As this is very much a project built on collecting and organising information, not all the useful insights we have written down can be translated into concrete development guidelines, but we can summarise the main takeaways in this form.
# For the robot to be able to move around a fragile environment like the coral reefs, it must have reliable obstacle avoidance methods built in. It is worth considering the use of cameras and visual obstacle detection methods for this, since the coral reefs are uniquely suited to this method of obstacle avoidance.
# If the robot uses thrusters to move, it should not kick up sand when moving up and away from the ocean floor. One way of avoiding this is by making the robot slightly naturally buoyant, which will allow it to float up when the thrusters are turned off. Therefore, the density of the materials chosen to build the robot should be slightly below the density of salt water, though the materials should also be resistant to salt water corrosion.
# The main application the robot will likely be used for is taking pictures of coral reefs that are being studied to track their development over time. A camera is therefore needed, as well as lights to make sure the pictures are of high quality. The pictures must be consistent and useful, so the angle of the camera should not change, and the scale at which the pictures are taken must be recorded, as well as the precise location where they were taken. Ideally, there would also be some system for tracking the height of the corals, so that the topographic complexity can be studied.
# If the robot is only going to be moving along known sites of study, it does not need live control functionality. It can follow pre-programmed paths with a few set parameters, like how often it should take a picture and how big the area covered in each picture should be. This method of control is preferred to real-time user control in this application.
# However, real-time user control is a very desirable additional feature. The robot will need to be tethered in this case so that live video footage is possible, and users will likely want more detailed environmental information, like simple sonar-based obstacle detection above the robot. This would allow the robot to be used to explore unknown environments.
# When the robot is being used for research, it is important to know the robot's location so that it is known where the images have been taken. Since GPS does not work underwater, acoustic signals should be used instead to determine where the robot is.
# For ease of use, particularly while the user is on a boat, the interface should be as clean and simple as possible. There should be a live camera feed if the robot is being controlled live, as well as a depth meter, a compass and orientation display, and the current location of the robot.
# The robot will be bought by universities and organizations, not individuals. To make the robot more appealing to these buyers, it is better to give it some basic functionalities and the ability to add modular components for extra applications, rather than build it from the ground up for just a single application.
===Infographic===
== Discussion ==
While there are quite a few underwater robots already out there, a robot built according to our guidelines would be one that is suitable for research, and more specifically for research in coral reefs. Getting in touch with two experts during this project allowed us to take the perspective of the user and understand what needs to be considered when designing such an underwater robot. Since there is only so much you can find online about the functionalities of products, collecting this information by interviewing the users was greatly needed. For us to gather as much valuable information as possible from the interview, it was important to do a lot of research beforehand: by knowing what already exists and by being able to envision ourselves interacting with the robots, higher quality questions could be asked. This preparatory work, however, was somewhat overlooked during this project. As the project only had a duration of 8 weeks and time was spent at the start finding a concrete research question, the interview ended up taking place a bit later in the process than initially hoped, and we were only able to interview two researchers, which limits the generality of our observations. This, and the sudden need to change the form of our research due to COVID-19, took away the opportunity to realize our findings in a working robot, which had been our original intention for validating our design, and a lot of early effort had gone into planning how we would approach building this physical prototype. On the other hand, the research that was eventually conducted has resulted in much more thorough guidelines for how such a robot should be made. These guidelines and their corresponding research can function as a base for future work.
== Process ==
Since the project changed direction a few times, here is an overview of the process of this project. Due to shifting priorities over the course of the project, not everything we did ended up being relevant to our final research.
===Phase 1===
* Discussed our strengths, weaknesses and interests in order to find out what we wanted to do during the project
* Decided that we wanted to focus on coral reefs, a topic with real global importance that would hopefully be specific enough to let us focus on good design for a specific environment
* Wanted to make (parts of) a physical robot
===Phase 2===
* Did a lot of research into coral reefs, what threatens them and how they are being helped. Some of this research ended up in the problem background and general coral reef information sections, but most was never developed in enough detail to be added to the wiki. It was an exploration of the topic
* Came across the topic of acoustic enrichment; a lot of research was done into how it works exactly and how we could use a robot to assist with it
* Decided to make a robot which contains a speaker
[[File:IMG-20200304-WA0007.jpg|250 px|Robot which contains a speaker.]]
* Specified the parts of the robot we would be working on: movement and motor system, design and user interface
[[File:WhatsApp Image 2020-02-13 at 16.25.46.jpeg|250 px|Robot which contains a speaker.]]
* Made sketches to iterate on the shape and design
[[File:shape.jpg|250 px|Shape sketches.]]
* Did research on user interfaces and controllers
* Built a small physical prototype of the way the movement of the robot could be controlled
[[File:WhatsApp_Image_2020-03-26_at_16.30.40.jpeg|250 px|Shape sketches.]]
===Phase 3===
This phase is elaborated, discussed and explained on this wiki.
* The acoustic enrichment focus was let go when it became clear that work in that area would mean focusing more on collecting and analyzing audio databases, which is not what interested us or where our specialties lie. Instead, we decided to focus on making a robot suitable for the coral reefs, with the application still being somewhat open: the researcher can add modules according to their needs
* Decided to step away from actually building the robot and to focus more on making guidelines on what the robot should be able to do and which components it needs. This decision was taken due to the narrow time frame (ordering components and waiting on the delivery would take up about a week). Moreover, coming together to work on the prototype was no longer possible due to the COVID-19 virus and restrictions from the TU/e and government
* Did a lot of research on current robots. Individual robots were studied based on their valuable aspects.
* Research into collision detection and localization was done, since there was a need to really understand how these work before drawing conclusions about these areas.
* Did user research. An interview with researchers was done to collect user needs.
* Finally, combined the research into general underwater robots and coral reefs from earlier in the project with the information from the interview and the research into existing underwater robots, to make guidelines for robots useful for coral reef researchers
== References ==
*[https://www.youtube.com/watch?v=7zjKTvj0lB4 Queensland University of Technology (2018). "RangerBot: The Robo Reef Protector", ''TheQUTube YouTube channel''. Retrieved on March 20, 2020]
*[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4934316/ Qureshi, U., Shaikh, F., Aziz, Z., Shah, S., Sheikh, A., Felemban, E., & Qaisar, S. (2016). RF Path and Absorption Loss Estimation for Underwater Wireless Sensor Networks in Different Water Environments. Sensors, 16(6), 890. doi: 10.3390/s16060890]
*[https://ucsdnews.ucsd.edu/pressrelease/swarm_of_underwater_robots_mimics_ocean_life Reisewitz, A. (2017). "Swarm of Underwater Robots Mimics Ocean Life", ''UC San Diego News Center''. Retrieved March 16, 2020]
*[https://doi.org/10.1016/j.marpolbul.2007.03.022 Rios, L. M., Moore, C., Jones, P. R. (2007). "Persistent organic pollutants carried by synthetic polymers in the ocean environment", ''Marine Pollution Bulletin'', volume 54, issue 8, pages 1230-1237.]
*[https://scripps.ucsd.edu/projects/coralreefsystems/about-coral-reefs/biology-of-corals/ Scripps Institution of Oceanography at the University of California San Diego. (n.d.). "Coral Reef Systems | About Coral Reefs", ''The Scripps Institution of Oceanography website''. Retrieved in February 2020]
*[https://www.unep-wcmc.org/resources-and-data/world-atlas-of-coral-reefs-2001 Spalding, M. D., Green, E. P., & Ravilious, C. (2001). "World Atlas of Coral Reefs (1st edition)", ''Berkeley, Los Angeles, London: University of California Press.'']
*[https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19980214915.pdf Spiess, F. N. (1997). Program for Continued Developement and Use of Ocean Acoustic/GPS Geodetic Techniques. Retrieved from https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19980214915.pdf]
Group
Team Members
Name | ID | Email | Major |
---|---|---|---|
Amit Gelbhart | 1055213 | a.gelbhart@student.tue.nl | Sustainable Innovation |
Marleen Luijten | 1326732 | m.luijten2@student.tue.nl | Industrial Design |
Myrthe Spronck | 1330268 | m.s.c.spronck@student.tue.nl | Computer Science |
Ilvy Stoots | 1329707 | i.n.j.stoots@student.tue.nl | Industrial Design |
Linda Tawafra | 0941352 | l.tawafra@student.tue.nl | Industrial Design |
Time Sheets
Peer Review
Name | Peer review individual relative grades |
---|---|
Amit Gelbhart | 0 |
Marleen Luijten | 0 |
Myrthe Spronck | 0 |
Ilvy Stoots | 0 |
Linda Tawafra | 0 |
Project Goal
The Problem
Due to the acidification and warming of oceans caused by climate change, as well as pollution and destructive fishing practices (Ateweberhan et al., 2013), the amount and variety of coral reefs is declining. Understanding and tracking how these developments happen is key to figuring out ways to protect and help the coral reefs. Monitoring the reefs is labor-intensive and time-intensive work, which often involves having to dive down to take measurements and pictures. This kind of work (structured, often repeated, and in environments humans are not suited to) could benefit greatly from robotic assistance. However, for scientific work in a fragile environment like the coral reefs, not just any robot will do. In this project, we will investigate how to design a specialized robot that will help coral reef researchers to sustain and improve the coral reefs.
Approach
To design a robot specialized for coral reef environments, we will investigate the needs of coral reef researchers as well as the needs of the corals themselves. When using the robot in an already threatened ecosystem, it is not acceptable for it to cause any harm to the corals. Each aspect of the robot will need to be assessed in light of these needs. Interviewing experts will give us insights into both of these categories, and we will also do our own research into the coral reefs. Since this project is focused on the specialized needs of robots that operate near coral reefs, not on all parts of underwater robot design in general, we will base much of our design on existing underwater robots and focus on what needs to be modified.
Deliverables
- An in-depth exploration of current robots similar to our objective, and an assessment of how suitable they are for coral reefs.
- Guidelines for a robot specialized for coral reefs.
- A report on what coral reef researchers desire and how our guidelines meet those needs.
Problem Background
One of the great benefits of coral reefs is that, in case of natural hazards such as coastal storms, a reef can on average reduce wave energy by 97% (Ferrario, 2014). This means it can mitigate the impact of storms and flooding and thus protect coastal inhabitants. Since roughly 40% of the world’s population is located within 100 km of the coast (Ferrario, 2014), protecting the coral reefs would prevent a great amount of damage, not only in regard to human lives but also to the environment. Coral reefs are also a source of income for many people, through the attraction of tourists but also as a source of fish. Coral reef fisheries contribute to the degradation of the reefs through overfishing, but they are also currently extremely important as a source of food and income for many people (Newton et al., 2007). Healthy coral reefs are important for people and the environment, which makes it such a problem that the coral reefs have been degrading for a long time, and their recovery is lacking.
Proper monitoring of the reefs to track their development over time and identify factors that help and harm them is the main task that our research focussed on. However, a specialized coral reef robot could do much more. The initial idea behind this project was to design a robot that would not assist the researchers working with the corals, but would rather directly help the reefs themselves. While this focus was shifted later on in the project to allow us to work more directly on the user aspect, potential other uses for a coral reef robot were always on our minds.
For instance, one of the factors that could prevent the downgrading of a reef is the resistance of a reef, its ability to prevent permanent phase-shifts, and the resilience of a reef, its ability to bounce back from phase-shifts (Nyström et al., 2008). These phase-shifts are undesirable because the reef ends up in a state where it can no longer return to a coral-dominated state (Hughes et al., 2010). If the reef has better resilience, it will be able to bounce back quicker. One of the ways to improve the resilience of the reef is increasing species richness and abundance through the use of acoustics (Gordon et al., 2019), which improves the reef’s resilience by giving it protection from macroalgae (Burkepile and Hay, 2008). For a coral reef to flourish, a wide biodiversity of animals is needed. Fish that lay their larvae on corals are one of the essential components of a healthy reef ecosystem. However, once corals are dying, the fish no longer use them for their larvae and the whole system ends up in a negative cycle. By playing sounds with different frequencies, fish are tricked into believing that the corals are alive and come back with their larvae. This attracts other marine animals, which causes the entire system to flourish again (Gordon et al., 2019).
A good, coral-reef-oriented robot with a modular design could be modified to go down near the reefs to place speakers for acoustic enrichment, which would allow them to be placed faster and in areas potentially dangerous to humans.
Additionally, while our robot discussion is oriented towards the current standard coral reef research method, which is mainly photographing, there are other ways of monitoring reefs that a robot could assist in. Monitoring the sounds of a reef has been suggested as a way of analyzing the fauna in the reef (Lammers et al., 2008), and any number of other sensors could be attached to the robot. Specifications for a good coral-reef-oriented robot can therefore be applied to a wide variety of applications.
User Needs Research
Since the goal of this project is to design a robot that is especially adapted to the coral reefs, the robot can be used for many things. The users of this robot will be researchers who could use it to study coral reefs, interact with them and support them. The robot will be used by everybody who is trying to help the corals by making changes in the reefs or by researching them. Since the robot will have a lot of different possibilities, the users and use cases can differ somewhat. The robot will have varying jobs and attachment modules, so its uses will also differ. The user could be a researcher trying to find out the current state of the coral reefs, but it could also be a member of an organization trying to help the corals through acoustic enrichment or delivering larvae. All members of our user groups are passionate about the coral reefs. One of their priorities will therefore be the safety of the corals, so the robot must not harm its environment. The robot will also need the possibility to connect a variety of attachment modules, since it will be used in a lot of different use cases. We have made contact with members of one of our user groups: an interview was held with researchers at a university to find out what their perspectives on this project are.
Interview
We interviewed Erik Meesters and Oscar Bos, both doctors at the Wageningen University & Research. Dr. Meesters works with coral reefs and specifically studies the long-term development of the reefs in Bonaire and Curacao, and dr. Bos focusses on the preservation of the biodiversity in the North Sea. They recently bought a robot that is helping them with their research. An interview was done with them to find user needs and therefore a series of questions was asked about what they would want from this robot.
We only managed to arrange this meeting further along in our project, when we had already started research into current underwater robots and coral reefs. This made the interview flow better, as we could refer to our findings during it and relate it to the information we already had.
While we intended to let the discussion flow naturally and get as much information on what the researchers want without pushing them in any particular direction with guided questions, we did prepare a list of questions to ask if the discussion ever slowed down and to make sure we covered all relevant parts of our project:
- What are issues you are facing while studying/ managing/ maintaining coral reefs?
- How do you deal with them now?
- How could a robot help?
- Is an underwater robot which is specialized for coral reefs relevant?
- Do you like the robot you bought recently?
- Why/ why not?
- What is the primary reason you chose to invest in this robot over the other robots on the market?
- Which aspects of your robot should be carried over into our design? And are those aspects already good enough or do they need to be optimized further?
- Is it easy to control the robot? If not, is the problem more in the UI or the input method?
- Are you satisfied with the control method? (At this point, we can show the various control methods we have seen for existing robots and ask for their preferences)
- Regarding the UI, what kind of information would you like to be visible on screen while you are controlling the robot?
- What things should we keep in mind regarding useful and safe movement of the robot?
- In videos of current drones moving around coral reefs, they usually float near the top, they do not go into grooves or move between the corals. Would that be a function that would be helpful?
- How much fluidity and freedom do you need in movement? (Here we can show the robots with very complex movement, like the Scubo, and those with more basic movement like the RangerBot)
- Most underwater robots use thrusters, how do you feel about some alternative movement methods? (Here we can show the Aqua2 and the JenniFish)
- What are your concerns about having a robot moving near the corals? Is it just collisions, or also the way it moves through the water?
- How fast do you need the robot to move? Is fast movement necessary for your research? Is there potential danger in having fast-moving propellers?
- Is changing the depth of the robot via buoyancy control a viable method in your opinion?
- What materials would a robot need to be made from to be considered safe?
- Are there certain chemicals to avoid specifically?
- Are there certain battery types you would recommend avoiding?
- Are there any parts of your current drone that concern you in this area?
- What would you consider a reasonable price for a robot that could assist in your research?
- If a robot was available at a reasonable price, what applications would you want to use it for, besides just research?
Interview Results
We found out that the primary application of the robot would be taking photographs of known reefs to monitor their development over time; this already happens, but robots would allow it to go faster and cover larger areas. In the interview, dr. Meesters mentioned this being more worthwhile for researchers than single-application targeted robots like the RangerBot. This focus on research and data collection informs a lot of their needs.
Needs:
- Good lighting is needed for high-quality photographs. If the reefs are very near the surface of the water this is less needed, but ideally the robot would also be useful in deeper reefs and when it is less sunny.
- A positioning system is also important, since the information of where a picture was taken should be recorded to make repeated studies of the same area valuable. The robot that dr. Meesters and dr. Bos described used an acoustic positioning device that measures the relative distance to a pole that is hung into the water from a boat. Sonar or GPS were also suggested. This kind of positioning system can apparently be really expensive; it doubled the cost of their robot.
- A scale on the pictures would be very useful, so they can be compared more easily.
- The topographic complexity of the reef (the variation in coral heights) is very important, so that information should also be recorded. 3D images would be one way to visualize it, but the time cost (of taking the pictures and, more importantly, processing them) is too high compared to their use. Instead, lasers or another distance measuring tool, like a small chain dropped down from the robot, would be much easier. A multibeam echosounder is also worth looking into.
- The closer the robot can get to the corals, the higher quality the pictures will be.
- The pictures should be consistent, so the camera should be steady and preferably not rotate. If it does rotate, the angle should be part of the image data.
- The main concern regarding protecting the corals is not bumping into them. Collision detection is highly recommended. It should not only detect the corals, but also what is ahead and preferably also what is above the robot, so that it does not get stuck under something. This is particularly important in applications where the robot is being controlled live instead of travelling along a pre-programmed route.
- If the robot is being controlled live, rather than travelling along a pre-programmed route, this control should be as simple as possible. The researchers will likely be on a boat as they are controlling the robot, and the movement of the boat and seasickness makes controlling the robot much harder. We cannot take it for granted that everyone is used to gamepad controllers or joysticks.
- For the UI, it should be as clean and simple as possible, with no unnecessary information on screen. However, things like current depth, map position and the angle and direction of the camera would be useful. Multiple screens could make the interface easier to understand and calmer on the eyes.
- The robot need not be fast, since it needs to take clear pictures anyway, which means stopping every so often or moving very slowly. However, the robot should be able to resist the current. We showed some of the weirder movement systems (the Aqua2 and JenniFish), but their preference was for the traditional thrusters, which allow for movement in multiple directions and are more flexible and precise.
Other points:
- There is not actually a need for the robot to go into the grooves of the reef, which was one of our concerns since this would require the robot to be much smaller and more flexible, and would increase the risk of bumping into the corals. Due to the degradation of the reefs, there are no deep grooves anymore, so floating above the reef is sufficient.
- If the robot kicks up sand when thrusting up near the ocean floor, this would hurt the picture quality and the sand could also get into the thrusters and damage them. They recommended making the robot slightly buoyant so that it could float a little bit away from the ocean floor before thrusting up.
- Unexpectedly for us, since we saw a lot of robots advertise their real-time controls, both of the interviewees were very interested in having the robot follow pre-planned routes. Since a lot of the work is monitoring known sites, it would be ideal to be able to tell the robot where to go and how to take the pictures and have it just do that without further input needed. Real-time control is only needed for exploring unknown locations. In that scenario, more complicated movements are needed. Having some of these movements pre-programmed rather than being dependent on advanced user control is recommended, to make control easier.
- A tether causes drag on the robot, which is not good. This problem could be addressed by putting a weight on the line, so that it can rest securely on the ocean floor and be connected to the robot from that point, instead of running directly up from the robot to the boat.
- The safety of the reefs was not a major concern when it came to material; instead, the material's resistance to salt water is the primary concern. If oil and/or grease is used, it should be biologically degradable.
- Batteries can cause problems, not just in use but also in transport. Not all types of batteries are allowed to be taken onto airplanes. Some batteries can also cause explosions when charged, so they are a much bigger concern than we anticipated.
- The budget for a robot like this could range anywhere between 15 thousand and 80 thousand euros. To make a robot more appealing for the university to invest in, a modular design that makes it useful in multiple different projects helps a lot.
- The surveys that dr. Meesters described doing in the Caribbean cover 115 sites with lines of around 150 meters over the reefs, along which divers have to dive down and take pictures. This is exactly the kind of work that robots could assist with. Each picture should cover 0.5 to 1 m2.
==Literature Research==
===Coral Reefs===
To design a robot for use in and around the coral reefs, we needed to research the reefs themselves, their vulnerabilities and general environments. This information is needed so we can make sure the robot does not damage the corals.
====Collisions====
It is important to consider the risks associated with having the robot move close to the corals. If these risks are manageable, then there might be value in making a smaller, more agile robot that can get up close to the reefs if needed. However, if the risks are so great that having the robot too near the reefs is unacceptable, that option can be ignored and the robot can be slightly larger and more bulky, which opens up other design opportunities. Direct contact between the robot and the corals could cause great damage to the corals. Corals are quite fragile, and bumping into them could easily cause them to break or damage their outer layer, which might make them more vulnerable to disease (Barker and Roberts, 2004). Branching corals especially are vulnerable to breaking, as their physical shape makes them very fragile (Hawkins and Roberts, 1997). Additionally, if the robot causes pieces of the coral to break off and this is not recorded, this would not be good for research, as it would seem like pieces of the corals were breaking off more often without clear cause. This makes it clear that physical contact between the robot and the corals should absolutely be avoided. This affects things like collision detection and how close the robot is allowed to get to the corals, but it also means that the robot's movement systems should be able to reliably resist the current, so that it is not pushed into the corals.
====Movement====
The damage might also be more indirect. The propellers could kick up sediment and cause turbidity in the water; if this persists too long, the light penetration to the corals is affected, which can cause them to die off. If these effects persist for a very long time, this could even change the coral reef's biodiversity, as the more tolerant corals survive and the sensitive corals die (PIANC, 2010). The scale of the environmental effects of the robot should therefore be kept in mind: it should not affect the environment in the long term.
====Material====
To keep the robot from damaging the reefs, the materials used should also be reef safe. In discussions surrounding coral reefs, plastics are often mentioned as a problem. When they wrap around corals, they can seal them off from light and oxygen and release toxins which can make the corals ill (Plastic Soup Foundation, 2018) (Lamb et al., 2018). However, these concerns are about loose pieces of plastic that might wrap around the corals, not a robot that will quickly leave again. Other frequent plastic concerns surround plastic pellets, which can spread organic pollutants (Rios, Moore and Jones, 2007), but this is a problem caused by the manufacturing of plastic materials, not the presence of an object made out of plastic in the ocean. Toxins are also not a great concern in this application, due to the extremely slow degradation of plastic (Webb et al., 2013). From all of this we can conclude that using plastic for the robot is not a problem for the health of the corals, as long as the robot is not present in the same place for a long stretch of time. This means that there should be an easy way to retrieve the robot should it break down, since that is the only scenario that would lead to it being stationary in one place for that long.
If waterproof coatings or paints are used, it is important that these do not contain chemicals that might damage the corals. The Haereticus Environmental Laboratory tests various products for chemical pollutants; they specifically recommend avoiding anything that contains:
- Any form of microplastic sphere or beads.
- Any nanoparticles like zinc oxide or titanium dioxide.
- Oxybenzone
- Octinoxate
- 4-methylbenzylidene camphor
- Octocrylene
- Para-aminobenzoic acid (PABA)
- Methyl Paraben
- Ethyl Paraben
- Propyl Paraben
- Butyl Paraben
- Benzyl Paraben
- Triclosan
==== General Coral Reef Information ====
Finally, some general information on where the coral reefs are is useful, as it tells us in what environments our robot might be used. Most of our information comes from the World Atlas of Coral Reefs (1st edition), the first chapter of which provides a lot of general explanation of what the coral reefs are. There are many types of corals, but for the coral reefs it is the hermatypic (reef-building) corals that are important. They flourish in warm, shallow water. Hermatypic corals grow extremely slowly, with just a few millimetres each year. Branching corals grow much faster, but even they only grow about 150 millimetres each year. Hermatypic corals lay down a skeleton of calcium carbonate, and these structures form the basis of the coral reefs, since other corals and organisms can then grow on these structures. This source provides much more information regarding the different types of reefs, their spread over the Earth and the organisms found in reefs, but this more specific information is not relevant at this stage of our project: if it turns out that the exact nature of the corals in a reef has a strong influence on the requirements of the robot, this is a topic worth revisiting, but otherwise this general information is enough.
According to the Scripps Institution of Oceanography at the University of California San Diego: “Coral reefs can be found at depths exceeding 91 m (300 ft), but reef-building corals generally grow best at depths shallower than 70 m (230 ft). The most prolific reefs occupy depths of 18–27 m (60–90 ft), though many of these shallow reefs have been degraded.” (Scripps, n.d.) Studying these degrading coral reefs will likely be one of the main applications of a research assisting robot, so the information that those coral reefs are largely close to the surface is useful.
===General Underwater Robotics===
Operating in water, particularly salt water, has a great impact on the design of a robot. It needs to be well adapted to this environment: it should be able to resist the corrosion caused by the salt water and be waterproof so that the electronics are not damaged. It should also be able to move freely through the water.
====Movement====
Many ROVs [Remotely Operated Vehicles] and AUVs [Autonomous Underwater Vehicles] use a wide variety of different movement methods and techniques to navigate underwater. Some employ biomimicry: these robots move through the water in ways that are inspired by nature. However, most robots that are more oriented at professional users, such as marine researchers, use a number of propellers to move around in all directions underwater. This provides multidirectional movement. It is important that these thrusters are powerful enough to move through the current in the ocean.
Moving along the x- and y-axes is not a great problem: thrusters can be used to push the robot in a direction, or some kind of rudder could be manipulated to allow for turning. Moving up and down is a bit more complicated. Thrusters could also be used for this, but being an underwater robot gives an alternative option as well: using buoyancy. If the robot is naturally slightly buoyant, it will start floating up if there is no downwards force acting on it. This construction would mean that thrusters are used for downward movement, while going up just means turning the thrusters off. Alternatively, the density of the robot could be made variable by having it suck water into a tank to increase the density and move down, and push the water out again to return to its natural, slightly buoyant state and move up. If this movement system is chosen, it will affect the chosen materials for the robot, since the final robot will need to be less dense than salt water.
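To make the "slightly buoyant" idea concrete, the sketch below computes the net vertical force on a robot with its thrusters off. The mass and displaced volume are assumed example values for illustration, not measurements of any existing robot.

<syntaxhighlight lang="python">
# Minimal sketch: with the thrusters off, the net force on the robot is just
# buoyancy minus weight. All numbers are assumptions for illustration only.
RHO_SEAWATER = 1025.0   # kg/m^3, typical density of seawater
G = 9.81                # m/s^2

def net_vertical_force(displaced_volume_m3: float, mass_kg: float) -> float:
    """Positive result means the robot drifts upwards when the thrusters are off."""
    buoyancy = RHO_SEAWATER * displaced_volume_m3 * G
    weight = mass_kg * G
    return buoyancy - weight

# Example: a 12 kg robot displacing 12 litres of water is about 0.3 kg "light",
# so it slowly floats up and only needs thrust to descend or hold depth.
print(net_vertical_force(0.012, 12.0))  # ~2.9 N upward
</syntaxhighlight>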
====Sensors====
It is likely we will want some form of obstacle avoidance, regardless of whether the movement of the robot is fully automated or controlled by the user. This system will make sure that, in case the user makes a wrong judgment, no collision will happen. This is important since no harm should come to the corals. Sonar is frequently used for collision detection, but the resolution and reliability of sonar sensors degrade when close to objects. Since the robot will have to manoeuvre a lot in coral reefs, sonar will not work sufficiently well (Dunbabin, Dayoub, Lamont & Martin, 2018).
There are real-time vision-based perception approaches that make it possible to provide a robot in coral reefs with collision avoidance technologies. To make obstacle avoidance possible, the challenges of real-time vision processing in coral reef environments need to be overcome. To do so, image enhancement, obstacle detection and visual odometry can be used. Cameras are not used frequently in underwater robots, but they will work well in coral reefs since the reefs are quite close to the surface (Spalding, Green & Ravilious, 2001) and the water in these areas is very clear.
Image enhancement is useful for making the processed images more valuable; to do so, different types of colour correction are applied. For detection, semantic monocular obstacle detection can be used. Dunbabin et al. (2018) explain that for shallow-water position estimation, visual odometry combined with operational methods to limit odometry drift was explored and evaluated in early work using a vision-only AUV. This already showed navigation performance errors of <8% of distance travelled (Dunbabin et al., 2018). Therefore this visual odometry can be used for the robot.
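As an illustration of what a simple colour correction could look like, the sketch below applies a gray-world white balance to an image. This is one common basic approach and is not necessarily the method used by Dunbabin et al. (2018); underwater images are typically blue/green-shifted, so such a correction mostly boosts the red channel.

<syntaxhighlight lang="python">
import numpy as np

def gray_world_correction(img: np.ndarray) -> np.ndarray:
    """Simple gray-world colour correction for underwater images.

    img: H x W x 3 array with float values in [0, 1].
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)        # mean of R, G, B
    gains = channel_means.mean() / (channel_means + 1e-8)  # push each channel towards the global mean
    return np.clip(img * gains, 0.0, 1.0)

# Usage (assuming an 8-bit camera frame): corrected = gray_world_correction(raw_frame / 255.0)
</syntaxhighlight>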
====Localization====
As a user, you want to know where your robot is, particularly if you are controlling it from a distance. For this, localization is needed.
GPS is known not to work below the surface and thus cannot be used on its own to determine the location of our underwater robot. It is, however, possible for GPS positions to be transferred underwater. As Kussat, Chadwell, & Zimmerman (2005, p. 156) state, the location of autonomous underwater vehicles (AUVs) can be determined by acquiring GPS ties before and after the subsurface leg and integrating the acceleration, velocity, and rotation of the vehicle during the time it is submerged. However, they go on to state that this method causes an error of 1% of the distance traveled, which means a 10 m error for an AUV track of 1 km. According to them, this error occurs due to the quality of the inertial measurement unit (IMU).
Having such an error in an underwater robot that is being teleoperated can create many issues with relocating where the robot has been. The ocean is still a largely unknown and vast space with constant movement that can continuously disturb position estimates. This could be problematic if one were to want to do research in the same location again. For this reason, it is highly important to be able to determine an object's location as accurately as possible, both above and under water. Kussat et al. (2005, p. 156) continue by explaining how a much more precise localization for AUVs can be achieved by combining precise underwater acoustic ranging and kinematic GPS positioning.
To use such a method, precise measurement of travel time is required. Travel-time data with a resolution of only a couple of microseconds can be acquired by improving the correlator system (Spiess, 1997). This can be done by having fixed delay lines and cross-correlation of a coded signal in the transponders (Kussat et al., 2005, p. 156). Kussat et al. (2005, p. 156) continue to explain that the method starts off by determining the location of the transducers, usually aboard a ship, by means of kinematic GPS. With their method, transponders were placed on the seafloor, receiving signals from the transducers. This was done so a coordinate frame could be globally referenced. As a second step, the autonomous underwater vehicles were located relative to these transponders by means of acoustic signals (see fig. 1).
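As a simplified sketch of the final positioning step, the code below turns acoustic travel times into ranges and solves for the vehicle position by least squares. The transponder coordinates, the sound speed and the use of one-way travel times are assumptions for illustration, not details taken from Kussat et al. (2005).

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1500.0  # m/s, rough speed of sound in seawater (assumption)

# Assumed seafloor transponder positions in a local x, y, z frame (metres).
transponders = np.array([
    [0.0,   0.0,   -30.0],
    [200.0, 0.0,   -32.0],
    [0.0,   200.0, -28.0],
    [200.0, 200.0, -31.0],
])

def locate(travel_times_s, initial_guess=(100.0, 100.0, -10.0)):
    """Estimate the vehicle position from one-way acoustic travel times."""
    ranges = SOUND_SPEED * np.asarray(travel_times_s)

    def residuals(pos):
        # Difference between the distances implied by the guess and the measured ranges.
        return np.linalg.norm(transponders - pos, axis=1) - ranges

    return least_squares(residuals, initial_guess).x
</syntaxhighlight>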
====User Interaction and Communication====
If the robot is sent out along a preprogrammed path, it can collect and store all its data while traveling and does not need to transfer it during its movement underwater. It would be good to have the robot send some signal to the users of roughly where it is, so that if something goes wrong the users know where to pick it up, but this information is quite simple and does not need to be updated constantly; regular updates of its current position would be enough. On the other hand, if we want the user to be able to control the robot in real-time, the user should have access to real-time information about where the robot is and what its surroundings are. This will likely involve video footage and potentially the data from some other sensors as well, for instance close-range sonar data to see if the robot is about to bump into something in any of the directions the camera is not pointing. This is a lot of information that needs to be transmitted very quickly to the user. The user should also be able to give commands to the robot that it responds to very quickly. However, "high-frequency EM signals cannot penetrate and propagate deep in underwater environments. The EM properties of water tend to resist their propagation and cause severe attenuation." (Qureshi, et al., 2016). Due to this, a tether must be used for most communication with the robot in the use-case of live control. While there are systems that exist for underwater wireless communication, they are rare, pricey, and reserved for military and large-scale project use.
==Robot Design Requirements==
From our literature research and our user interview, we can conclude a number of important factors to keep in mind when looking at a robot for coral reef researchers. A brief summary of those points:
- The robot's movement should not damage the reefs or the fish around the reefs.
- This means that obstacle avoidance or some other way of preventing damage to the reefs in case of collision is required. According to dr. Meesters, obstacle avoidance is the way to go here.
- The method of movement also should not damage the reefs, for instance: there should be no danger of fish or parts of the corals getting stuck in the thrusters, if thrusters are used. While we expected this to be important, our interview told us it was not a major concern, mainly since the robot will likely not be getting that close to the corals.
- The movement of the robot should not cause permanent changes to the environment of the reef. If there are no collisions, this is less of a major concern. Kicking up sand when taking off is a problem to consider, but this is less important for the reef environment and more for the quality of the photos and the robot not getting damaged.
- The materials that make up the robot should not be damaging to the coral reefs, even if the robot gets damaged itself.
- Some way of locating and retrieving the robot in case of failure will be needed, to prevent the robot from getting lost and becoming waste in the ocean.
- Batteries should be chosen so as to minimize damage in case of leakage, and they should also be safe to transport and charge.
- Chemicals or materials that might damage the reefs should not be used.
- The robot's physical design should suit the coral reefs.
- Fish should not see it as prey or predators to avoid the robot being damaged by fish or causing too great a disruption.
- Depending on how close the robot moves to the corals, it should not have bits that stick out and might get caught in the corals. Based on our interview there will not be a major need for the robot to move that close to the corals, but a sleek design will still be useful for movement.
- The robot should suit the needs of the researchers (our user group) in use.
- The main task the robot will be assisting with is taking pictures for research into the state and development of the reefs. This means at least one good camera is needed, and the pictures should be taken in such a way that the information is consistent and useful. This means the camera should take all pictures at the same angle and should include information like the scale of the picture and the location the picture was taken at.
- The user interface and control methods should be adapted to the needs of the user; simplicity is key here. The robot should not be too confusing or difficult to control, and more nuanced control might actually make it harder to get consistent results.
- Specific attention should be paid to how the needs of coral reef researchers differ from those of hobbyists or general divers, since our research into existing robots will include robots aimed at different user bases.
- The researchers should want to buy this robot.
- Besides the usefulness of the robot, this involves researching what cost would be acceptable. The main point here seems to be that it is important to consider what features might make a university, research institute or organisation willing to pay for such a product; for instance, the robot would be more likely to be bought if it could be applied in multiple different ways.
==Research into Current Underwater Drones==
We are not the first people to think of using robots in underwater environments or for research, so it is worth considering what already exists on the market. This gives us insight into the standards for underwater robots and what might still be missing for our specific user group.
===Scubo / Scubolino===
Scubo is an ROV built by Tethys Robotics, a robotics team from ETH Zurich (Tethys, 2019). Scubo's defining features are its omnidirectional movement and its modularity. Scubo uses 8 propellers that protrude from its main body in order to allow the tethered robot to move with extreme agility underwater. The robot is made out of a carbon cuboid, which features a hole through the middle for better water flow and cooling of the electronic components. It is constructed to be neutrally buoyant, allowing depth control through natural movement with the 8 propellers. On its body, there are 5 universal ports for modularity. The robot's tether provides power for the onboard batteries, as well as allowing direct control from a computer outside the water. It is controlled with a SpaceMouse joystick. For processing, Tethys say they use an Arduino Due for the hard real-time tasks and an Intel NUC for high-performance calculations (Arduino Team, 2016).
===LarvalBot / RangerBot===
Initially, the Queensland University of Technology (QUT) designed the COTBot to deal with the dangerous crown-of-thorns starfish that threatens coral reefs. This design got improved quite a bit (most importantly, it was reduced in size and cost) to make the RangerBot, which has the same purpose as the COTBot. This design then got shifted around a bit to make the LarvalBot, which is extremely similar to the RangerBot, but instead of killing starfish this robot is used to deliver coral larvae to promote the growth of new coral. All three of the robot designs were specifically for coral reefs. Most sources on the robots focus on the video analysis capabilities most of all, so that seems to be where the innovation of this robot lies, not the coral reef application. The RangerBot has some extra features as well, such as water quality sensors and the ability to collect water samples (Braun, 2018).
Both RangerBot and LarvalBot have 6 thrusters (Zeldovich, 2018) that allow full six degrees-of-freedom control, including hover capabilities (Dunbabin et al., 2019). Both are controlled with an app that, according to the creators, takes just 15 minutes to learn (Dunbabin, August 2018). The LarvalBot follows only preselected paths at a constant altitude, with the release of the larvae being controlled by user input (Dunbabin, November 2018). It is unclear if the RangerBot also follows a pre-selected path, but it is likely, since it operates fully automatically to dispatch the crown-of-thorns starfish and there is no mention of automatic pathfinding in any of the reports.
RangerBot and LarvalBot have 2 stereo cameras (Zeldovich, 2018). The RangerBot uses these for obstacle avoidance amongst other functions; it is not specified if the LarvalBot has obstacle avoidance (Dunbabin, August 2018). RangerBot uses video for obstacle avoidance because the usual method, sonar, does not work in coral reef environments. Underwater use of sonar and ultrasound is difficult either way, but the degree of complexity in coral reefs makes it unmanageable. Video is usually not used for this purpose because deep in the water it quickly gets dark and murky. However, coral reefs are quite close to the surface (or at least, the coral reefs we are concerned with are) and they are in clear water, so this environment is uniquely suited for cameras as a sensor (Dunbabin et al., 2019).
RangerBot weighs 15 kg and is 75 cm long (Dunbabin, August 2018); the LarvalBot will be bigger because it has the larvae on board, but the base robot is the same size. The RangerBot floats above the reefs and can only reach the easy-to-reach crown-of-thorns starfish (Zeldovich, 2018), which is why it can have the slightly bulky shape and the handle at the top without bumping into the reefs being a concern. There is unfortunately no information available online on what materials the robot was made out of.
RangerBot and LarvalBot can both be untethered. However, the current version of LarvalBot is still tethered (Dunbabin, December 2018). It is not mentioned in the report why, but it can be assumed this is because you need reliable real-time video to be able to tell the LarvalBot to drop the larvae at the right time. Without a tether, the robot can last 8 hours and it has rechargeable batteries (Queensland University of Technology, 2018).
===OpenROV v2.8===
The purpose of the OpenROV robot kit is to bring an affordable underwater drone to the market. It costs about 800 euro to buy the kit and 1300 euro to buy the assembled robot (OpenROV, n.d.). The user is challenged to correctly assemble the robot, a fun and rewarding experience. Once the robot is made, the user is free to decide where and how to use the robot. The most common use is exploring the underwater world and making underwater videos.
The robot is remotely controlled and has a thin tether. Via a software program, which is provided and can be installed on a computer, the user is able to control the robot. The camera on the robot provides live feedback on the robot's surroundings and the way it reacts to direction changes.
Two horizontal and one vertical motor allow the robot to smoothly move through the water. When assembling the robot, one should pay special attention to the pitch direction of the propellers on the horizontal motors. As the horizontal motors are counter-rotating, the pitch directions of the propellers differ.
Other crucial parts are a Genius WideCam F100 camera, a BeagleBone Black processor, an Arduino, a tether, lithium batteries and extra weights.
All the components are held in place by a frame and stored in waterproof casings. The frame and multiple casings are made from laser-cut acrylic parts. The acrylic parts are glued together with either acrylic cement or super glue. When wires are soldered together, heat-shrink tubing is used to secure a waterproof connection. Epoxy is used to fill up areas between wires to prevent leakage.
The design of the robot does not look very streamlined, but it moves surprisingly smoothly. A drawback of the design is that the lower part, which contains the batteries, is always visible on the camera feed. This can be annoying when watching the video.
The robot is relatively small and can be compared to the size of a toaster. This is ideal for moving through places such as shipwrecks and coral reefs.
The robot uses 26650 (26.5 mm × 65.4 mm) lithium batteries. The batteries need to be charged beforehand. The guide warns the user that the batteries can be dangerous and must be charged at the correct voltage, which is 3 V.
===Aqua2===
The Aqua2 robot is designed to assist human divers. It is currently employed for monitoring underwater environments and provides insights regarding robotic research and propulsion methods. The robot can go 30 meters deep and has a maximum speed of 1 m/s (AUVAC, n.d.).
The robot can operate autonomously or be controlled via a tablet. The tablet is waterproof, so that the divers can control the robot while being underwater. When the user tilts the tablet in a certain direction, the robot moves in the same direction (De Lange, 2010).
Aqua2 has six flippers which move independently. The fins move slowly, which results in little disturbance of the robot's surroundings. The robot can also be deployed on land, where it walks at a speed of 0.7 m/s.
The rectangular body of the Aqua2 robot is made from aluminium. The flippers are made from vinyl and have steel springs inside. These materials are resistant to sea water.
The robot is powered through two lithium batteries. The batteries have a voltage of 28.8 V and a capacity of 7.2 Ah. After 5 hours, the batteries must be recharged, which takes 8 hours.
It is equipped with image collection and processing software. This allows the robot to operate autonomously. The image processing is made available through the ROS development environment and the OpenCV vision library.
===Ocean One===
The robot is designed for research purposes. Due to its design, the robot can take over tasks from human divers, such as exploring archaeological sites or collecting specimens. Moreover, it can access places where human divers cannot go, such as places deeper than 50 meters.
The Ocean One has a high level of autonomy and is controlled via an intuitive, haptic interface. The interface “provides visual and haptic feedback together with a user command center (UCC) that displays data from other sensors and sources” (Khatib et al., 2016, p.21) . There are two haptic devices (sigma.7), a 3D display, foot pedals, and a graphical UCC. A relay station, which is connected to the controller, allows the robot to function without a tether.
“The body is actuated by eight thrusters, four on each side of the body. Four thrusters control the yaw motion and planar translations, while four others control the vertical translation, pitch, and roll. This thruster redundancy allows full maneuverability in the event of a single thruster failure” (Khatib et al., 2016, p.21).
It has an anthropomorphic shape as it should have the same capabilities as human divers. The design of the robot’s hands allow “delicate handling of samples, artifacts, and other irregularly-shaped objects” (Khatib et al., 2016, p.21). “The lower body is designed for efficient underwater navigation, while the upper body is conceived in an anthropomorphic form that offers a transparent embodiment of the human’s interactions with the environment through the haptic-visual interface” (Khatib et al., 2016, p.21).
The relay station can be used as a nearby charging station.
In the paper, the authors seem to hint that the robot is equipped with a collision detection system. The hardware and software behind it are not explained.
===M-AUE===
The M-AUEs were developed to do research into plankton in the ocean; they study environmental processes in the ocean (Reisewitz, 2017).
The idea of the M-AUEs is that they move just like plankton in the ocean: by adjusting their buoyancy (through programming) they go up and down while drifting with the current (Reisewitz, 2017). The author goes on to state that the M-AUEs move vertically against the currents that are caused by the internal waves; they do this by repeatedly changing their buoyancy.
“The big engineering breakthroughs were to make the M-AUEs small, inexpensive, and able to be tracked continuously underwater,” said Jaffe (inventor of the M-AUEs) in Reisewitz (2017).
The M-AUEs are designed not to go deep enough to come close to the seafloor. This means that, if they were used for coral research, they would not manoeuvre through the corals but instead hover above them. The M-AUEs are preprogrammed with a PID control algorithm, meaning that they sense the depth they are currently at and adjust their buoyancy to reach the desired depth.
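As an illustration of what such a depth loop could look like, the sketch below implements a basic PID controller that maps a depth error to a dive command. The gains and the mapping to the buoyancy actuator are placeholders for illustration, not values taken from Jaffe et al. (2017).

<syntaxhighlight lang="python">
class DepthPID:
    """Minimal PID loop: sensed depth in, buoyancy/thruster command out.
    Gains are placeholders and would need tuning on a real vehicle."""

    def __init__(self, kp=1.0, ki=0.05, kd=0.5, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_depth_m: float, measured_depth_m: float) -> float:
        error = target_depth_m - measured_depth_m
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Positive output = push deeper (e.g. take in ballast water or thrust down).
        return self.kp * error + self.ki * self.integral + self.kd * derivative
</syntaxhighlight>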
According to Reisewitz (2017), acoustic signals are used to keep track of the M-AUEs while they are submerged, since GPS does not work under water. Three-dimensional information on their location is received every 12 seconds, showing where exactly the M-AUEs are inside the ocean. Augliere (2019) says that GPS-equipped moorings (which float on the surface of the ocean) are used to send out sonar pings at those 12-second intervals. These pings are then received by the hydrophones of the M-AUEs (which are located around 50 meters below the surface), giving data which researchers can use to localize them.
As stated by Augliere (2019), the M-AUEs have batteries that provide power for several days and data storage that lasts just as long. “The system is powered by a battery pack made from 6 Tenergy 1.2 V, 5000 mA hour NiMH cells in a series configuration. The battery pack is fused with a 2 A fuse and is recharged at an optimal rate of 600 mA” (Jaffe et al., 2017).
The size of the M-AUEs is comparable to that of a grapefruit. They consist of two concentric shells made out of syntactic foam (Jaffe et al., 2017), which slide over each other. Inside these shells the batteries, a piston and an antenna are positioned, and on the bottom, on the outside of the shell, multiple sensors are placed.
Regarding possible future work, Reisewitz (2017) states that adding a camera to the M-AUEs would provide images of the state of the ocean, making it possible to map the coral habitats and the movement of larvae, or to do further research on creatures in the ocean such as plankton. Enhancing the robot with audio recording devices (hydrophones) would let the M-AUEs function as an ‘ear’ by keeping track of the sounds in the ocean.
===JenniFish===
Designed at Florida Atlantic University (Florida Atlantic University, 2019), this robot is based on the way jellyfish move through the water to give an alternative to the standard underwater robot design of a hard body propelled forward by thrusters. Coral reefs were amongst the environments that this robot is intended to be used in. This design is not well suited to our needs, as its erratic movement makes it unsuitable for taking pictures (Turner, 2018). It is also too small to really have the variability we want for a modular design. However, the robot does present an interesting alternative way of movement, and gives some insight into what kinds of soft materials might be used near the coral reefs, should we come to the conclusion that we want some kind of shock absorption material.
The JenniFish is slightly negatively buoyant, but can propel itself upwards by filling its "tentacles" with water and pushing this water out again. These tentacles are soft, which allows the JenniFish to squeeze through narrow spaces. Various plastics, including polyester, silicone and plexiglass, were used for this robot. Two 6V submersible impeller pumps were used to control the absorption and ejection of water. A 9V primary cell alkaline battery was used to power the system (Frame, 2016), though it should be noted that this description comes from an early prototype design and the choice of battery was largely motivated by price and availability; this might not be the best choice for the final product.
==Research Conclusions==
From our research into user needs, coral reef needs, underwater robotics in general and current underwater robots already used in practice, we can draw conclusions on what a good robot for coral reef researchers would need to look like.
===Movement===
Through interviewing the researchers, we gained some insights on the required range and methods of movement the robot will need in order to be sufficient. First and foremost, the robot should have 2 operation modes to be optimal: tethered live control (teleoperation) and tetherless pre-programmed routes. These 2 modes ensure that the robot can be used comfortably and for a wide range of uses; a robot that can be used for many different tasks is, after all, a more cost-effective and sustainable solution. The teleoperated mode would ideally be used to explore new sectors and find new sites to scan underwater. The tetherless operation is intended for the scanning and photographing of known sites, which eliminates the need for operation by an expert in order to study certain sites. The researchers we interviewed were asked about biomimicry: robots that are inspired by nature for their design and operation, such as the Aqua2 and the JenniFish. Our conclusion is that biomimicry is not the answer for most types of research done on reefs, because most forms of movement that these robots use are not nearly as stable and precise as a thruster system can be. The way underwater animals move is inherently different from what this robot needs to accomplish: providing stability for underwater pictures and precise positioning for measurement. Therefore we decided that it is not a suitable movement method for our robot, and a multi-propeller system, such as that of the Scubo 2.0, is much preferred, due to its agility and ability to move fluently in all directions. Furthermore, omnidirectional thrusters can be used to stabilize the robot while taking photographs or scans underwater, and steady and clear scans and photographs are, according to the researchers, one of the single most important features this robot can have. The thrusters must be powerful enough to counter reasonably strong currents (up to 0.5 knots) in order to remain stable in the water.
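To give a feel for the amount of thrust this requires, the sketch below estimates the drag on the robot in a 0.5 knot current using the standard drag equation; the drag coefficient and frontal area are rough assumptions for a boxy ROV, not measured values.

<syntaxhighlight lang="python">
# Rough station-keeping estimate: drag force F = 0.5 * rho * Cd * A * v^2.
# Cd and frontal area are guesses for a blunt, box-shaped ROV.
RHO_SEAWATER = 1025.0        # kg/m^3
CURRENT_SPEED = 0.5 * 0.514  # 0.5 knots in m/s (~0.26 m/s)
DRAG_COEFF = 1.0             # assumed drag coefficient
FRONTAL_AREA = 0.1           # m^2, assumed frontal area

drag_force = 0.5 * RHO_SEAWATER * DRAG_COEFF * FRONTAL_AREA * CURRENT_SPEED ** 2
print(f"Thrust needed to hold station: ~{drag_force:.1f} N")  # ~3.4 N
</syntaxhighlight>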
For controlling upwards and downwards movement, the method preferred by dr. Meesters and dr. Bos was to have the downward movement done by thrusters and to facilitate the upward movement by building the robot to be slightly buoyant, so that it floats towards the surface when the thrusters are turned off. This will affect the materials of the robot. This method means that the robot can float away from the ocean floor without kicking up sand, which could damage the thrusters and ruin the images taken.
===UI and Control===
User Interface [UI] and control of an ROV are crucial: without an easy method to send instructions to the robot, the operator cannot efficiently carry out the required task, and in research use cases this could mean bad data collection and wasted time. Currently most robots use a few different kinds of input methods to control the robot via a computer interface. Most amateur-oriented robots use on-screen controls on a tablet or smartphone, while most professionally oriented robots use either standard gamepads, joysticks, or more specialised systems for a more unique robot operation (such as the sigma.7 haptic controller and foot pedals used to control the Ocean One robot). The use of a standard joystick or gamepad seems to be the most common control input method, which is likely due to these hardware devices being widely available, easy to operate, and very cost-effective (a standard Xbox One controller costs 60 euros (Xbox)). On-screen controls seem to be missing from most serious professional ROVs. The user interface of most of these robots seems to follow a general feature list, while the organization on screen varies slightly from system to system. Some features that can be found on the user interface of these robots include: a live camera feed, speedometer, depth meter, signal strength, battery life, orientation of the robot, compass, current robot location, and other less relevant features on some systems.
The system used by the researchers we interviewed uses an Xbox One controller to navigate the robot; they had little to say on this matter, as neither of them are avid gamers or have used this robot yet. This leads us to believe that the specific control input method is less relevant in a general sense; the key point is that it should be standardized in such a way that any user can plug in their controller of choice. This ensures not only that users are using their preferred hardware, but also that they can use existing hardware and avoid having to spend more resources on buying another item. Each person will have their preferred method to control the robot (which is made increasingly difficult when operating from a moving vessel on the water) and will therefore be more comfortable in controlling it.
Furthermore, we discussed the UI with the researchers, and the conclusion was that the less busy the display is, the easier the system is to use. The most important features, we concluded, are the depth meter, the compass and orientation display, and the current location of the robot. According to the researchers, the most important thing when controlling a robot underwater is knowing where you are at all times (this is why underwater location systems are so complex, expensive, and crucial to these operations). Finally, there should be some way of indicating the scale of the picture, i.e. how far away from the corals it was taken, so that their size can be assessed.
In the movement section, 2 operation methods were discussed: tethered live control and tetherless pre-programmed operation. For the tetherless operation, UI can be just as crucial, for two distinct reasons. The first is that you want to be able to easily program the path and instructions you want the robot to follow. Most marine researchers will not also be programmers, so the software to program the paths needs to be very intuitive, with the primary features being the ability to control the area of operation and the type and size of photographs (or other types of readings) the robot will take. The second use of this software would be similar to the tethered operation, except without the live control: it would be for monitoring purposes, and ideally has the exact same features as the tethered operation setting, with the small addition of a progress meter to indicate how the route is progressing.
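As a hypothetical illustration of how simple such path programming could be made, the sketch below describes a survey mission as a small data structure. All field names and default values are invented for illustration and do not correspond to any existing software.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SurveyMission:
    """Hypothetical mission description for the pre-programmed mode."""
    waypoints: List[Tuple[float, float]]  # (x, y) positions in a local site frame, metres
    altitude_m: float = 1.5               # height above the reef to fly at
    photo_spacing_m: float = 1.0          # distance between photographs along the track
    photo_footprint_m2: float = 0.75      # target area per picture (interview: 0.5 to 1 m^2)
    notes: str = ""

mission = SurveyMission(
    waypoints=[(0, 0), (150, 0), (150, 5), (0, 5)],  # a 150 m transect, out and back
    notes="Site 12, repeat of last year's survey",
)
</syntaxhighlight>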
===Materials and Shape===
Before the user interview, we had assumed that the shape of the robot would be very important in letting it safely move near the reefs. After the interview, it became clear that the robot will not be getting that up close and personal with the corals, so the shape does not need to be adapted to this. Instead, the ease of movement and the option of putting various components, like extra cameras or sensors, on the robot is much more important. A simple, squarish shape like that of the Scubo or the OpenROV is good enough. When it comes to specific guidelines for robots for coral reef researchers, shape is not a major concern, so there is no strong recommendation to give here.
For the materials, the main points are that they should stand up to salt water and that the final robot should be naturally slightly buoyant, to allow it to float up slowly without needing to use its thrusters near the ocean floor. This limits the use of metal: stainless steel is a nice material that is quite resistant to salt water, but its density (about 7 times that of salt water) means that it cannot be used in large quantities. The use of plastics is therefore more viable for large parts of the robot. The specific type of plastic will depend on the hardness and flexibility requirements for specific parts of the robot, but this is no longer part of the guidelines specific to coral reef research robots.
In general, the shape and materials of the robot turned out to not need to be changed that much for the coral reef application, since the safety of the corals will largely be covered by the obstacle avoidance systems. Some of the more unique designs, like the JenniFish and the Ocean One, are very interesting but do not give any benefits for the functions that our users want.
===Power Source===
In the robots which were researched, power was supplied either by batteries, a tether or a combination of both. There are different kinds of tethers: some tethers only provide power, others transfer data and some do both (Kohanbash, 2016). The most commonly used batteries are lithium based. As mentioned in the online guides from OpenROV, one should be cautious when using lithium batteries, as they can be dangerous (OpenROV, n.d.). During the interview, the researchers said that once something went wrong with the charging of a battery and caused a fire on a ship.
As described in the movement chapter, the robot we will be designing operates in two settings. Firstly, the user can program a path via software on a laptop, upload the path and the robot will autonomously move along it. In this case, the batteries alone will suffice in providing power. Without a tether, the robot can move closer to the reefs without the risk of getting entangled. In the second setting, the user can control the robot via a controller and real-time video feedback. In that case, a tether is needed for the data transfer. Power can still be supplied by the batteries only.
During the interview, the possibility of incorporating both settings was discussed. The researchers got very excited about this idea and the opportunities it provides for research. As mentioned in the movement chapter, having both functions would make the robot unique.
===Additional Features===
Collision detection is an important feature to keep the robot safe around the coral reefs. As described in the general underwater robotics research section and shown by the RangerBot, obstacle avoidance in the direction the robot is moving (and the camera is facing) can be done with visual odometry. Even with a separate camera for movement control and taking the pictures, a camera still cannot cover all sides of the robot. Particularly in the tethered, exploration application of the robot this could be a problem. In the interview it was suggested that sonar could be used for simple obstacle detection when it is just used to avoid getting stuck directly under something, so sonar could be used for detecting obstacles directly above the robot.
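As a minimal sketch of how such an upward-facing sonar reading could be used, the code below simply vetoes ascent commands when something is detected too close above the robot; the clearance threshold is an assumed value.

<syntaxhighlight lang="python">
MIN_OVERHEAD_CLEARANCE_M = 1.0  # assumed safety margin above the robot

def safe_vertical_command(requested_ascent_speed: float, overhead_range_m: float) -> float:
    """Block upward motion when the upward-facing sonar sees something too close."""
    if requested_ascent_speed > 0 and overhead_range_m < MIN_OVERHEAD_CLEARANCE_M:
        return 0.0  # hold depth instead of rising into an obstacle
    return requested_ascent_speed
</syntaxhighlight>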
The M-AUEs use a method for localization that is very similar to the one described in the general underwater robotics research section. The difference is that Kussat et al. (2005, p. 156) use transponders that are located on the seafloor, whereas the M-AUEs receive the acoustic signal with their built-in hydrophones and respond back to floating moorings. The moorings send out 8-15 kHz signals at a time interval of 2 seconds, localizing the M-AUEs with an accuracy of ± 0.5 km (Jaffe et al., 2017), whereas the method using transponders on the seafloor achieves an accuracy of ± 1 m (2-σ) for the horizontal positioning of the AUVs (Kussat et al., 2005, p. 156). Which method is best to use depends on the type of research for which a robot is going to be used. When, for example, globally analyzing the state of the coral reefs, it is less important to know the exact location of where images have been taken than when repeated research in one area is being done, since in that scenario you need to know exactly where each picture was taken.
A feature that is not mentioned for any of the robots, and also did not come up in our general research into underwater robots, is a sensor that can be used to measure the topographic complexity of the coral reefs. In our interview, it became clear that this is a much desired feature. Because the importance of this feature was only discovered later on in the project, we could not research it in detail, and the implementation of this sensor is something that would need to be investigated further. The chain-and-tape method that dr. Meesters described in the interview (where a chain is laid over the corals so that it follows the grooves between them, and the total distance is then measured with a tape) would likely not work for the basic robot we have outlined so far, since it would need ways of manipulating the chain, marking where the measurement starts and ends and retrieving the chain in such a way that the measurement could be done. While this might be a feasible module to add onto the robot, it could not be part of the basic design. An alternative option is giving the robot a way to detect the highest and lowest point within an area; for this it would only need to be able to measure the distance to a single point, which could be accomplished with sonar or laser distance measurement techniques. A final option is to use a technique similar to the multibeam echosounder that ships use to map the topography of the ocean floor. The validity of these options, and how they are affected by having a robot that can move close to the reefs, would need to be further investigated.
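As a sketch of how a laser or sonar depth profile could be turned into a simple topographic-complexity measure, the code below computes a rugosity index (contour length divided by straight-line length), which is the digital analogue of the chain-and-tape method. The sampling spacing and the toy profile are assumptions for illustration only.

<syntaxhighlight lang="python">
import numpy as np

def rugosity_index(depths_m: np.ndarray, spacing_m: float) -> float:
    """Contour length over straight-line length, like the chain-and-tape method.

    depths_m: depth readings taken at regular horizontal intervals along a transect.
    A flat bottom gives 1.0; more complex reef structure gives larger values.
    """
    dz = np.diff(depths_m)
    contour_length = np.sum(np.sqrt(spacing_m ** 2 + dz ** 2))
    straight_length = spacing_m * (len(depths_m) - 1)
    return contour_length / straight_length

# Example with a toy profile sampled every 0.1 m:
profile = np.array([5.0, 5.2, 4.9, 5.4, 5.1, 5.0])
print(rugosity_index(profile, 0.1))  # > 1, indicating some structural complexity
</syntaxhighlight>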
===Overview Table===
==Final Results==
===Guidelines===
The main result of our project is an understanding of what coral reef researchers need from a robot to help in their research, and of where those needs are not yet being met by the robots currently on the market. The results of the work we have done here could be used by coral researchers themselves who want to know where the robots that they might invest in might be lacking, but they could also be used by the people developing these robots to see what features and attributes are worth investing in to make such a robot more appealing and useful for users. As this is very much a project that was built on collecting and organising information, not all the useful insights we have written down can necessarily be translated to concrete development guidelines, but we can summarise the main takeaways in this form.
- For the robot to be able to move around a fragile environment like the coral reefs, it must have reliable obstacle avoidance methods built in. It is worth considering the use of cameras and visual obstacle detection methods for this, since the coral reefs are uniquely suited to this method of obstacle avoidance.
- If the robot uses thrusters to move, it should not kick up sand when moving up and away from the ocean floor. One of the ways of avoiding this is by making the robot slightly naturally buoyant, which will allow it to float up if the engines are turned off. Therefore, the density of the materials chosen to build the robot should be slightly below the density of salt water, though the materials should also be resistant to salt water corrosion.
- The main application the robot will likely be used for is taking pictures of coral reefs that are being studied to track their development over time. A camera is therefore needed and lights to make sure the pictures are of high quality. The pictures must be consistent and useful, so the angle of the camera should not change, the scale that the pictures are taken at must be recorded, as well as the precise location where they were taken. Ideally, there would be some system for tracking the height of the corals as well, so that the topographic complexity can be studied.
- If the robot is only going to be moving along known sites of study, it does not need to have live control functionality. It can follow pre-programmed paths with a few set parameters like how often it should take a picture, how big the area covered in each picture should be, etc. This method of control is preferred to real-time user control in this application.
- However, the additional option of real-time user control is a very desirable additional feature. The robot will need to be tethered in this case so that live video footage is possible, and users will likely want more detailed environmental information like simple sonar-based obstacle detection above the robot. This would allow the robot to be used to explore unknown environments.
- When the robot is being used for research, it is important to know the robot's location so it is known where the images have been taken. Since GPS does not work underwater, acoustic signals should be used instead to determine where the robot is.
- For ease of use, particularly while the user is on a boat, the interface should be as clean and simple as possible. There should be a live camera feed if the robot is being controlled live, as well as a depth meter, a compass and orientation display, and the current location of the robot.
- The robot will be bought by universities and organizations, not individuals. To make the robot more appealing to these buyers, it is better to give it some basic functionalities and the ability to add modular components to give it extra applications, rather than build it from the ground up for just a single application.
===Infographic===
==Discussion==
As there are quite a few underwater robots already out there, a robot built according to our guidelines should be one that is well suited to research, and more specifically to research in coral reefs. Getting in touch with two experts during this project allowed us to take the perspective of the user and understand what needs to be considered when designing such an underwater robot. Since there is only so much you can find online on the functionalities of products, collecting this information by interviewing the users was greatly needed. For us to gather as much valuable information as possible from the interview, it was important to do a lot of research beforehand: by knowing what already exists and by being able to envision ourselves interacting with the robots, higher quality questions could be asked. This preparatory work, however, was slightly overlooked during this project. As the project only lasted 8 weeks and time was spent at the start finding a concrete research question, the interview took place a bit later in the process than initially hoped, and we were only able to interview two researchers, which limits the generality of our observations. This, and the sudden need to change the form of our research due to COVID-19, took away the opportunity to realize our findings in a working robot, which had been our original intention as a way to validate our design, and a lot of early effort had gone into planning how we would approach building this physical prototype. However, as a result, the research that was eventually conducted led to much more thorough guidelines for how such a robot should be made. These guidelines and their corresponding research can function as a basis for future work.
==Process==
Since the project changed direction a few times, here is an overview of the process of this project. Due to shifting priorities over the course of the project, not everything we did ended up being relevant to our final research.
===Phase 1===
- Discussed our strengths, weaknesses and interests in order to find out what we want to do during the project
- Decided that we wanted to focus on coral reefs, a topic with real global importance and that would hopefully be specific enough that we could focus on good design for a specific environment
- Wanted to make (parts of) a physical robot
===Phase 2===
- Did a lot of research into coral reefs, what threatens them and how they are being helped. Some of this research ended up in the problem background and general coral reef information sections, but most was never developed in enough detail to be added to the wiki. It was an exploration of the topic
- Came across the topic of acoustic enrichment, a lot of research was done into how we could use a robot to assist with acoustic enrichment and how it worked exactly
- Decided to make a robot which contains a speaker
- Specified the parts of the robot we would be working on: movement and motor system, design and user interface
- Made sketches to iterate on the shape and design
- Did research on user interfaces and controllers
- Built a small physical prototype of the way the movement of the robot could be controlled
===Phase 3===
This phase is elaborated, discussed and explained on this wiki.
- The acoustic enrichment focus was let go when it became clear that work in that area would mean we would have to focus more on collecting and analyzing audio databases, which is not what interested us or where our specialties lie. Instead, we decided to focus on making a robot suitable to the coral reefs, with the application still being somewhat open. The researcher can add modules according to their needs
- Decided to step away from actually building the robot and to focus more on making guidelines on what the robot should be able to do and which components it needs. This decision was taken due to the narrow time frame (ordering components and waiting for the delivery would take up about a week). Moreover, coming together to work on the prototype was no longer possible due to the COVID-19 virus and restrictions from the TU/e and the government
- Did a lot of research on current robots. Individual robots were studied based on their valuable aspects.
- Research into collision detection and localization was done, since there was a need to really understand how they worked before making conclusions about these areas.
- Did user research. An interview with researchers was done to get user needs.
- Finally, combined the research into general underwater robots and coral reefs from earlier in the project with the information we had received from the interview and research into existing underwater robots to make guidelines for robots useful for coral reef researchers
==References==
- Haereticus Environmental Laboratory (n.d.). "Protect Land + Sea Certification", Haereticus Website, retrieved March 2020.
- Hawkins, J.P., & Roberts, C.M. (1997). "Estimating the carrying capacity of coral reefs for SCUBA diving", Proceedings of the 8th International Coral Reef Symposium, volume 2, pages 1923–1926.